> I will click the green "Play" button, it will change to a blue "Stop" button, as if the application was running, then shortly after silently switches back to the green Play button again, without any visible error and without actually starting the game.
You might want to enable Proton logging and have a look at what it says is going on.
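If it helps, here's roughly how to turn that on (the `PROTON_LOG=1` launch option is from Proton's README; the exact log filename depends on the game's numeric app ID):

```shell
# In Steam: right-click the game -> Properties -> Launch Options, and set:
PROTON_LOG=1 %command%
# On the next launch, Proton writes a debug log to ~/steam-<appid>.log.
# Search it for "err:" lines after the Play button snaps back to green.
```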
> Honestly, don't use debian for gaming, as it is too far behind. Gaming stuff needs a bit more bleeding edge packages.
Please stop spreading this misconception. There are only a tiny handful of packages that a Debian gamer might need to update, and those are generally available in Debian Backports. It's not what I would call a beginner distro for any purpose, but gaming on it is perfectly viable.
I'm having a good time in games, still getting other computing tasks done, and enjoying Debian's low-maintenance respect for my time. AMA.
This is true, but you may be missing out on performance and compatibility improvements from recent ("bleeding edge") drivers. You need recent hardware for this to be relevant.
Generally speaking, you don't need rock-solid stability on a gaming rig or even a "workstation," since uptime isn't really a consideration. I run Debian on my home server, but my other machines, including a backup laptop, all run Arch. A good Arch setup is incredibly solid.
Your computer's bluetooth module could be the source of the trouble. Some people have found that using a different dongle fixed their wireless controller problems.
> The "Nvidia on Linux compatibility" issues are something I wonder if I have side-stepped somehow either by lucky choice of GPUs, or lucky choice of Linux distros.
It could also be a lucky consequence of which games you play and what else you do with your computer.
I was a long-time Nvidia user, and had plenty of problems with their drivers. They ranged from minor annoyances when switching between virtual consoles (which some people never do) to total system freezes when playing a particular game (which some people never play). It would have been easy for someone else to never encounter these problems.
Since switching to AMD a couple years ago, I have been much happier.
I wish the sample text included _underscores_, since I have occasionally found that they disappear with certain combinations of font + size + renderer.
And a run of all the numeric digits 0123456789, to show how their heights align.
And [square brackets], to show how easily they are distinguished from certain other glyphs.
And the vertical | bar, for the same reason.
...
Adobe Source Code Pro and Ubuntu Mono were my finalists. I think my preference would come down to window and font size, since Ubuntu Mono seemed to be narrower and leave more space between lines.
(Also, I kind of rushed the first few comparisons, so it's possible that I prematurely eliminated a typeface that I would have liked more.)
>I think I like the idea, but I can't help wondering if it would have unforeseen consequences.
As I said in a sibling comment, quickie comments on HN should be taken more as mental stimulation and kickoff points for further discussion as opposed to "final bill that has been revised in committee and is going to the floor for a full vote". The details of implementation are certainly critical, and not trivial either! I'm fully in support of thinking through various use cases. But part of why I'm interested in alternate approaches is that they might give us finer grained tools.
>Could this approach undermine the protections afforded by open-source licenses? (IANAL.)
I have actually considered that as well, but didn't add it to a quickie comment. If we take the second path of the approaches I listed there, then, thinking it through, all open source software would fall under a special, even more permissive class of tier 3, in that it already has "fair, reasonable and non-discriminatory" licensing for everyone, right? Except that it's also free. The motivation here is the "advancement of the useful arts & sciences" and the public good, so it could be made explicit that "if you're releasing under an open source license, and thus giving up your standard first, second, and part of your third period of IP rights and monopoly, you're excluded from needing to pay a license fee, because you've already enabled the public to make derivative works for free for decades when they otherwise wouldn't have."
All that said, I'll also ask, fwiw, whether it'd even be that big a deal given the pace of development. I do think it'd be both ideal and justified if OSS had a longer free period; that's still a square deal to the public IMO. But even if an OSS work fell out of protection after 10 years (and keep in mind that a motivated community able to raise even a few thousand dollars could just pay for an extra decade, no problem; the cost doesn't really ramp up for a while [which might itself be considered a flaw?]), how much would it matter that 2016-era OSS (with no changes since, remember; it's a constantly rolling window) could now end up in proprietary works, weighed against all 10-year-old proprietary software getting pushed into the public domain far faster? That's worth some contemplation. Maybe requiring that source/assets be deposited with the Library of Congress or something, and released at the same time the work loses copyright, would be a good balance; having all that available down the road would be a huge win versus what we've seen up until now.
> quickie comments on HN should be taken more as mental stimulation and kickoff points for further discussion
Agreed, and my comment was aimed at exactly that. :)
An example of my concern: What would happen to GPL-licensed software if the copyright expired quickly? Would that allow someone to include it in a proprietary product and (after the short copyright term ended) deny users the freedoms that the GPL is supposed to guarantee? I think those freedoms remain important for much longer than 10 years.
> (and no changes since remember, it's a constantly rolling window)
Do you mean that the copyright term countdown would reset whenever the author makes changes to their work? (I'm not sure if this is the case today.) If so, couldn't someone simply use an earlier version in their proprietary product in order to escape GPL obligations early?
> "if you're releasing under an open source license and thus giving up your standard first, second, and part of your third period of IP rights and monopoly, you're excluded from needing to pay a license fee because you've already enabled the public to make derivative works for free for decades when they wouldn't otherwise anyway."
Yes, I think this makes sense. Thanks for sharing your thoughts.
> quickie comments on HN should be taken more as mental stimulation and kickoff points for further discussion
Indeed.
Setting aside variable details like time frames and cost structures, which can be debated separately, what I found interesting about your suggestion is that it's a mechanism to create an escalating incentive for copyright holders to relinquish copyrights even sooner than the standard copyright period. Currently, no matter what the term length, it costs nothing to sit on a copyright until it expires - so everyone does - even if they never do anything with the copyright. And the copyright persists even if the company goes bankrupt or the copyright holder dies. Thus we end up with zombie copyrights lurking in the dark for works that are almost certainly abandon-ware or orphan-ware, simply because our current system defaults to one-and-done granting of "life of the author + 70 years" for everything.
Obviously, we should dramatically shorten the standard copyright length, but no matter what we shorten it to (10, 15, 20 yrs, etc.) we should consider requiring some recurring renewal before expiration as a separate idea. Even if it's just paying a small processing fee and sending in a simple DIY form, it sets the do-nothing default to "auto-expire" for things the author doesn't care about (and may even have forgotten about). That's a net benefit to society we should evaluate separately from debates about term lengths.
I see your suggestion about automatically escalating the cost of recurring renewal as another separate layer worth considering on its own merits. My guess is that just requiring any recurring renewal would cause around half of all copyrights to auto-expire before reaching their full term, even if the renewal fee stayed at $10. Having the recurring renewal cost escalate, regardless of when the escalation kicks in or how much it escalates, is a mechanism that could achieve even more net positive societal benefit by increasing the incentive to relinquish copyrights sooner.
The common gaming-focused Wine/Proton builds can also use esync (eventfd-based synchronization). IIRC, it doesn't need a patched kernel.
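To make the mechanism concrete, here's a minimal sketch of a semaphore built on eventfd, the primitive esync is based on. This is purely illustrative, not Wine's actual code, and the helper names are mine:

```c
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Create an eventfd-backed semaphore with the given initial count.
 * EFD_SEMAPHORE makes each read() consume exactly one count. */
static int sem_create(unsigned int count) {
    return eventfd(count, EFD_SEMAPHORE);
}

/* Acquire: blocks until the count is nonzero, then decrements it. */
static int sem_acquire(int fd) {
    uint64_t val;
    return read(fd, &val, sizeof(val)) == sizeof(val) ? 0 : -1;
}

/* Release: adds one to the count, waking one blocked waiter. */
static int sem_release(int fd) {
    uint64_t one = 1;
    return write(fd, &one, sizeof(one)) == sizeof(one) ? 0 : -1;
}
```

Since each sync object is just a file descriptor, a game with thousands of Windows sync objects burns thousands of fds, which is why esync setups typically raise the open-file limit.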
The point being that these massive speed gains will probably not be seen by most people as you suggest, because most Linux gamers already have access to either esync or fsync.
Maybe you are right about esync but anyway I would also gather a lot of people don’t have that either. At least personally I don’t bother with custom proton builds or whatever so if Valve didn’t enable that on their build then I don’t have it.
> if Valve didn’t enable that on their build then I don’t have it.
The Proton build is Valve's build. It supports both fsync and esync, the latter of which does not require a kernel patch. If you're gaming on Linux with Steam, you're probably already using it.
I would assume most of them? I'd be surprised if distros like Debian, Ubuntu, Fedora, etc. would ship non-mainline kernel features like that.
Sure, gaming-focused distros, or distros like Arch or Gentoo might (optionally or otherwise), but mainstream? Probably not.
Of course, esync doesn't require kernel patches, so I imagine that was more broadly out there. But it sounds like fsync got you performance pretty close to what ntsync can do, but esync was quite a bit behind both? With vanilla being quite a bit behind esync?
(Also, jeez, fsync, what a terrible name. fsync is already a syscall that flushes file data to disk. So confusing.)
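For anyone curious which of these their own box could use, a rough probe along these lines should work. Caveat: the `/dev/ntsync` path, the kernel 6.14 mainlining, and the ~524288 figure are from my memory, not from this thread:

```c
#include <unistd.h>
#include <sys/resource.h>

/* ntsync (mainlined around kernel 6.14, IIRC) exposes a char device. */
static int ntsync_present(void) {
    return access("/dev/ntsync", F_OK) == 0;
}

/* esync needs no kernel support at all, just a generous fd limit;
 * Wine warns when the hard limit is below roughly 524288. */
static unsigned long long fd_hard_limit(void) {
    struct rlimit rl;
    return getrlimit(RLIMIT_NOFILE, &rl) == 0
        ? (unsigned long long)rl.rlim_max : 0;
}
```

Print `ntsync_present()` and `fd_hard_limit()` and you know whether you're in the ntsync, esync, or plain-vanilla bucket.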
Last I checked, every distro of note had its own patchset that included stuff outside the vanilla kernel tree. Did that change? I admit I haven't looked at any of that in... oh, 15 years or so.
> He fed only the API and the test suite to Claude and asked it to reimplement the library from scratch.
From GPL2:
> The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable.
Is a project's test suite not considered part of its source code? When I make modifications to a project, its test cases are very much a part of that process.
If the test suite is part of this library's source code, and Claude was fed the test suite or interface definition files, is the output not considered a work based on the library under the terms of LGPL 2.1?
Legally, using the tests to help create the reimplementation is fine.
However, it seems possible that you can't redistribute the same tests under the MIT license. So the MIT-licensed reimplementation might need to ship as source code only, without the tests; or the tests could be distributed alongside it but remain under the LGPL rather than MIT. In practice it doesn't matter much, since compiled software won't include the tests anyway.
Sorry, I misspoke. Transformation is what makes the LLM itself legal -- its training data is sufficiently transformed into weights.
And so, a work being sufficiently transformative is one way in which copyright no longer applies, but that's not the case here specifically. The specific case here is essentially just a clean-room reimplementation (though technically less "clean", but still presumably the same legally). But the end result is still a completely different expression of underlying non-copyrightable ideas.
And in both cases, it doesn't matter what the original license was. If a resulting work is sufficiently transformative or a reimplementation, copyright no longer applies, so the license no longer applies.
The library's test suite and interfaces were apparently used directly, not transformed. If either of those are considered part of the library's source code, as the license's wording seems to suggest, then I think output from their use could be considered a work based on the library as defined in the license.
Google LLC v Oracle America assumed (though didn't establish) that APIs are copyrightable... BUT that developing against them falls under fair use, as long as the function implementations are independent.
Test suites are again generally considered copyrightable... but the behavior being tested is not.
So no, it's not considered to be a work based on the library. This seems pretty clear-cut in US law by now.
Also, the LGPL text doesn't say "work based on the library". It says "If you modify a copy of the Library", and this is not a "combined work" either. And the whole point is that this is not a modified copy -- it's a reimplementation.
In theory, a license could be written to prevent its tests from being run against software not derived from the original, i.e. clean-room reimplementations. In practice, it remains dubious whether any court would uphold that. And it would also be trivial to get around: take advantage of fair use to re-express the tests in e.g. plain English (or any specification language), then re-implement those back into new test code. Because again, tested behaviors are not copyrightable.
> Google LLC v Oracle America assumed (though didn't establish) that API's are copyrightable... BUT that developing against them falls under fair use, as long as the function implementations are independent.
That was only one prong of the four fair use considerations in that case. Look at Breyer's opinion: it does not say that copying APIs is fair use whenever implementations are independent, just that Google's specific usage in that instance met the four fair use considerations.
There are likely situations in which copying APIs is not fair use even if the function implementations are independent; Breyer also looked at the substantiality of the code copied from Java, market effects, and the purpose and character of the use.
If your goal is to copy APIs that make up a substantial amount of code, and to reimplement the functions in order to skirt licenses and compete directly with (or replace) the source work, those considerations might not be met and it might not be fair use. Breyer noted that Google copied a tiny fraction of the code (<1%), that its purpose was not to compete directly with Oracle but to build a mobile OS platform, and that Google's reimplementation was not considered a replacement for Java.
Google v Oracle assumed, without deciding, that APIs can fall under copyright (the contrary was widely thought before). However, it ruled that, in that specific case, fair use applied, partly because of interoperability concerns. That's the important part of this case: fair use is never automatic; it is assessed case by case.
Regarding chardet, I'm not sure "I wanted to circumvent the license" is a good way to argue fair use.