I don't think the original comment was saying this isn't a problem, but rather that flagging it as an LLM hallucination is a much more serious allegation. In this case, it also seems like it was done to market a paid product, which makes the collateral damage less tolerable in my opinion.
> Papers should be carefully crafted, not churned out.
I think you can say the same thing for code and yet, even with code review, bugs slip by. People aren't perfect and problems happen. Trying to prevent 100% of problems is usually a bad cost/benefit trade-off.
I agree that there is a lot of hyperbole thrown around here, and it's possible to keep using some hardware for a long time or to sell it and recover some of the cost. But my experience planning compute at large companies is that spending money on hardware and upgrading can often save money long term.
Even assuming your compute demands stay fixed, it's possible that a future generation of accelerator will be sufficiently more power/cooling efficient for your workload that upgrading is a positive return on investment, more so when you take into account that you can start depreciating the new hardware again.
If your compute demands aren't fixed, you have to work around limited floor space/electricity/cooling capacity/network capacity/backup generators/etc., and so moving to the next generation is required to meet demand without extremely expensive (and often slow) infrastructure projects.
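As a rough back-of-envelope illustration of the fixed-demand case (every number and name below is hypothetical, not from any real deployment), the power savings alone can pay for the upgrade before depreciation is even considered:

    # Hypothetical upgrade ROI sketch; all figures are made up for illustration.
    old_power_kw = 10.0          # draw of the old accelerators for the workload
    new_power_kw = 4.0           # newer generation doing the same work
    power_cost_per_kwh = 0.12    # electricity, with cooling overhead folded in
    hours_per_year = 24 * 365

    annual_savings = (old_power_kw - new_power_kw) * hours_per_year * power_cost_per_kwh
    upgrade_cost = 15_000.0      # net of resale value of the old hardware

    print(f"annual power savings: ${annual_savings:,.0f}")
    print(f"simple payback: {upgrade_cost / annual_savings:.1f} years")
    # Restarting depreciation on the new hardware shortens the effective payback further.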
Sure, but I don't think most people here are objecting to the obvious "3 years is enough for enterprise GPUs to become totally obsolete for cutting-edge workloads" point. They're just objecting to the rather bizarre notion that the hardware itself might physically break in that timeframe. Now, it would be one thing if that notion were supported by actual reliability studies drawn from that same environment, like the Backblaze HDD lifecycle analyses. But instead we're just getting these weird rumors.
I agree that's a strange notion that would require some evidence, and I do see it in some other threads, but looking up through the parent comments it seems people are discussing economic usefulness, so that is what I'm responding to.
I think this is true with an ARM Mac (and it would be tricky to fix that; props to the Asahi folks for doing so much), but for a lot of other hardware (recent Dell/Asus/Lenovo, Framework, BYO desktops) I find Linux support complete. I'm sure there is hardware out there that it struggles with, but I've not had to deal with any issues myself for a few years now.
I think with the right parental guidance/supervision this could be a very fun toy.
From the website it seems like a great way to generate some black and white outlines that kids can still color in. If used like that it seems almost strictly more creative than a coloring book, no? There are plenty of other ways kids can express creativity with pre-made art too. Maybe they use them to illustrate a story they dreamed up? Maybe they decorate something they built with them?
Also, some children might want to have fun and be creative in ways that don't involve visual arts. I was never particularly interested in coloring or drawing and still believe myself to be a pretty creative individual. I don't think my parents buying me some stickers robbed me of any critical experience.
I agree with you: American schools seem particularly bad at breeding these sorts of unhealthy dynamics, and we shouldn't accept it as normal. But even in a better environment, unstructured social interaction with peers still seems like a useful part of growing up/socialization, and it shouldn't be replaced by kids sucked into their phones.
The beef isn't with systemd upstream, which already has a very simple/boring workaround for this; it's with the Debian package maintainer (some people here are wearing multiple hats).
Really, the whole raison d'être of Debian is to move at this pace to prioritize stability/compatibility. If you don't like that philosophy there are other distros, but a package maintainer's primary job is to repackage software for that distro (which presumably users have chosen for a reason), not to comply with upstream.
Agreed, this is a common division of labor and it simplifies things. It's not entirely clear in the postmortem, but I speculate that the conflation of duties (i.e., the enactor also being responsible for janitor duty on stale plans) might have been a contributing factor.
I would divide these as functions inside a monolithic executable. At most, emit the plan to a file on disk as an optional "--whatif" path.
Distributed systems with files as a communication medium are much more complex than programmers think, with far more failure modes than they can imagine.
Doing it inside a single binary gets rid of some of the nice observability features you get "for free" by breaking it up, and it could complicate things quite a bit (more code paths, a flag for a "don't make a plan, use the last plan" mode, a flag for a "use this human-generated plan" mode). Very few things are a free lunch, but I've used this pattern numerous times and quite like it. I ran a system that used a MIP model to do capacity planning, and separating planning from executing the plan was very useful for us.
I think the communications piece depends on what other systems you have around you to build on; it's unlikely this planner/executor is completely freestanding. Some companies have large distributed filesystems with well-known/tested semantics, schedulers that launch jobs when files appear, ~free access to a database with strict serializability where they can store a serialized version of the plan, etc.
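To make the split concrete, here's a minimal Python sketch of the pattern being described: a planner that writes a serialized plan to disk and an executor that reads it back, with a "--whatif" mode that stops after planning. All names here (make_plan, apply_action, plan.json) are hypothetical, not anyone's actual system.

    # Minimal planner/executor sketch with a file as the hand-off point.
    import argparse
    import json


    def make_plan(current, desired):
        """Compute the actions needed to move from current to desired state."""
        return [{"op": "add", "item": x} for x in desired if x not in current]


    def apply_action(action):
        """Enact a single planned action (stubbed out here)."""
        print(f"applying {action}")


    def main():
        parser = argparse.ArgumentParser()
        parser.add_argument("--whatif", action="store_true",
                            help="write the plan to disk but do not execute it")
        parser.add_argument("--plan-file", default="plan.json")
        args = parser.parse_args()

        # Planning phase: pure computation, easy to test and inspect.
        plan = make_plan(current=["a"], desired=["a", "b", "c"])
        with open(args.plan_file, "w") as f:
            json.dump(plan, f, indent=2)

        if args.whatif:
            return  # a human (or another system) can review plan.json first

        # Execution phase: reads the plan back from disk, so the same code path
        # works whether the plan came from this run or from a reviewed/edited file.
        with open(args.plan_file) as f:
            for action in json.load(f):
                apply_action(action)


    if __name__ == "__main__":
        main()

The nice property of reading the plan back from the file even in the normal path is that execution behaves identically whether the plan came from this run, a previous run, or a human-edited file.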
If the only way I interact with a service is a single app, then I want that app to blend into my phone. I don't care whether the Uber app on Android and iOS are the same; I only ever see one of them. If I have to use a service on many different platforms, I sometimes prefer having a consistent design language, e.g. I like that Slack has a consistent sidebar interface everywhere. I want to go from the browser to tablet to phone and not have anything in a different spot.