
Using an LLM for this kind of thing is like using cheats or aim assist in online games. So yes, this is a prime example of enshittification.


enshittification isn't "things become worse" - it's the specific process of how services worsen in 3 stages:

> Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die. I call this enshittification, and it is a seemingly inevitable consequence arising from the combination of the ease of changing how a platform allocates value, combined with the nature of a "two-sided market", where a platform sits between buyers and sellers, holding each hostage to the other, raking off an ever-larger share of the value that passes between them.


Just because that's how it was defined in a blog post doesn't mean that's the only way to use it.


We already have plenty of words for the ancient and generic concept of things-getting-worse, like "ruining".


I think the intent is important. Using LLMs to do well on the public leaderboard is like using cheats/aim assist. But learning how to use LLMs to solve complex puzzles independent of any sense of "competition" is more like when people train neural networks to drive a car in GTA or something - it's not hurting anyone and it can be a real learning experience that leads to other interesting byproducts.

But, yeah, don't use LLMs to try to get 9-second solve times on the public leaderboard; it's not in the spirit of the thing and is more like taking a dictionary to a spelling bee.



