> extremely hostile to anything resembling stability
Remix has one of the best stars/issue ratios I've ever seen in a large project. They have 27k stars for 228 issues (118 stars/issue). Meanwhile, Next.js has 121k stars for 2678 issues (45 stars/issue).
I want to preface this by saying I don't have a dog in this fight. I'm not a big React fan in general, although I write it for work, and I have no opinions on specific routers or third-party state libraries being better or worse than others.
With that out of the way, where is the research showing that this is a good indicator of software reliability? Highly starred projects are left unfinished and abandoned all the time, and the thread above goes into detail about upgrade challenges, so stars alone are maybe not such a good indicator of reliability. What about using a stars-to-issues ratio? I've personally seen multiple projects with 10k+ stars that hide their bug reports in the Discussions tab. Terraform has a much worse ratio, but would you describe it as much less reliable?
I agree that stars alone don't equate to reliability, which is why I used the (very much imperfect, but better) stars/issue ratio as part of the evidence.
While I'm unaware of research supporting stars/issue, it's a useful rule of thumb at the extremes. With React (318), if something isn't working, your code is almost certainly wrong. With Bun (24), if something isn't working, it's likely Bun (sorry, Jarred).
The assertion that the Remix creators are "extremely hostile to anything resembling stability" should place Remix at the lowest extreme, but that isn't so.
If you use a metric or rule of thumb for evaluating reliability, I'm curious to hear it.
Gotcha, so it sounds like that's a precursor to a deeper inquiry. I'm concerned there may be a strong bias in favor of popularity, though. Maybe that's less of an issue for developers already shopping within a specific stack/philosophy.
I tend to ignore stars, because I've been burned by popular and unpopular projects alike, and they also seem to be gamed at times. For an initial evaluation of something I might actually use, I look at who the owners are, when the project was last updated, and I explore the issue tracker (including closed tickets). That is usually enough to identify projects with major problems. If they pass the initial audit, then the process continues.
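For illustration, that first pass can be scripted against the GitHub REST API. A minimal sketch, assuming Node 18+ for the built-in fetch; the target repo is just an example, and what thresholds you apply is up to you:

```js
// audit.mjs — first-pass repo audit via the GitHub REST API (Node 18+).
// The repo below is an example target, not a recommendation.
const repo = "remix-run/remix";

const res = await fetch(`https://api.github.com/repos/${repo}`, {
  headers: { Accept: "application/vnd.github+json" },
});
if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);
const data = await res.json();

// Caveat: open_issues_count includes open pull requests, so a raw
// stars/issue ratio computed from it is only a rough signal.
const ratio = data.stargazers_count / data.open_issues_count;
const daysSincePush = (Date.now() - Date.parse(data.pushed_at)) / 86_400_000;

console.log(repo);
console.log(`  stars:           ${data.stargazers_count}`);
console.log(`  open issues+PRs: ${data.open_issues_count}`);
console.log(`  stars/issue:     ${ratio.toFixed(0)}`);
console.log(`  last push:       ${daysSincePush.toFixed(0)} days ago`);
console.log(`  archived:        ${data.archived}`);
```

That open_issues_count quirk (it lumps in open pull requests) is one more reason raw stars/issue comparisons between repos are only a rough signal.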
You do have to upgrade. At some point incompatibilities between libraries arise, and if they're neglected for too long it becomes impossible to update everything. Keeping dependencies up to date is an important part of software maintenance.
In our case we wanted to upgrade another dependency (react-admin), which necessitated a react-router upgrade. Since we used react-router throughout the rest of our application, it was a reasonable amount of work, and sticking with the old version was not an option.
I don't think issue counts are a good measure for popular repos. The majority of them are just basic skill issues and an inability to read two pages of docs.
The Next.js community especially seems to be full of people who don't know anything about web development, complaining because they don't understand how cookies work or something...
Using a "stars/issue ratio" as a basis for your point is just an indicator that you're too deep into the JS ecosystem. It reads into weird GitHub metrics to a unique degree.
Remix also has a robust future flag system to ensure nothing breaks during transitions. https://github.com/remix-run/remix/issues/4598
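For a concrete picture, opting into upcoming behavior in a v1-era remix.config.js looked roughly like this. A sketch only: the flag names here are examples recalled from the v1 docs, so consult the linked issue for the authoritative list:

```js
// remix.config.js — opt into v2 behavior one future flag at a time (sketch).
/** @type {import('@remix-run/dev').AppConfig} */
module.exports = {
  future: {
    v2_routeConvention: true, // adopt the v2 flat-route convention early
    v2_meta: true,            // adopt the v2 meta API early
    v2_errorBoundary: true,   // adopt the unified v2 ErrorBoundary early
  },
};
```

The point of the pattern is that each breaking change can be absorbed separately, on your schedule, before the major version lands.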
For larger changes (like the controversial flat routes), they even maintain the old behavior as a separate package so you don't have to migrate. https://www.npmjs.com/package/@remix-run/v1-route-convention
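Based on that package's README, wiring the old convention back in is a few lines in remix.config.js; a sketch, assuming createRoutesFromFolders is still the exported helper:

```js
// remix.config.js — keep the v1 route convention on newer Remix (sketch).
const { createRoutesFromFolders } = require("@remix-run/v1-route-convention");

/** @type {import('@remix-run/dev').AppConfig} */
module.exports = {
  future: {
    v2_routeConvention: true, // silences the deprecation warning in v1.15+
  },
  routes(defineRoutes) {
    // defineRoutes is supplied by Remix; this reuses the v1 folder convention
    return createRoutesFromFolders(defineRoutes);
  },
};
```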
I don't think the evidence supports your assertion.