The entire Rust ecosystem for gamedev is kind of like that - it was only recently that people really started writing games entirely in Rust, and even now there are very few actual released commercial games out there. I think there's a bit of a loop happening where it's like, until Rust is considered "stable" for serious gamedev work, libraries will be developed in this kind of "unstable" way where breaking changes should be expected at any time, but at the same time, that instability is making it less of a serious contender for gamedev work.
Forget about gamedev. An anecdote: a friend wrote a personal Rust project that works purely with websites' APIs and some minor file operations (basically a scraper).
And I, as his only user, have to constantly update my local Rust installation to the newest release to be able to compile his newest codebase.
I understand it's more about him being on the bleeding edge than about the language itself alone -- but it still feels insane to me. In most other languages, you couldn't make your code this backwards-incompatible even if you actively tried.
It's not more, it's 100% on your friend, I'm afraid. One is not supposed to jump to the newest version every time simply because one can. Updating the minimum supported Rust version is a breaking change that should not be done lightly.
> Updating the minimum supported Rust version is a breaking change
This is not the majority sentiment, for better or for worse. The project stopped publishing "what version of Rust do you use most" in the survey results in 2020 (which is a shame, imho), but back then the vast, vast majority of folks used the latest stable, and very few used a previous stable release. I have no real reason to believe this overall trend has changed since then.
This is especially true for an application, which this project seems to be. Libraries may have more of a reason to explicitly lag behind the latest, but there's not really a good reason to bother if nobody else is going to depend on your code.
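For what it's worth, Cargo does let a project declare its minimum supported Rust version explicitly via the `rust-version` field, so consumers on an older toolchain at least get a clear error instead of a confusing compile failure. A minimal sketch (the crate name here is made up):

```toml
# Cargo.toml
[package]
name = "scraper-tool"   # hypothetical crate name
version = "0.1.0"
edition = "2021"
# Minimum supported Rust version: older toolchains will refuse
# to build this crate with an explicit error pointing at this field.
rust-version = "1.70"
```

Whether bumping that field counts as a breaking change is exactly the disagreement in this thread, but at least it makes the policy visible.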
That makes sense to me. I usually use asdf to version Rust in personal projects, so I'm not always on the latest. But when I work on a new release I'll almost always update rust and any of my other dependencies, as I'd otherwise never keep them up-to-date. Or if a new language feature I want to try just came out.
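On pinning: besides asdf, rustup itself supports a per-project toolchain file, which makes "which version is this project on" explicit and automatic. A sketch:

```toml
# rust-toolchain.toml (checked into the project root)
# rustup reads this file and automatically installs and uses the
# pinned toolchain whenever you run cargo in this directory.
[toolchain]
channel = "1.75.0"
components = ["clippy", "rustfmt"]
```

Anyone building the project then gets the same compiler, regardless of what their global default is.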
And God forbid you fall behind on React Native major releases in a project. I think the Expo project's solution to this is for all of the native configuration to be done via plugins and cross-platform config[0]. Otherwise you'll be dealing with large diffs of native Android/iOS code that would have even native mobile devs tread carefully.
> Updating the minimum supported Rust version is a breaking change that should not be done lightly.
Someone should tell that to Rust library devs, because from what I understand, their packaging ecosystem pretty much always assumes that all libraries are running on the latest version of Rust.
There are no "commonly used versions" or LTS builds for Rust, so it seems every package maintainer just supports the latest, and everyone is up shit creek if a new version hard-breaks code written for older Rust. With how micropackage-oriented Rust is, you practically have to jump to the newest version if you want your builds to keep working, since I doubt you'll be using stdlib-only Rust.
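Part of why the ecosystem floats forward like this is that Cargo's default version requirement is a caret requirement, so a fresh resolve happily picks up the newest semver-compatible release (crate names below are just common examples):

```toml
# Cargo.toml
[dependencies]
# "1.0" is shorthand for "^1.0": any version >= 1.0.0 and < 2.0.0.
# A fresh resolve (or `cargo update`) pulls the newest 1.x release,
# which may itself have raised its minimum supported Rust version.
serde = "1.0"

# To actually freeze a dependency you have to ask for it explicitly:
anyhow = "=1.0.75"
```

So unless you pin exact versions or commit a lockfile, "supports the latest" is effectively the default behavior.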
Compare and contrast with every (anecdotal) Rust user's favorite bugbear, Python, where if you're building a library it's pretty commonly assumed that you follow one of the two major deprecation points for Python versions (end of bugfixes or end of security updates). Most major libraries follow one of those two for the internal features (and therefore the lowest version) they use.
I've only done a few hello-world-style projects in Rust, and my Python experience is also almost 8 years old now, so I might be wrong... But doesn't Rust pull its dependencies as bytecode, versus Python (which is a scripting language at its core), which pulls the source code?
The bytecode would only ever work with the VM it was compiled against, I think, which sounds to me like the real issue here...?
I don't think you'd have this issue if you fetched your Rust dependencies via git and then built them locally.
And pre-building for every version is expensive, which is likely why the Rust ecosystem doesn't really do it right now.
And I doubt that's gonna change unless a MAMA (Microsoft, Apple, Meta, Alphabet) company "embraces" it... EEE style.
Rust does not have a VM, and doesn't use bytecode.
That being said, it could in theory serve them as binary code, which, yes, would then mean storing artifacts per version and per architecture, which would be quite a lot more than the source that's stored now.
> Updating the minimum supported Rust version is a breaking change that should not be done lightly.
Why? As I see it, requiring a higher minor version of a dependency, including the compiler, is not a breaking change (and does not warrant a major version bump wrt semver), as there should be no problem in bumping the minor version of any dependency, even if it's shared by different packages.
It is also about how easy Rust makes it to use libraries, and to depend upon external libraries as part of your public API. C in particular makes this pretty inconvenient,
so a lot of projects include data structures and utilities that might otherwise be pulled in via a library. That lends itself toward less reliance on external deps, but also more internal stability.
At the very least I can say that an experience common in Rust, where dependency X needs to be upgraded in lockstep (to the same version) across multiple external dependencies with different maintainers, is something I've basically never run into with C-based languages.
At least, I don't think it's just that things need time to stabilize; the packaging system itself causes network effects, which you don't get the joy of experiencing if you don't have one.
> Rust has lock files, so breaks will only happen when you decide they are allowed to happen, by running cargo update.
great in theory, but `cargo install` will straight up ignore lockfiles by default[0], meaning trying to install a binary package is a roulette of "did this tiny package update before everyone in the dependency tree noticed"
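A sketch of the difference, assuming some published binary crate (`some-cli` is a placeholder name):

```shell
# By default, cargo install re-resolves all dependencies to their
# newest semver-compatible versions, ignoring the crate's Cargo.lock:
cargo install some-cli

# Passing --locked makes cargo respect the Cargo.lock the author
# published with, reproducing the exact versions they tested against:
cargo install --locked some-cli
```

So the roulette is opt-out rather than opt-in, which is easy to miss.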
This is the right answer. Crates.io comes with a whole mixed bag of experiences. It increases initial project development velocity at the expense of long-term velocity (and sanity). After 2.5 years of doing Rust full-time, it's the one thing I'd look back on and say I'd rather see done entirely differently.
What do you think some of the alternatives for library distribution and management are that would be more successful? I am an unabashed proponent of the C-style "manage dependencies yourself in a manner consistent with your own work", but many people are 100% averse to that position. Do you think a middle ground would be possible?
Then vendor your dependencies in Rust. If you don't want libraries to change and the dependency maintainer won't commit to a stable API, you have to vendor it.
I am currently working on a Rust project of about 70 crates that builds with Bazel. I ended up vendoring everything because of very subtle bugs in rules_rust for Bazel that caught me off guard and seemed to happen only on non-vendored deps. Fun fact: all Rust code in rules_rust is vendored by default, so it's clearly the way to go.
Also, previously I had weird quirks occurring randomly, but after switching to vendored deps, everything works stably, reliably, and as expected.
I don't agree with how often certain crates produce breaking changes, but I found that with Bazel, at least, stuff works all the time regardless. Also, builds are noticeably faster, because downloading and building sub-dependencies can take considerable time in large projects.
I prefer "vendored" self managed dependencies as well. However the Rust ecosystem will fight fight fight you in this regard.
I am leaning towards starting future Rust personal projects on buck or bazel or gn, instead of Cargo. And checked-in vendored dependencies.
It's not a popular position, but I have seen the crates explosion in larger projects go so wrong... And 10 years at Google taught me that vendored third party deps is an entirely reasonable approach for that.
The vendoring support in Cargo isn't really what I'm looking for. It's more "download all my deps from crates.io and make local copies" and it has various limitations I kept running into which I don't have time this morning to go back and document. It really felt like an afterthought and not at all like the third_party workflow that people who have worked with the Google monorepo would be used to.
I am interested in what vendoring means other than “put all of my dependencies’ code in my repo” but if you don’t have time, that’s chill. I’ve only seen open source projects using buck/bazel and haven’t ever seen google3, so I’m sure I’m missing something.
Not OP but a difference is in who manages the updating of a vendored dep. You don’t want to update every time there’s a release, but you don’t want to never update either.
If upstream (which you’ve vendored) releases a security fix, how does your system capture and act on that event?
If vendoring just means we periodically pull a copy of upstream into our repo, then you can have an unbounded window where you don't know there's a vuln and therefore haven't considered whether you need to act.
This is obv different from the situation where we all just `cargo install` (without `--locked`), and different from C-style dep mgmt where we get sec fixes for many libs via our OS patching.
> If upstream (which you’ve vendored) releases a security fix, how does your system capture and act on that event?
You still retain the Cargo.toml and Cargo.lock, so the exact same way as if you didn't vendor: `cargo-audit` would inform you, you'd update the version, and re-vendor.
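For anyone curious, that update-and-re-vendor loop is mostly mechanical; a sketch of the usual commands (the audit step assumes cargo-audit is installed, and the crate name is a placeholder):

```shell
# Check the lockfile against the RustSec advisory database:
cargo audit

# Bump the affected dependency in Cargo.lock:
cargo update -p some-vulnerable-crate   # placeholder crate name

# Re-vendor: copies all dependency sources into ./vendor and prints
# the [source] replacement config that points cargo at the local copies:
cargo vendor > .cargo/config.toml
```

The vendored tree is regenerated from the lockfile, so the repo diff is just the updated crate's sources.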
The cargo-audit I referred to in my previous post is that tooling; it's commonly run regularly in CI.
But, also, this is pretty far afield from my original question: I understand why keeping copies of your dependencies can introduce various things you should handle, but my original question was "what is vendoring your dependencies if not 'keeping a copy of the source code of your dependencies in the repository'"? That's my understanding of the definition of "vendoring," so I was curious what my original parent's definition was.
There's nothing preventing you from using the defined-by-spec-but-not-actually-implemented C++23 methods, either. You can write perfectly spec-compliant C++ code that will only work on the latest version of MS C++ because other compilers don't support the feature yet. Shortly after new C++ specs are released, it's even possible to write spec-compliant code that no production compiler can even compile!
It's just that developers in the slow languages don't do that often.
With the guarantee of being able to build older projects (the edition is specified in the Cargo.toml file) and only one real mainstream compiler (the "can compile code but lacks any of the language checks" GCC version doesn't really count), there are few downsides to always having the latest compiler, something other languages sometimes struggle with.
Coming from Rust, I made the foolish mistake of installing the latest Clang and GCC, only to find out that tons of software suddenly stopped compiling because apparently the code requires old compiler bugs/features to work (and these are not emulated in newer versions). I've had similar issues with Java, where JDK 17 or 21 could not compile Java 8 projects without modifications to the project because of the new modules feature.
In an environment where updating your compiler toolkit is basically worry-free, it's a lot easier to use recent or even unstable functions. This invites developers to make use of recently added APIs, whereas developers from other ecosystems will wait months or years before even considering new APIs that have been stabilised.
I don't understand, it's recommended practice to update your Rust install to the latest whenever there's a new version, because Rust itself is wholly backwards compatible, so 1.0 code should continue to compile with whatever the latest version is. Are you saying the libraries he's using are deprecating functionality instead of the core language? If so, then yeah I've had that issue too with certain libraries like clap.
I mean, so what? I think in actuality what they're implying is that one shouldn't update Rust to latest, even though that's the recommendation. So it shouldn't matter what the friend does, as long as they're always on latest Rust. In fact, the friend is in the right, in this scenario.
> It might not be obvious how much effort it takes to manage bugfixes in dependencies where every few weeks there's a new breaking API change in egui/winit/wgpu, and while these probably seem extremely minor to those who spend all their time building an engine on top of said libraries, sinking a day or two in figuring things out and fixing stuff on every release is a gigantic waste of time in my view.
But not really, because the person we are talking about is, if I understand correctly, a user of the software their friend wrote, not someone who is implementing any features themselves and thus needs to rely on a stable API. All they need to do is to update their Rust version, which is really the same as any other software that must update over time.
This same sentiment is also echoed by Steve Klabnik here [0], it seems.
Can't agree because updating code to utilize new shiny features frequently impacts those who use the code because of leaky abstractions. Or even deliberate breaking API changes to accommodate said new language features.
How does it impact the user? Again, this is an application, not a library, so I really don't see how, just as using some desktop application that the developer updates should not affect the end user themselves. If this indeed were a library, then sure, I'd agree, but by my inference, it does not seem so.