I thought this was a pretty useful project for embedded work, and despite it being a few years old I'd never heard of it.
The documentation is a bit... eclectic.
Tup is a build system that keeps track of which files a command accesses. So if you run make from tup, it can see every file your makefile reads and re-run make if any of those files change. That's not very useful in that example, but when you instead have tup run gcc to compile each source file, it can automatically rebuild just the files that need to be rebuilt.
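For a sense of the syntax, a minimal Tupfile for that per-file gcc case might look roughly like this (sketched from tup's documented rule format; the output name is made up):

```
# one rule per .c file: %f is the input, %o the declared output, %B the basename
: foreach *.c |> gcc -c %f -o %o |> %B.o
# link all the objects into one binary
: *.o |> gcc %f -o %o |> hello
```

tup instruments the gcc invocations, so headers pulled in by each `.c` file become dependencies automatically, without being listed in the rule.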
Building on top of that, they've set up a git repo with the appropriate "tupfiles" (makefiles) to build linux and enough userspace to get by. It's not so much a linux distribution as a build-system that can build an entire linux distro from source.
The big difference between this and a normal build system is that all your source files can be in the same tree as git submodules, and `tup upd` will rebuild the entire thing, only recompiling code whose source files or shared libraries have changed.
I think it could be a pretty powerful system for a lot of projects; I'm particularly interested in using it for building Android images and firmware for ereaders.
I wouldn't say a Linux distro package system is largely defined by how software is built from source, but it can be, and it at least matters in all cases.
I would say the package dependency tree is probably the most important, defining feature of the package system. Second to that is delivery (and the fact it's so high on the list is a testament to its importance, as I can remember a time before it when we just had RPM and dpkg). Some package managers also build from source, but that's a quirk of how those operate, not really a core requirement to delivering binaries to the end user.
Even FreeBSD's ports build system is largely defined by its dependency tree and the rules they have in place to make fetching and building easy. Whether binaries are delivered from it or source is compiled locally is somewhat transparent to the end user, other than the extra time it takes.
I think GP was saying that even the Linux package managers that deliver binaries tend to build their own packages with a distro-specific build system rather than just repackaging upstream binaries.
Ah, yes. It's entirely possible I missed the point. :)
Although repackaging upstream can be a fuzzy concept. E.g. Ubuntu's early days. They've always repackaged Debian packages, but I believe there was less differentiation in the beginning, before Ubuntu had many of its own initiatives and projects.
The big difference being that in order to get any real benefit you can't use upstream's build system; you need to use tup. I'd say it's not a linux distro yet.
I would say that’s definitely a huge benefit to the end users (though not the maintainers, that’s for sure) as “one build system to rule them all” is pretty much the dream, right?
Presuming tup is good, of course. I played with it but I hated the fuse dependency and ended up switching to meson (though I despise the build-time python dependency, see [0]) for cross-platform projects and plain jane gmake or pmake Makefiles for others. I also hand code ninja build files for stuff that’s not too crazy and platform dependent, basically for things that gulp would normally be in control of.
I really wanted to like tup, but I feel like it's just missing too many 'basic' things, along with being a bit annoying to set up and use. The big thing for me is that there is no `tup clean`, and also no way to create phony targets or multiple targets like with `make` - and they're not excluded for complexity reasons, but because the authors just don't think that should be the job of the build system [0]. Which, they might be right, but if I can't actually replace my full Makefiles with Tupfiles, but instead have to replace them with Tupfiles + Makefile/shellscripts/installscripts/etc., then Tup is a lot less appealing to me. This is especially an issue since the scripts have no way of accessing the Tup information.
With that said, I was curious how this project got around the 'install' problem, but it seems like they were largely able to just side-step the issue. They build an `initrd` from tup, and the `initrd` Tupfile copies all of the compiled executables into their proper folders before generating the `initrd`. So the projects themselves can't install themselves directly onto a system.
I agree with you, it's not there yet. It's the same problem over and over again with all these Make alternatives: close but if only xxx or if only not yyy.
I agree that the installation approach is a total copout, I would have hardlinked all the files from their installation destination to the build artifacts. (presuming state is intended to be thrown away on rebuild.)
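A rough sketch of the hardlink idea, with made-up paths (tup isn't involved here; this is just plain hardlinking, so it only works within one filesystem):

```shell
# Sketch with made-up paths: link the install-tree name to the build
# artifact instead of copying it into a staging dir for the initrd.
mkdir -p build rootfs/usr/bin
printf 'fake compiled binary\n' > build/myprog   # stand-in for a build output
ln -f build/myprog rootfs/usr/bin/myprog         # hardlink, not a copy
# Both paths now name the same inode, so there's no second copy to keep in sync.
stat -c %i build/myprog rootfs/usr/bin/myprog
```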
I think what bugs me the most is that `tup` has been around for years now (The v0.1 release was in 2009), and it is almost exactly what I want - it keeps a very similar setup to `make`, but much better support for the stuff `make` makes a bit painful to do. If it had support for `tup clean` and for defining phony targets like `install` I would likely be using Tupfiles for all of my projects.
It's extremely unlikely that Tup will ever provide a "clean" command. They've expressed a philosophical opposition to it on a few occasions: https://github.com/gittup/tup/issues/120
They make a really clear distinction between a build system and a build support system. The "clean" command falls into build support, and the Tup team would argue this is better served with techniques like "git clean -xdf"
The "make clean" command is not perfect. Even if someone writes that make target correctly, it may break after you sync new makefile changes into a dirty project dir. What do you do in that case?
I understand it's not going to happen - I linked one such conversation in my first comment. I think the reasoning is silly (His definitions might be "right", but he's the only one using those definitions), but I'm not arguing with them about it.
And I agree that `make clean` isn't perfect; I think that makes the fact that `tup clean` doesn't exist even more annoying. A lot of people recommend including an auxiliary Makefile or script to do the cleaning, but it can never be as good as if `tup` just did it itself, since it is aware of exactly what files it has created during the build process.
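The usual workaround looks something like this (a hypothetical `clean.sh` kept beside the Tupfiles; all names are made up, and the hand-maintained file list is exactly the problem):

```shell
#!/bin/sh
# Hypothetical clean.sh kept beside the Tupfiles. The file list is
# maintained by hand even though tup's own database already knows
# every output it created during the build.
rm -rf build/*.o build/myprog
```

Every time a rule's outputs change, this list has to be updated separately, and nothing warns you when it drifts out of sync with the Tupfiles.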
so silly this isn't included because of philosophical objections.
tup is in some ways brilliant, and i give it kudos for actually improving on make in many ways (unlike 99% of the make-replacement-hopefuls), but man oh man do i value and prefer software devs with a more user-focused philosophy.
(an excellent example of that would be homebrew, which arguably wasn't anything special from a pure-tech perspective, but had amazingly good product sense/user focus. imagine combining that attitude & ux skills w/ the technical brilliance of tup...)
For what use case do you need to perform a 'clean'?
Every time I have needed to clean a Makefile based project it has been due to imperfections of the build system, i.e. needing to do a full rebuild. In theory (I've never used tup in anger) you should never need to do that with tup because the dependency gathering is so robust.
maybe you want to time a rebuild, maybe somehow something outside of tup's knowledge changed, maybe you want to tar the directory w/o build artifacts, maybe you simply need to quickly free up some hd space, &c &c