In terms of engineering tradeoffs, this reminds me of a recent talk by Alan Kay where he says that to build the software of the future, you have to pay extra to get the hardware of the future today. [1] Joel Spolsky called it "throwing money at the problem" when, five years ago, he got SSDs for everybody at Fog Creek just to deal with a slow build. [2]
I don't use Facebook, and I'm not suggesting that they're building the software of the future. But surely someone there is smart enough to know that, for this decision, time is on their side.
You should also watch his talk where he questions whether it is really necessary to build large software products from codebases so large that, if printed out, they would stack as high as a skyscraper, and then talks about what he is doing to demonstrate that it is not :)
Facebook tends to throw engineer time at the problem, though. At one Facebook DevCon I went to, they presented how they had written their own build system from scratch because Ant was too slow for them.
FYI, Phabricator originated at Facebook, but the guys who wrote it left and founded Phacility, which supports it full time now. Facebook did the world a service by open sourcing Phabricator.
Review Board and Gerrit are both awful in comparison.
If you are at the point where you are contemplating building your own everything in dev tools... then you've gone very wrong. Let the dev-tool guys make dev tools; they are much better at it (it's all they do). Instead, figure out how to make their tools work in your business environment. Facebook may have one of the largest sites on the web, and perhaps the most users... but their codebase itself is nothing special and does not warrant "special" tools. That's just BS and a measuring contest.
Having used ant as a build system for Android projects, I don't blame them.
In my admittedly limited experience (Windows 7 x64, Ant, Android SDK), Ant is terribly slow at building projects with multiple source library dependencies, and throwing hardware at the problem doesn't speed it up that much.
I don't see how this is an Ant-specific issue. Ant is just calling into javac with a classpath parameter. The actual execution time spent in Ant should be minimal.
For example, most open source library projects that you include don't change from build to build, but Ant dutifully recompiles them every time instead of caching the output until the files in that library change or I manually clean the build output.
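Roughly the kind of check I'd expect a build tool to do here (just a sketch in Python with made-up paths, not how Ant actually behaves internally): reuse the library's output if it is newer than every source file under it.

    # Sketch: reuse a library's compiled output if it is newer than all of its
    # sources. Paths and the .jar name are made up; a real tool would also have
    # to track the classpath, compiler flags, resources, etc.
    import os

    def up_to_date(output_path, source_dir):
        if not os.path.exists(output_path):
            return False
        out_mtime = os.path.getmtime(output_path)
        for root, _dirs, files in os.walk(source_dir):
            for name in files:
                if name.endswith(".java"):
                    if os.path.getmtime(os.path.join(root, name)) > out_mtime:
                        return False
        return True

    if up_to_date("libs/somelib/bin/somelib.jar", "libs/somelib/src"):
        print("library unchanged, reuse cached output")
    else:
        print("sources changed, recompile")

Timestamps are a crude proxy (content hashes are safer), but even this would avoid the pointless recompiles.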
"Only worse" is probably not fair. Though, is it really surprising that a newer make file degenerates into the same problems as old ones?
Which will then lead down the path of building a set of scripts/utilities on top of said system to standardize on a set of targets and deal with known issues. And suddenly we have reinvented another old tool: autotools. We'll probably find ourselves in their troubles soon enough.
There was a link submitted here (I can't find it now) a few weeks ago that talked exactly about that. Most build systems are just reimplementations of make, which makes them worse, because make has been battle-tested for ages.
Amusingly, that story is what convinced me to finally read up on autotools. It has been interesting to see just how much they already covered.
In fact, the only thing the autotools don't do that every current tool does is download dependencies. Which is not surprising, since fast network connections are a relatively new thing.
And I'm actually not sure I care for the dependency-downloading thing. It is nice at first, but it quickly leads to people pushing yet another thing to download instead of making do with what you have. This is plain embarrassing when the downloaded part offers practically nothing on top of what you already had.
At my company, I wrote our own build system, because "make", "waf", "shake", and various others do not give any useful guarantee (there's a sketch of what I mean below), and we've debugged cryptic, under-specified dependencies way too many times. "make clean" on a large repo and the lack of automatic work sharing hurt too.
Also, auto-detecting inputs rather than being forced to specify them is nice, especially as virtually all input specs in Makefiles are wrong or incomplete.
Writing a build system is not such a big deal -- and outdoing the existing work is not very hard.
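To be concrete about the guarantee, here is a minimal Python sketch (not our actual system; the manifest path and function names are illustrative): record a hash of every input when a step runs, and skip the step only when every recorded hash is unchanged.

    # Sketch: a build step is skipped only if every recorded input hash matches.
    # The manifest path and helper names are illustrative, not a real tool's API.
    import hashlib, json, os, subprocess

    def file_hash(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def run_step(name, cmd, inputs, manifest=".build-manifest.json"):
        state = {}
        if os.path.exists(manifest):
            with open(manifest) as f:
                state = json.load(f)
        current = {p: file_hash(p) for p in inputs}
        if state.get(name) == current:
            return  # unchanged inputs: safe to skip (assuming outputs still exist)
        subprocess.check_call(cmd)
        state[name] = current
        with open(manifest, "w") as f:
            json.dump(state, f)

    run_step("compile", ["gcc", "-c", "main.c", "-o", "main.o"], ["main.c", "util.h"])

Of course the guarantee is only as good as the input list, which is exactly why we autodetect it instead of writing it by hand.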
Perhaps that's true for your very specific use case, but the same is unlikely to be true for other people using your build system. Autodetection is great when it works and horrid when it fails.
I use file system hooks to autodetect dependencies, so it should always work as long as the file system is the only input to the build and nothing comes in through other channels.
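The idea, roughly (a Linux-only sketch using strace as a stand-in for the hooks we actually use): run the command under a tracer and treat every file it successfully opens for reading as an input of that step.

    # Sketch: discover a command's inputs by tracing the files it opens.
    # strace stands in here for real file system hooks; Linux only.
    import re, subprocess, tempfile

    def traced_inputs(cmd):
        with tempfile.NamedTemporaryFile(suffix=".trace") as log:
            subprocess.check_call(
                ["strace", "-f", "-e", "trace=open,openat", "-o", log.name] + cmd)
            opened = set()
            for line in open(log.name):
                m = re.search(r'"([^"]+)"', line)
                # keep successful, non-write-only opens; failed opens ("= -1")
                # matter too, as the negative dependencies discussed below
                if m and "= -1" not in line and "O_WRONLY" not in line:
                    opened.add(m.group(1))
            return opened

    print(traced_inputs(["gcc", "-c", "main.c", "-o", "main.o"]))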
Explicit input specifications are virtually never correct. #include scanners, for example, are generally wrong because they do not express the dependency on the non-existence of the header in earlier include paths.
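To make the negative-dependency point concrete, here is a toy sketch (Python, made-up directories) of what a correct scanner would have to record: not just the header it found, but every earlier include directory where the header did not exist.

    # Sketch: resolving an #include records the header that was found *and* the
    # places it was not found, since a file appearing there must trigger a rebuild.
    import os

    def resolve_include(header, include_dirs):
        found = None
        must_stay_absent = []   # the negative dependencies
        for d in include_dirs:
            candidate = os.path.join(d, header)
            if os.path.exists(candidate):
                found = candidate
                break
            must_stay_absent.append(candidate)
        return found, must_stay_absent

    found, negatives = resolve_include("config.h", ["./local", "/usr/include"])
    print("positive dependency:", found)
    print("negative dependencies:", negatives)

If someone later drops a config.h into ./local, it is the negative dependency that forces the rebuild; a plain list of found headers cannot express that.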
[1] https://news.ycombinator.com/item?id=7538063
[2] http://www.joelonsoftware.com/items/2009/03/27.html