
In terms of engineering tradeoffs, this reminds me of a recent talk by Alan Kay where he says that to build the software of the future, you have to pay extra to get the hardware of the future today. [1] Joel Spolsky called it "throwing money at the problem" when, five years ago, he got SSDs for everybody at Fog Creek just to deal with a slow build. [2]

I don't use Facebook, and I'm not suggesting that they're building the software of the future. But surely someone there is smart enough to know that, for this decision, time is on their side.

[1] https://news.ycombinator.com/item?id=7538063

[2] http://www.joelonsoftware.com/items/2009/03/27.html



You should also watch his talk where he questions whether it is really necessary to build large software products from codebases so large that, if printed out, they would stack as high as a skyscraper, and then talks about what he is doing to demonstrate that it is not :)

https://news.ycombinator.com/item?id=7538073


Facebook tends to throw engineer time at the problem, though. At one Facebook DevCon I attended, they presented how they wrote their own build system from scratch because Ant was too slow for them.


They built their own build system because once you are dealing with top engineers, NIH sets in quickly and you write your own everything.


They have their own in-house version of just about every dev tool. http://phabricator.org/


FYI, Phabricator originated at Facebook, but the guys who wrote it left and founded Phacility, which supports it full time now. Facebook did the world a service by open-sourcing Phabricator.

Review Board and Gerrit are both awful in comparison.


I've been using Phabricator for a few months now, and I used Gerrit for over 2 years at my last job.

They each have their strengths, but both of them are infinitely preferable to not doing code review. Neither is awful.


This might be right in spirit, but Phabricator is maintained by Phacility (some ex-FB people) and used by at least 6 other companies outside of FB.


That's one way to keep your talent from being poached :)


To be fair, once you are over 1,000 engineers, a 1% improvement in productivity is worth a lot of development time.
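Back-of-the-envelope, with entirely made-up numbers just to show the scale of that argument:

```python
# All numbers here are hypothetical, purely to illustrate the scale.
engineers = 1000
productivity_gain = 0.01      # a 1% org-wide improvement
tooling_team = 5              # engineers building the tool full time

# A 1% gain across 1000 engineers equals ~10 engineers' worth of output,
# so a 5-person tooling team pays for itself twice over if it delivers it.
extra_output = engineers * productivity_gain
print(extra_output, extra_output / tooling_team)   # 10.0 2.0
```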


Somehow Netflix never got that memo and they seem to be doing just fine.


If you are at the point where you are contemplating building your own dev-tool-everythings... then you've gone very wrong. Let the dev-tool guys make dev tools; they are much better at it (it's all they do). Instead, figure out how to make them work in your business environment. Facebook may have one of the largest sites on the web, and perhaps the most users... but their codebase itself is nothing special and does not warrant "special" tools. That's just BS and a measuring contest.


Having used Ant as a build system for Android projects, I don't blame them.

In my admittedly limited experience (Windows 7 x64, Ant, Android SDK), Ant is terribly slow at building projects with multiple source library dependencies, and throwing hardware at the problem doesn't speed it up that much.


I don't see how this is an Ant-specific issue. Ant is just calling into javac with a classpath parameter. The actual execution time spent in Ant should be minimal.


With Android, Ant makes naive assumptions.

For example, most open source library projects that you include in a project don't change from build to build, but Ant dutifully recompiles them on every build instead of caching the output until the files in that project change or I manually clean the build output.
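What the parent is describing is the basic incremental check a make-style tool performs: skip a target whose output is newer than all of its inputs. A minimal sketch of that check (file names are made up):

```python
import os

def up_to_date(output, inputs):
    """True if `output` exists and is at least as new as every input file."""
    if not os.path.exists(output):
        return False
    out_mtime = os.path.getmtime(output)
    return all(os.path.getmtime(src) <= out_mtime for src in inputs)

# Hypothetical library project: only rebuild the jar when its sources change.
sources = ["libs/support/src/Foo.java", "libs/support/src/Bar.java"]
if up_to_date("build/support.jar", sources):
    print("support library unchanged, skipping")
else:
    print("rebuilding support library")   # invoke the real compiler here
```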


That surely means Ant is make, only worse.


"Only worse" is probably not fair. Though, is it really surprising that a newer make file degenerates into the same problems as old ones?

Which then leads down the path of a set of scripts/utilities on top of said system to standardize on a set of targets and deal with known issues. And suddenly we have reinvented another old tool: autotools. We'll probably find ourselves in its troubles soon enough.


There was a link submitted here (I can't find it now) a few weeks ago that talked about exactly that. Most build systems are just reimplementations of make, which makes them worse, because make has been battle-tested for ages.


Amusingly, that story is what convinced me to finally read up on autotools. It has been interesting to see just how much they already covered.

In fact, the only thing the autotools don't do that every current tool does is download dependencies. Which is not surprising, since fast network connections are a relatively new thing.

And I'm actually not sure I care for the dependency-downloading thing. It is nice at first, but it quickly leads to people pushing yet another thing to download instead of making do with what you have. That is plain embarrassing when the downloaded part offers practically nothing on top of what you already had.


I'm confused. That doesn't sound like Make's behavior at all.

Make, when used properly, is still a pretty smart tool.


That's the worst part of Ant.


Doesn't Ant use prebuilt jars for libraries? If not, then maybe you should think about switching to Maven.


At my company, I wrote our own build system, because "make", "waf", "shake" and various others do not give any useful guarantees, and we've debugged cryptic, under-specified dependencies way too many times. "make clean" on a large repo and the lack of automatic work sharing hurt too.

Also, auto-detecting inputs rather than being forced to specify them is nice (sketched below), especially as virtually all input specs in Makefiles are wrong or incomplete.

Writing a build system is not such a big deal -- and outdoing the existing work is not very hard.
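A rough sketch of what input auto-detection can look like, assuming a Linux box with strace installed; the parent uses file system hooks, which is a different (and lower-overhead) mechanism, so treat this only as an illustration of the idea:

```python
import os
import re
import subprocess
import tempfile

def detect_inputs(cmd):
    """Run a build command under strace and return the files it opened.

    Linux-only illustration: a real build system would use file system
    hooks, ptrace, or FUSE rather than parsing strace's text output.
    """
    fd, trace_path = tempfile.mkstemp(suffix=".trace")
    os.close(fd)
    try:
        subprocess.run(
            ["strace", "-f", "-e", "trace=open,openat", "-o", trace_path] + cmd,
            check=True,
        )
        opened = set()
        with open(trace_path) as trace:
            for line in trace:
                m = re.search(r'open(?:at)?\(.*?"([^"]+)"', line)
                # Failed opens are skipped here, although a stricter tool
                # would record them too: "this file did not exist" is also
                # a dependency of the result.
                if m and "= -1" not in line:
                    opened.add(m.group(1))
        return opened
    finally:
        os.unlink(trace_path)

# Hypothetical usage:
# print(detect_inputs(["gcc", "-c", "main.c", "-o", "main.o"]))
```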


Perhaps that's true for your very specific use case, but the same is likely to be true for other people using your build system. Autodetection is great when it works and horrid when it fails.


I use file system hooks to autodetect dependencies, so it should always work as long as the file system is the input to the build and not other channels.

Explicit input specifications are virtually never correct. #include scanners, for example, are generally wrong because they do not express the dependency on the non-existence of the headers in earlier include paths.
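Concretely: if a file is compiled with include paths -Ivendor -Isrc and the header is found in src, the build also depends on vendor not containing that header; creating it later changes what the compiler would pick up. A small sketch of a resolver that records both kinds of dependency (paths are hypothetical):

```python
import os

def resolve_include(name, search_paths):
    """Resolve an #include against an ordered list of include directories.

    Returns the resolved path and the candidate paths whose *absence* the
    result depends on: if any of those files appear later, the previous
    resolution is stale and the target must be rebuilt.
    """
    must_stay_absent = []
    for directory in search_paths:
        candidate = os.path.join(directory, name)
        if os.path.exists(candidate):
            return candidate, must_stay_absent
        must_stay_absent.append(candidate)
    raise FileNotFoundError(f"{name} not found in {search_paths}")

# Hypothetical example, equivalent to -Ivendor -Isrc:
# resolved, absent = resolve_include("config.h", ["vendor", "src"])
# A naive #include scanner records only `resolved` and misses the fact
# that creating vendor/config.h would change the build's meaning.
```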


What guarantees did you find lacking in shake?


When you're scaling any given variable, the stock solution is almost never good enough. You end up hitting all kinds of limits.


That quote makes me nostalgic for Silicon Graphics workstations.



