
As others have noted elsewhere, this "solution" has its own problems if you are moving rapidly. And I don't think anyone can claim Facebook hasn't been doing exactly that.

So, yes, if you are able to control growth enough to make this happen, it is attractive. If you can't, this leads to a version of the diamond problem in project dependencies. And that is not fun.
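To make the diamond problem concrete, here is a minimal sketch using Python's real "packaging" library and made-up package names: your app depends on two team libraries, each of which constrains a shared "core" package in incompatible ways, so no single release satisfies both.

    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    # Hypothetical scenario: teamA-lib has moved to the new core API,
    # teamB-lib has not, and your app depends on both.
    teamA_needs_core = SpecifierSet(">=2.0")
    teamB_needs_core = SpecifierSet("<2.0")

    core_releases = [Version(v) for v in ["1.8.0", "1.9.2", "2.0.0", "2.1.3"]]

    # No single core release satisfies both sides of the diamond.
    compatible = [v for v in core_releases
                  if v in teamA_needs_core and v in teamB_needs_core]
    print(compatible)  # [] -- unsatisfiable until one side updates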



Growth is the reason that companies should avoid what Facebook has done. The company I currently work for anticipated the scaling problems that Google later encountered with Perforce (http://www.perforce.com/sites/default/files/still-all-one-se...) and recognized that while Perforce could be scaled further by purchasing increasingly beefy hardware, ultimately it could only be scaled so far reliably.

If you're not growing, then there is no problem. If you have linear growth, maybe you can keep pushing it, but who plans on linear growth?

Google is already on multiple Perforce servers because of scaling, and that is not a situation that is going to improve. If you start using multiple centralized version control servers, you are going to want a build/deployment system that has a concept of packages (and package versions) anyway.

> If you can't, then this leads to a version of the diamond problem in project dependencies. And is not fun.

These sorts of dependency resolution conflicts can and do happen, but far less often than you would think. Enforcing semantic versioning goes a long way (and, along with it, specifying explicit version constraints on dependencies). In practice, the benefits of versioned dependencies (such as avoiding ridiculous workarounds like the one described in this HN comment: https://news.ycombinator.com/item?id=7649374) far outweigh any downsides.
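As a rough sketch of why enforced semantic versioning (with explicit constraints) keeps those conflicts rare, again using the "packaging" library and made-up version numbers: consumers pin to a compatible range, and the resolver simply never selects a breaking major release until the constraint is bumped deliberately.

    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    constraint = SpecifierSet(">=1.4,<2")   # "any compatible 1.x release"
    published = [Version(v) for v in ["1.4.0", "1.5.2", "1.6.0", "2.0.0"]]

    # 2.0.0 is a breaking change by the semver contract, so it is never chosen here.
    chosen = max(v for v in published if v in constraint)
    print(chosen)  # 1.6.0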

You can even create a system that uses versioned packages as dependencies while using a centralized version control system. In fact, this is probably the easiest migration strategy. Build the infrastructure that you will eventually need anyway while you are still managing one massive repository. Then you can 'lazy eval' the migration (pulling more and more packages off the centralized system as the company grows faster and faster, avoiding version control brownouts).


I'm assuming you aren't referring to "succeed" in your first sentence. :)

The amount of hubris in our industry is amusing. Seriously, you are talking about outsmarting two of the most successful companies out there.

I mean, could they do better? Probably. But it is hard to grok the amount of second-guessing any of their decisions gets.


But are they successful because of this, or despite it?


Really good question. One that I am not pretending to know the answer to.

I do feel that the main reason they are successful is sheer manpower. That is, competent (but not necessarily stellar) leadership can accomplish a ton with dedicated, hard workers. This shouldn't be used as evidence that what they are doing is absolutely correct. But it should be considered when what they do is called into question.

If you have/know of any studies into that, I know I would love to read them. I doubt I am alone.


If you are a large company, you can move faster if devs aren't all working on the same repository. If all your code is in one repo and one team makes a breaking change to their part of it, everyone's code fails to build. If the source code is broken up into separate packages, each team can just use versioned releases of the packages they depend on and not have to worry about fixing compatibility issues.
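A toy illustration of that point, with hypothetical team and package names (again via Python's "packaging" library): each team's build resolves against the range it has validated, so a breaking release from another team never lands in anyone's build until they choose to move to it.

    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    # search-lib just shipped a breaking 2.0.0 release.
    published = [Version(v) for v in ["1.6.0", "1.7.3", "2.0.0"]]

    # Each consuming team declares the range it has validated against.
    constraints = {
        "team-a": SpecifierSet(">=1.6,<2"),   # has not migrated yet
        "team-b": SpecifierSet(">=2.0,<3"),   # the team that made the change
    }

    def resolve(team):
        # Each build picks the newest release its own constraint allows.
        return max(v for v in published if v in constraints[team])

    print(resolve("team-a"))  # 1.7.3 -- untouched by the breaking release
    print(resolve("team-b"))  # 2.0.0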


While your argument has strong appeal, Facebook stands as a resounding counterexample. As does Google.

The counterargument appears to be this: if one team checks in a change that breaks another team's code, then the focus should be on getting that fixed as soon as possible.

Now, if you are in multiple repositories, it is easy to shift that responsibility onto the team whose repository is now broken. The breakage then gets triaged and prioritized like any other task, so getting a fix in may take time.

Contrast that with the simple rule of "you can't break the build" in a single repository, where the onus is on whoever is checking in the change to make sure all use sites still work.

Granted, an "easy" solution to this is to greatly restrict any changes that could break use site code. The Kernel is a good example of this policy. Overall, I think that is a very good policy to follow.



