Hacker News

These requirements don't come out of nowhere. Normally they come from:

1. CEOs (or whoever) who don't listen when told how much additional complexity building a system with extremely high uptime entails, and demand it anyway.

2. Developers who have learned from past experience that systems going down means getting called in the middle of the night.

3. Industry expectations. Even if you're a small finance company whose clients all work 9-5 and you could go down for hours without any adverse impact, regulators will still want to see your triple-redundant, automatically monitored, high-uptime, geographically distributed, tested fault-tolerant systems. Clients will want to see it. Investors will check for it when they do due diligence.

Look at how developers build things for their own personal projects and you'll see that quite often they're just held together with duct tape, running on a single DO instance. The difference is that if something goes wrong, nobody is going to be breathing down their neck about it and nobody is getting fired.



If the additional complexity is just "use this premade thing" and it only adds a half hour of work here and there, while also giving you an essentially premade, pre-documented workflow that new people will instantly know (whatever your "bloated" tool tells you to do), then it might be a net win anyway.

If the extra complexity is microservices and containers, you might have an issue. But microservices are kind of a UNIX-philosophy derivative; I'm not sure the complexity is really intentionally added (like when someone uses an SPA framework or something), it just kind of shows up by itself when you pile on thousands of separate simple things without realizing the big picture is a nightmare.




