There are a few things that stood out to me in that codebase (rough sketches of points 2-6 follow the list):

1. No unit tests. Integration tests broke weekly whenever the external data source changed.

2. Hand-rolled ORM, resulting in inconsistent separation of concerns. Some controllers would use the ORM classes directly. Some would add layers of indirection. Some would make database calls directly inside the indirection layers.

3. The data model would "compress" dimensions to be clever. Ex: the id field is a concatenation of a user-supplied string + timestamp + some hard-coded string prefix. In addition, there are multiple columns representing similar concepts, like "tenant", "customer", and "team".

4. Several ongoing migrations created necessary but hard-to-understand backwards-compatibility logic. Code breaks in strange ways when you try to add features, because with n in-flight migrations you have to remember there are 2^n different code paths.

5. No async code. Everything was a blocking call to the database, resulting in unnecessarily slow API responses.

6. No indexes in the database to improve query performance.
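
For (2), a minimal sketch of how the three access styles might have coexisted (all names invented):

    # Stand-ins so the sketch runs without a real database:
    def run_sql(query, *params):
        return {"raw": params}

    class OrderRecord:                        # the hand-rolled ORM class
        @staticmethod
        def find(order_id):
            return {"id": order_id}

    class OrderService:                       # an "indirection layer"
        def fetch(self, order_id):
            return OrderRecord.find(order_id)

    def controller_a(order_id):               # ORM class used directly
        return OrderRecord.find(order_id)

    def controller_b(order_id):               # through the extra layer
        return OrderService().fetch(order_id)

    def controller_c(order_id):               # raw SQL from inside a "layer"
        return run_sql("SELECT * FROM orders WHERE id = ?", order_id)

Three ways to read one row means three places to fix every schema change.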
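
For (3), a sketch of the packed-id pattern (prefix and format invented). Every consumer has to know the packing format, and the database can no longer filter on the individual dimensions:

    from datetime import datetime, timezone

    PREFIX = "ord"  # hypothetical hard-coded prefix

    def make_id(user_part: str) -> str:
        ts = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
        return f"{PREFIX}-{user_part}-{ts}"

    def parse_id(packed: str) -> tuple[str, str, str]:
        # Silently wrong if user_part ever contains "-", and a query
        # like "created in May" is now string surgery instead of a
        # simple timestamp comparison.
        prefix, user_part, ts = packed.split("-", 2)
        return prefix, user_part, ts

    print(parse_id(make_id("acme")))  # ('ord', 'acme', '20240...')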
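
For (4), the 2^n blowup is just combinatorics. With three hypothetical in-flight migrations:

    from itertools import product

    FLAGS = ["new_id_format", "split_customer_table", "async_billing"]

    # Every request can land in any old/new combination, so a change
    # has to be correct in all of them, not just the one combination
    # its author happened to test:
    combos = list(product([False, True], repeat=len(FLAGS)))
    print(len(combos))  # 8 paths for n = 3; 64 for n = 6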
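
For (5), the cost of blocking I/O is easy to simulate with a fake 50 ms query (a real fix would use an async driver such as asyncpg):

    import asyncio
    import time

    def blocking_query(i):
        time.sleep(0.05)        # stands in for a 50 ms DB round trip
        return i

    async def async_query(i):
        await asyncio.sleep(0.05)
        return i

    # Ten sequential blocking calls: ~0.5 s.
    start = time.perf_counter()
    [blocking_query(i) for i in range(10)]
    print(f"blocking: {time.perf_counter() - start:.2f}s")

    # The same ten calls overlapped: ~0.05 s.
    async def main():
        start = time.perf_counter()
        await asyncio.gather(*(async_query(i) for i in range(10)))
        print(f"async:    {time.perf_counter() - start:.2f}s")

    asyncio.run(main())

The win is for independent queries and concurrent requests; a single chain of dependent queries stays just as slow.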
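
And for (6), SQLite's planner shows what one index buys (same idea in Postgres via EXPLAIN):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (id INTEGER, tenant TEXT)")

    q = "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE tenant = ?"
    print(con.execute(q, ("acme",)).fetchall())  # ... SCAN orders

    con.execute("CREATE INDEX idx_orders_tenant ON orders (tenant)")
    print(con.execute(q, ("acme",)).fetchall())  # ... SEARCH orders USING INDEX ...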

The managers didn't see this stuff. They just knew features took a while to get out the door, so they responded by asking for more head count. Leadership saw that more backend devs were needed and hired more backend-focused managers to try to manage the scaling and perf issues.



These don't seem like typical mistakes backend developers would make. Perhaps it's rather that they moved on into leadership because they found that to be their more effective role, rather than their output as backend developers? Kind of like admitting that maybe it wasn't meant for them? With these kinds of practices, I could imagine that.


Isn’t it good for tests to fail when things change? Are you saying that the data source isn’t abstracted properly, like with a repository pattern?
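
E.g., a minimal repository seam (all names invented) lets the unit tests run against a fake while only a thin integration test touches the real source, so an upstream change breaks one test instead of the whole suite:

    from typing import Protocol

    class RateRepository(Protocol):
        def latest_rate(self, currency: str) -> float: ...

    class HttpRateRepository:
        def latest_rate(self, currency: str) -> float:
            # the real network call lives here, exercised only by a
            # small integration test
            raise NotImplementedError

    class FakeRateRepository:
        def __init__(self, rates: dict[str, float]):
            self.rates = rates

        def latest_rate(self, currency: str) -> float:
            return self.rates[currency]

    def convert(amount: float, currency: str, repo: RateRepository) -> float:
        return amount * repo.latest_rate(currency)

    # unit test, no network involved:
    assert convert(10.0, "EUR", FakeRateRepository({"EUR": 2.0})) == 20.0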



