How terrible code gets written by sane people (chrismm.com)
197 points by SarasaNews on Feb 19, 2017 | 144 comments


I like the article but I found the implied solution a little humorous.

Just hire devs who are great coders, great communicators, stubborn enough to push back against upper management, good-hearted enough to sacrifice their own KPIs to focus on the success of their project, and masochistic enough to stick it out at an obviously poorly run project.

Good luck

Side note: for 15 years of experience, the project doesn't seem that bad by a long shot. I only have ~7 years of experience and have seen far worse code. I was recently on a project that had methods taking 8 callbacks (the project was greenfielded in Jan 2016). Before that I was on a project where the development team copied and pasted the source of the last web page every time they needed a new one, without deleting the initialization code that fetched the data from the database. So the 18th screen they worked on had 26,000 lines of code and took 5 minutes to load. And every button click triggered a reload. So, add a new widget and boom, wait 5 minutes for a screen refresh.


> Just hire devs who are great coders, great communicators, stubborn enough to push back against upper management, good-hearted enough to sacrifice their own KPIs to focus on the success of their project, and masochistic enough to stick it out at an obviously poorly run project.

This is a pretty close description of the positions I find myself in as a dev and I don't think this combination of qualities makes me more hirable.


100% the same, and I agree. Getting fired, and worrying that my next decision will get me fired, has caused me all kinds of anxiety issues.

I recently decided that, given my personality, I'm not well suited to working as an employee for an employer and have gone solo. I don't want to feel like a victim of bad management, or that my future and personal worth are tied to a single company, as good or bad as it may be. I want my efforts properly remunerated; I want my reputation to speak for itself.

So while these traits may make you less hirable as an office worker, they may make you more hirable as an expensive contractor.


I feel the same way. I jumped ship once but didn't have enough savings to ride it out. Getting independent work without a track record was extremely difficult.

Looking back, I think most people going solo start as contractors and also diligently save at least half a year of living expenses beforehand.

I've begun saving but it's slow going and I'm getting impatient. I greatly enjoy the feeling of making my own destiny and having employees to care for; the idea doesn't scare me as much as it seems to bother most people.

In a position of leadership it's possible to build your own little utopia. It was a great feeling to build a bubble in a world that's usually shit, where a lucky few actually enjoyed their work and each other's company. Alas that opportunity has passed and it's nigh impossible to go from the job title developer to manager without climbing a ladder for years and brown-nosing.

Anyways, how do you recommend getting started? Like most career devs, I'm locked down by my company's contracts, so the usual advice of starting a side business isn't possible.


^^^ I reached exactly the same epiphany, if you can call it that, about 3 years ago now. It might change someday but I can't be an employee any more. I just can't. I can't tolerate the idea of having so much of my time sucked up on something that offers so relatively little in the way of return. And I cannot cede my future to management any more; I have to own responsibility for it because, honestly, there's only so much future left for me (or for any of us).


Also makes you a target for layoff rounds. The yes men are always safe. You should have your finances sorted out if you plan to stick to your principles.


It probably won't, because most of that isn't predictable (by the interviewer) at interview time. It could make you more valuable in your current job, though.


invaluable, but they won't kick me more money. I asked.


Find another employer. Then negotiate from strength.


Then, not invaluable. :-/


They fired my whole team except for me. They would rather have fired me too, I'm sure.

I get what you're saying though.


I wrote two emails yesterday which basically "push back against upper management", for a very good reason. At least that's what I think. Now I'm totally terrified about what will happen tomorrow. I have 4 kids and almost no money in the bank.


Don't 'push back'. That is the wrong terminology, and it's inherently confrontational.

Explain to your managers the range of outcomes and give them the power to make the decision with the best information you can provide them.

i.e. 'if we skip this bit, then quality will likely suffer, with these kinds of expected outcomes'.

Don't be dramatic or emotional, just try to give the best information you can.

Also have sympathy for them - money does not grow on trees - and basically balancing quality vs. time-to-market is a constant and difficult challenge.

This way - if they decide to 'rush' it - they know what the likely outcomes will be and accept the inherent trade-offs.

If they are making the decision, then the responsibility falls on them, to the extent that you have provided them with a thoughtful and realistic assessment.


Pretty much this. Sometimes pushing back is needed, but far more often "negotiation" is needed. It is unreasonable to expect managers to magically see into the development details; you have to explain, make plans transparent, etc. When you do that you often (not always) find that requirements are negotiable and not equally important, e.g. it is possible to meet the deadline without sacrificing code quality. Often a compromise is possible - you won't get two straight weeks of refactoring, but they are OK with using 20% of development time for cleanups.

I have seen developers "push back for the right thing" in a way that basically amounted to an angry emotional outburst over things the manager did not understand. The dude thought he was pushing for the right thing, but everyone else thought he didn't listen to their needs and refused to follow the company's vision of the product, replacing it with his own. (They wanted the simplest possible functionality, fast; he was constantly adding his own requirements to "make it better". When the same person then pushes for yet another refactoring, management does not trust him.)

The other thing to understand is that some experienced lead remembers teams that were given time, no deadlines and all the good stuff, and then produced a mess anyway, procrastinated and took a long time to do it. I have seen that happen, and I have also seen it end in long wars over petty differences in style and opinion. Sometimes the mess is the result of people doing something new and thus making bad decisions along the way. Code review alone won't solve that, because the reviewer may be the one forcing a mistake on others.

You need to communicate in a way that reassures the manager that he or she is not in the above situation.


There should be no need for 'wars' or even 'negotiation'.

It's the manager's decision - not the developer's, really.

The devs can lay out what can be done, and describe what the results will be if various paths are chosen.

'Skip the tests' - you get quality issues, but better schedule.

'Write perfect code' - you get quality, but it could take too long.

The old saying: 'fast' 'quality' or 'cheap' -> pick 2.

If your team is having problems because of bike-shedding over details, or messed up code - this is altogether another issue and needs to be addressed differently.


I think you're laying out what is absolutely the best approach.

If you're an employee it's sometimes hard to separate the company's interests from your own, so I sometimes do a mental exercise I call "playing consultant" - where it's purely my job to "consult".

I try to honestly describe the options, give my best recommendation and then whatever they choose is on them.


As long as you're tactful and willing to cede your argument if they don't come to see your side, there shouldn't be any reason to be terrified. If you act diplomatically and still have crazy bosses, might be good to find a different job if you can, or just not care and get your paycheck.


Unfortunately it is not that simple. Sometimes pushing back on the management is an indication of misalignment of priorities. It may not happen right away, but it can definitely limit your tenure.

I have been on both sides of this.


Surely not if you raise a point one time to get a sense of the waters. If the management doesn't like to hear opinions from subordinates, you could pick up that vibe and stop pushing back. It might require greater social and political tact than should be necessary, but no more than needed in normal life situations.

Granted, social IQ is on a scale just like analytical IQ. It would be nice for engineers if we could be blunt and to the point in our communications and let rationality win, but people aren't like that.

I'm just arguing the other end of this, because I've sometimes seen engineers be abrasive while "technically correct", and then have it cause problems for them. Being pleasant in interactions is an important skill, and I feel like sometimes it gets disregarded.

Again, sometimes even with tact and diplomacy, your bosses can still be unreasonable, and that sucks.


In some companies this is respected more than just obeying. I would potentially promote you, and I know big-corp execs who would too, but it could be that this is more common where I am.


The irony is that some of those qualities are probably more likely to get you fired rather than valued.


"It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"

- Upton Sinclair

I've experienced this firsthand. Pushing for better development tooling and fixing problems instead of pushing features gets you a pink slip. Part of the reason I run my own shop now.

It turns out the people most involved in estimating don't like it when you say that most estimates don't deliver any value.


And this is in a climate where developers can often pick and choose their jobs. Imagine if code was more commodified?


"stubborn enough to push back against upper management"

Every conversation I've ever had with management about rewriting/refactoring has been fruitless. Now if I think it's necessary I just bake it into feature/bugfixing. The work takes ~20% longer which just means that features and bug-fixes take 20% longer to deliver.

That doesn't really get pushback, whereas asking for an extra week to do some cleanup work certainly does.


It feels like people are focusing on bad conditions and unreasonable deadlines, and not the idea that in the real world smart people under excellent conditions do this very, very frequently.

I've worked for long periods of time in 3 codebases that were over 2 million LOC in my career; all had great conditions, all had very smart people. Those are not huge codebases by any standard, and yet all 3 had people talking about ground-up rewrites. All 3 had people complaining about every single example the author used. Two of them undertook the ground-up rewrite, estimating one year. Both admitted to having made a huge mistake 3-5 years later.

My theory is that all software eventually becomes difficult to maintain and full of warts, regardless of smartness, regardless of conditions. We all have code ideals we believe and talk about that don't work as well in the real world as we imagined. We all have deadlines that are shorter than we want -- it's a universal constant. We all imagine we can write code and fix big problems faster than we really can. And we all fail to fully understand what's working right in a messy large codebase and tend to focus on what's wrong.

I don't know how to fix this, but it's very possible the author ran into thoughtful paradigms he just hadn't seen before and didn't fully understand. Some styles seem messy if you haven't seen them before.

React is an intentional lack of separation of concerns between code and markup. Some projects are required to mix tabs and spaces depending on tools. For that matter, bash scripts alone require mixing tabs and spaces for some features. And so on, it's important to have context for why things are the way they are before jumping to the conclusion that it's bad.


> My theory is that all software eventually becomes difficult to maintain and full of warts, regardless of smartness, regardless of conditions.

I agree. I think there's a lot of emphasis on preventing bad code from existing. But if you believe some bad code is inevitable, then it's more important to make it easy to fix when it happens.

By default, good code tends to be modular and easy to replace, while bad code is excessively coupled and hard to get rid of. So bad code has a disproportionate impact on long-lived codebases.

I wonder if one way around this is to force modularity, even when it's unnatural. Functional programming seems to be one way of doing this, microservices are another. A related principle would be not to take DRY too seriously, favoring decoupling over deduplication.
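A tiny hypothetical Python illustration of that last point, favoring decoupling over deduplication (names invented for the example):

# These two formatters are similar enough that DRY says "merge them",
# but keeping them separate means invoice changes can never break receipts,
# and either one can be deleted on its own later.

def format_invoice_line(item):
    return f"{item['name']:<30} {item['qty']:>3} x {item['unit_price']:>8.2f}"

def format_receipt_line(item):
    return f"{item['name']:<30} {item['qty'] * item['unit_price']:>8.2f}"

# A shared helper would couple both call sites to one signature -- exactly the
# kind of coupling that makes bad code hard to remove once it goes bad.

print(format_invoice_line({"name": "widget", "qty": 2, "unit_price": 4.5}))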


> I agree. I think there's a lot of emphasis on preventing bad code from existing. But if you believe some bad code is inevitable, then it's more important to make it easy to fix when it happens.

Indeed, what matters isn't writing bad code, but doing so in a way that affects the whole codebase (whether through coupling or by volume). Accepting that bad code happens helps a lot in locking it down behind containment measures, both in space and time. I can liberally accept bad code as long as it doesn't imply bad architecture, so that it's easily removable and understandable, but I'm hell-bent on refusing any form that means we can't dig ourselves out of the hole we've dug ourselves into (this includes slowly drifting into a hellhole by accretion; bad code must come with an expiration date, which means immediately creating an issue in your favourite bug tracker upon commit/merge). Decoupling is key, and duplication is often needed to see the true generalisation patterns emerge, not the ones you expect to be there sometime in the future[0].

[0]: noop-noop-noop maneuver (video+slides) http://www.thedotpost.com/2016/10/katrina-owen-the-scandalou...


This is a great talk. I find that this pattern comes up over and over again: first add new code in a safe way that makes the codebase a bit messier without modifying anything, then move all functionality to the new code and neuter the old stuff, then delete the old stuff. Eg this style of pattern is how to do schema migrations safely.
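A rough Python sketch of that three-phase shape (hypothetical names, just to make the steps concrete; schema migrations follow the same expand/migrate/contract rhythm with columns instead of functions):

USE_NEW_PRICING = False  # phase 1: the new path exists but is off by default

def _price_legacy(order):
    # the old, messy-but-working code: ignores quantities
    return sum(item["price"] for item in order["items"])

def _price_v2(order):
    # phase 1: added alongside the old code, modifies nothing that works today
    return sum(item["price"] * item.get("qty", 1) for item in order["items"])

def price_for(order):
    # callers never change; only the routing does
    return _price_v2(order) if USE_NEW_PRICING else _price_legacy(order)

# Phase 2: flip USE_NEW_PRICING, move all traffic to the new code, and keep the
# old path only as a neutered fallback while confidence builds.
# Phase 3: delete _price_legacy and the flag once nothing can reach them.

print(price_for({"items": [{"price": 10.0, "qty": 2}, {"price": 3.0}]}))  # 13.0 now, 23.0 after phase 2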


> I wonder if one way around this is to force modularity, even when it's unnatural.

There's probably no denying this is a good idea. :) The most successful very large systems seem to be made of independent modules that each strive to do one thing well. They can also be more easily removed, refactored, or replaced, due to their small size, and perhaps most important, you can do them one at a time.

> I think there's a lot of emphasis on preventing bad code from existing. But if you believe some bad code is inevitable, then it's more important to make it easy to fix when it happens.

Yeah, it's true, emphasis at any given moment is on what list of things we want to see, and what list of things we don't want. The author even started with one. Problem is, code styles also change over time. What some people call reasonable or good code, other people call bad; "bad" is subjective. Hire a brilliant engineer to help make your decent application awesome, and her first reaction may be "oooh, this code is pretty bad. let's make some changes". I honestly wouldn't be able to count how many times I've seen the newest member of a project complain about the mess while nobody else seems to mind that much. (And I count myself in that list, I've done it too.)


> My theory is that all software eventually becomes difficult to maintain and full of warts, regardless of smartness, regardless of conditions.

I've come to a similar (intermediate) conclusion for now. Would certainly fit the general pattern of "all things" in this physical universe.

I think there's also a lot of subtle human psychology at play at all times in this subject. Identifying and ruminating over such "issues" implicitly elevates oneself (to peers, the community, the boss, the client, the family, to oneself, etc.), materializes potential future employment ("this needs a total overhaul --- by me/us, according to this latest/proven/bla paradigm/methodology/platform --- or the project is doomed, I say, doomed"), and satisfies many an engineer's/programmer's inner yearning for learning to do better, inching a bit closer to perfection, lifting the whole field, etc. Our supreme tendency to fool ourselves daily (about ourselves and the power of our mental and auxiliary toolkit) probably keeps us going so eagerly in this (frankly, for mammals, somewhat weird) "activity"; it all comes together suspiciously neatly when one looks at others' codebases / current conditions with that "damning professional's glance"! =)


> My theory is that all software eventually becomes difficult to maintain and full of warts, regardless of smartness, regardless of conditions.

If a code base doesn't change too much in size or original intent, then architecture and design (if they were good in the first place and continue to be followed) will probably keep it fairly maintainable.

In a lot of cases, though, codebases slowly grow until they reach a size that requires a different architecture or approach to organising the code, especially if the number of collaborators increases too. It's quite a hard thing to spot and then address while the codebase is still quite active.

Rewrites are tempting to be able to apply that architectural change but often you can be quite bound by the implementation specific behaviour of the original system.

It might be interesting to look at how the Linux kernel has changed internally as it moved from a single-person project to what it has become today.


> often you can be quite bound by the implementation specific behaviour of the original system

Yes, this. Requirements accumulate over time. That is what makes it harder to refactor production code to be cleaner. When you're not bound to your requirements, they can change, but once your requirements become set in stone, you lose the freedom to change them.

Choosing to rewrite already released code is likely to introduce regressions, is more difficult than with unreleased code because you are not allowed to change requirements, and redoes work that was already done once. If management is paying attention, they will (and should) complain about paying for the engineering again. After all this, there are no guarantees it won't just happen again. Things don't tend to stay magically clean after rewriting. That's assuming the rewrite even finishes cleanly. What often happens is the engineers underestimate the time to rewrite because they didn't understand how well and how many things were working, and it gets cut short by management a third of the way through when the rewrite is obviously over budget. Now the codebase is messier than when it started, even though the engineers had the best of intentions and management gave them large swaths of time to try and fix things.


I've worked on teams that kept the code base clean for years.

We did it by focusing hard on just that. Ruthless refactoring, zero bug tolerance, and no deadlines are maybe the biggest factors.


Deadlines are poisson for code quality.

I'm pretty sure that much of the code I write today will still be in use ten years from now. There's no point worrying about arbitrary deadlines; in the long run it doesn't matter if the feature was done on time or a month late. What does matter is, for example, whether all possible error conditions are handled by my code.

Fortunately I'm in a position where I can decide not to have deadlines.


Lovely typo - I am sure such code might show a "poisson" distribution :)


Sounds fishy - I don't buy it.


This is intriguing, and I'd love to hear more context about the team, code, management, and business/product. I really want it to be true and possible in general...

I'm going to admit what conclusions I jumped to reading this. I'm admitting my bias, and that I might be wrong, not arguing with you or challenging your experience.

My (biased possibly wrong) instinct is to wonder to what degree that was a real world situation. What I mean is that the only times I've had no deadlines and the ability to ruthlessly refactor are the times I've had no managers and no customers. I can see that being easy to pull off in school projects, and on research teams, and in free open source projects, but I have a hard time seeing (and haven't personally witnessed) how that works long term in a healthy functioning business that has revenue where the code is directly related to the product. There are stories of places where this stuff happens, but often you can find people who were there saying the stories are exaggerated or lacking the proper context.

So anyway -- any context you want to share that might make it easier to be ruthless and keep code clean under less ideal conditions than yours? Thoughts about how to do it when there are deadlines, since most people (I speculate) do have deadlines?


These projects were explicitly set up to be XP/Agile projects, and were staffed with people with that experience and expertise.

The way the experts say you can turn your environment around to that, is by explaining and showing the advantages of a deadline free process.

Your management will - not unreasonably - suspect you're just trying to work less hard. The way out of that is to build trust by constantly delivering quality software.

If this sounds hard, it probably is. I've certainly never done it. The easier way is to join a team that already works this way :)

It's also worth noting that constant refactoring towards a good design is a difficult skill that not everyone has! It takes effort and experience to get good at it. For a group of random programmers, it may not be the best way to work.


Thanks! Yes, it's hard to do well even under ideal conditions. That fact is what makes me think that usually my biggest obstacle is probably me, even when I'm certain my deadlines are holding me back, and what in turn causes me to project that onto others who complain about deadlines and management.


I agree with you.

I think the best we can do as software engineers working on legacy code bases is stop making assumptions about authors of the code and do not think that we are smarter and could do better.

Legacy code is not just text. It's a big collection of decisions made for a reason. What we lack is the history of those changes. So if we see something we call an "anti-pattern" or "code smell", we need to think twice: is it really the case, or are we missing something?

Inexperienced programmers tend to simplify things and swear at bad code. They think that they could design this code better and assume a million things about the code and its authors. I think it's unprofessional.

We need to avoid making assumptions if we have no real evidence. Be cold-minded. Refactor and improve step by step. And don't rush to rewrite huge code bases from scratch: are we sure we won't end up with the same mess, and that we are really smarter than the authors of the legacy code?


Where does bash require a space in place of a tab? I've got 1000s of lines of scripts that start with #!/bin/bash and 100% use tab to indent.


Indented here-strings.


Nope, those require tabs. See first result on Google for "bash indented here string" as well as bash guide.


Yes they do. You must indent the here-string with tabs and not spaces (which sucks for people who prefer spaces). If you want to indent within the here-string, bash requires spaces and not tabs. Mixed spaces and tabs are required to get both kinds of indentation at the same time. A very common use case for this would be help text / usage strings.
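For anyone who hasn't hit this before, a minimal bash sketch of the behaviour being described, using the <<- here-document form (a hypothetical script; the leading whitespace on the indented lines must be literal tab characters, which <<- strips):

#!/bin/bash
# <<- strips leading TABs from each line of the here-document, so the block
# can be indented with tabs to line up with the function body. Indentation
# that should survive in the output (the option descriptions) uses spaces.
usage() {
	cat <<-EOF
	Usage: mytool [-h] [-v]
	  -h   show this help text
	  -v   verbose output
	EOF
}
usage   # prints the text with the leading tabs removed and the spaces kept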


Agreed, but I would phrase it slightly differently.

Moltke the Elder (https://en.wikipedia.org/wiki/Helmuth_von_Moltke_the_Elder) wrote that to understand military strategy you have to understand two basic things:

- No plan survives contact with the enemy

- Strategy is a system of expedients

In other words, you can make all the plans you want, but as soon as you put them into action they have to take into account the behavior of the enemy, which may or may not go as you predicted. And once events start departing from your predictions, gaps in your plan will appear that you will have to plug with whatever you have at hand, because once shots have been fired you can't un-declare war and start over with a new plan that incorporates what you've learned. You're hip-deep in the muck now, and have to struggle through to the other side as best you can.

Something similar could be said about designing software. If no military plan survives contact with the enemy, no software architecture survives contact with actual users.

The architecture of a particular piece of software is, at root, a hypothesis: given problem X, here is how one could go about applying a defined set of computing resources to solve it. And at the beginning those architectures are always clean, because they're being applied at a purely theoretical level where the things we don't really understand about the problem aren't evident yet. And since it's all theoretical, there are no warts; re-drawing the architecture on the whiteboard doesn't inconvenience anyone, so we can do it boldly and often.

But at some point you have to translate that beautiful architecture into working software and put it in front of real people, and that's where the problems start. Because those real people will use the software in ways that surface facets of the problem you didn't appreciate, forcing you to modify it to keep up. And because now making changes means inconveniencing real people and losing actual money instead of just scrubbing off a corner of a whiteboard, those changes will have to be conservative and expedient rather than bold and sweeping. And this is where the warts start creeping in, as you try to drag your original vision into some form that actually fits the real world as quickly and cheaply and non-disruptively as you can.

If the architecture is the hypothesis, the software is the experiment.

And you may think, after running the experiment once, that if you could just start over with a clean sheet of paper armed with what you know now, this time you'd "get it right". But of course the real world isn't static, so by the time you develop a new hypothesis and are ready to run the experiment again, you often find the ground has moved out from under you. Your hypotheses are chasing a moving target, and so the need to patch them up with duct tape and bailing wire never ends.


I'd heard the quote about no plan surviving contact with the enemy but never bothered to look up the source. That is a great analogy, and I think you have restated a nice chunk of my working theory better than I did. Especially that an architecture is a hypothesis.

Under this framework, maybe I can reasonably suggest that software has at least 3 big enemies: customers, management, and the coders. We are one of the enemies, for lots of reasons, but in part because coding-style fashion trends change quickly. One year everyone's using OO, classes with templates, and separating their code and markup; the next year everyone's using composition instead of inheritance, smooshing their code and markup together, and talking about how much better it is than the old ugly way.

People who didn't join a project at its beginning tend both to complain disproportionately about the bad practices and, at the same time, to disproportionately introduce new styles that make the codebase less consistent. After a few of those, it's no wonder that things start to look messy in any sizeable project.


I work on a lot of projects like this. The non-techies running the project are usually clueless, and the programmers are out of touch with modern best practices. The funny thing is that it makes me look slow for not finishing tasks. People in charge want to see features banged out. The guys who write hundred-line if-defs or copy/paste code from one file to another get to go home at 5pm and live their lives. Someone else picks up the tab later. When they need some info from the db, they just add an ajax call and some PHP function to fetch it and use jQuery to change it on the page. I don't even try to refactor this stuff, it just breaks everything, so I go along. I won't even say anything about cleaning it up, because it will be assigned to me while the other guys pile on spaghetti code like there's no tomorrow, and good luck merging that.


I sympathize with this, but perhaps it is really more economical to write software like this in your company's case (?)


It appears that way short term, but long term, a bad codebase can retard a company's growth through lack of scalability and maintainability (read: the ability to easily add/remove/change features). Companies spend a hell of a lot of money on growth. If they don't know how to manage software development, then the software will nullify all that money spent on growing, because the system can't handle it.

I am truly amazed at how bad many US businesses are at managing their software systems. It's as if they are considered an afterthought rather than the department on which every other department depends. It's simply old-fashioned thinking. CEOs are typically older, and the current crop were trained before computers really proliferated in business.


> It's simply old-fashioned thinking. CEOs are typically older, and the current crop were trained before computers really proliferated in business.

It's really not this at all. It's not a generational thing and it's not an old-fashioned thing. The ultimate rule of workplace dynamics is whether your position is seen as a cost center or a profit center. In software, most often your job is treated as a cost center.

My company is the third tech company from a repeat founder who is younger than me (he's 29) and was previously acquired by a huge tech company. He absolutely sees the work I do as a commodity and doesn't understand what I do or what its value is. I've built rock-solid, scalable systems, envisioned and built tools that will bring us more (and better) clients, and saved my company hundreds of thousands of dollars in a year. If I don't fight for the projects I want to do and push back against bad ideas, I get tasked with essentially pointless "tech janitor" work.

I've encountered non-technical executives well into their 60s who deeply understand the value of tech. You just have to find people to work for that aren't idiots.


> The ultimate rule of workplace dynamics is whether your position is seen as a cost center or a profit center.

You are right and this should be one of your top check boxes when looking at a potential employer.


> It appears that way short term, but long term, having a bad codebase can retard a company's growth through lack of scalability and maintainability

Still, it depends on the specifics. Many, many projects get redone/canceled/completely respecified after first contact with the market / customer. Many are also canceled for reasons that are independent of the progress speed. For all these projects, speed-to-market has value, and long-term maintainability has absolutely zero.

The horror arrives when, in a successful project, you realize that you are going to live with the hastily, horribly developed project for some time to come. But these might be the exceptions - depending on how you count, some stats claim 80% of software projects get cancelled before completion (I have no idea how they define that).

It's a balancing act.


>Still, it depends on the specifics.

I agree, the devil is in the details, but intentionally writing hastily thought out software is rarely a good option. Even for a POC or MVP, code is expected to be churned out fast, only to be heavily refactored or rewritten once proven successful, but that never seems to get budgeted and people end up building on top of that.

I always go back to the KISS, DRY and YAGNI principles. They not only save overall development time, even in the short term, but they typically lead to maintainable, scalable, and expandable software.


> I am truly amazed at how bad many US businesses are at managing their software systems.

I'm guessing you'd be equally amazed at how many people are embracing rigorous software engineering principles, and not ever reaching the point of getting a viable business off the ground.

So, perhaps the best way to go about it is to sell "prototypes" at first, and then when the business catches on, hire some CS people to manage the mess.


>I'm guessing you'd be equally amazed at how many people are embracing rigorous software engineering principles, and not ever reaching the point of getting a viable business off the ground.

Yes. It's very strange, the process businesses impose on software teams when the business is really just stabbing in the dark. I think it's because they don't realize software development is really an art rather than a science.

>So, perhaps the best way to go about it is to sell "prototypes" at first

Yes, I think that is a good method. Really everything is a prototype from a mile high view. Word 2.0 from today's view is certainly a prototype. A sellable running prototype.

In my experience, business process means almost nothing in software development, it's the people. Good people are expensive but you will most likely fail without at least a few of them.


What I've learned is that most companies that are not in it for the long haul and just want to exit will pass the technical debt off to a much larger company that can supposedly shoulder the burden. The M&A process at large companies really doesn't look at how sustainable the codebase or infrastructure is - they only look at regulatory liabilities like super-bad security practices, and that's only if you're in a strongly regulated industry.

So, the current incentives mean "bang out code super fast, get rich, and someone else will figure it out." This attitude is a huge part of why so many bad acquisitions by various large technology companies seem to have happened over the past ten years or so, as the VC-owned market has grown so much compared to IPOed companies. Companies like HP, Yahoo, Dell, IBM, etc. are all in varying states of decomposition. Yes, the major tech giants are doing just fine, but their M&A approach seems to be substantially different: they try to take on smaller companies and grow them before they have too much inertia keeping them from being adaptable.

Enterprise integration is a really hard problem that at this point is almost entirely impossible to approach algorithmically or with a technical solution. Sadly, you're not about to impress anyone in a technical interview with how you managed to get a company's horrendous codebase to cleanly integrate with a big behemoth ESB - that doesn't signal anything about your ability to code or work with others, evidently. Worse for the acquirer, the refactoring and painstaking tasks of integration are usually done with an army of hired guns that are very expensive, and the work is non-scalable because it is entirely business-specific.

In the process leading up to the acquisition, many execs burn through their technical staff, or their cap tables are messy and they wind up giving little to the engineers. The resentment alone causes a mass exodus, and those most knowledgeable about the codebase depart, while the execs have legally locked in their own exits to collect plenty of compensation, marking off a successful exit for future investors to read as a positive signal. Golden parachutes and different kinds of leashes hardly help until the next company trying to do the very same thing comes calling, renewing the cycle of rewarding throwaway technology and IP. Then again, in enterprise, 90% of what's being bought is patents and customer bases with high switching costs, plus some vague notion of "alignment" with a marketecture diagram made by someone who hasn't touched or seen anything besides a sales demo in decades, so perhaps perception and suspension of disbelief are all that matter.

In many respects, I view a lot of codebases out there as evidence of the tragedy of the commons - it is a side effect of ignored externalities by every actor. There are so few incentives put into the market to make the cost of software maintenance lower that it's mind-boggling how technology companies can stay in business.


Thanks, that was an insightful reply. It's like when the banks pushed off mortgage risk to the public market before the great recession. Companies accrue massive technical debt (risk), but push it on the buying company who either isn't competent enough to DD the software, or simply doesn't care. In the end, someone has to pay for that negligence, but it's like playing hot potato or musical chairs.


In hindsight, I think the M&A process of these companies is actually correctly aligned with their true costs. For most big enterprise "tech" companies, the biggest operating expense isn't technologists at all - it's sales commissions (stock options are, as a rule, terrible for engineers at every old-hat tech company). So instead of paying $4.2MM to acquire a customer or two, you acquire a tech start-up that already has the customers and the product; people are mostly cogs - the technology itself is an afterthought. For the few companies where engineers are compensated like the sales folks in enterprise tech (about $300k and up), it is now cheaper to acquire technology than to pay in-house engineers to develop it - market fit is not a big deal because the growth model is easy to scale with minimal sales staffing costs (a luxury in business through and through).

As for the question of who pays for the negligence of M&As in the tech sector, it's mostly shareholders rather than the US taxpayer, at least. With HP, IBM, and others laying off employees faster than Macy's and Sears, the negative outlook is baked into Wall Street's prognosis of increasingly lowered expectations.

Myself, I just wish I could slightly tweak index funds to exclude specific tech companies I know are complete garbage long-term (similar to cable unbundling trends). I know Vanguard probably won't do it for me but maybe the transaction costs will be low enough that excluding the junk companies that literally only exist on an index for being big and being a market leader is a net win.


Get short positions individually


Maybe. Usually I'm hired to help out later in the project, when the original developers can't handle the ever increasing number of bugs and edge-cases.


This. You get hired, you look at the code, you point out where it sucks, the team hates you, they gripe about you to management...

It's tricky managing developers. They are smart and defensive.


Focusing on poor metrics such as “issues closed” or “commits per day”

I once worked at a company that used an offshore company to work on certain modules of a large project. They would commit code that my team would have to then code review. We'd see things like large IF-ELSE blocks with minor differences among the conditions (sometimes just one char). I know everyone hates this.

Turns out their internal metric was lines of code per day, so they'd bloat the shit out of everything.
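A hypothetical Python illustration of the kind of padding that metric invites (an invented example, not the actual offshore code):

def discount_padded(tier):
    # one near-identical branch per tier inflates the line count nicely
    if tier == "a":
        rate = 0.10
        return rate
    elif tier == "b":
        rate = 0.20
        return rate
    elif tier == "c":
        rate = 0.30
        return rate
    else:
        rate = 0.00
        return rate

def discount(tier):
    # the same behaviour once the incentive to inflate LOC is gone
    return {"a": 0.10, "b": 0.20, "c": 0.30}.get(tier, 0.00)

print(discount_padded("b") == discount("b"))  # True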


There's an old quote I read and I keep it with me. It was for CEOs and it says, "You get what you incentivize." The hardest part of managing a group of people is incentivizing exactly what you want, yet so many people don't spend an ounce of thought tuning that properly. There are other people who believe process will fix everything, yet don't bother tuning their process.

Many companies have fallen because the CEOs incentivize net profit without incentivizing quality, i.e. gut the company, get your bonus, and get out.


Code should be reviewed, preferably by a different group of people than those who wrote it; metrics are only useful for people to manage themselves. Upon review there should be immediate feedback to the people who wrote it, and if they continue to make the same errors, then you should eventually get rid of them. Making up stupid systems of control to "incentivize" people as a method of management is the stupidest thing rationalists ever cooked up. It destroys internal motivation and discipline.


That's what I had to do. My manager told me of this metric (set by the higher ups in a contract with the offshore company) along with "there's nothing we can do." I essentially became the code repository sheriff. Eventually I was spending too much of my time refactoring their code and not enough on my own. Then deadline creep set in for me and I had to just let it go.


>is the stupidest thing rationalists ever cooked up.

Oh, you don't like bonuses then? Unsolicited, forced code review doesn't have much value that I've found, unless your team is fairly junior. Senior guys know what good code looks like, even in a crunch. If you hold forced, unsolicited code reviews with senior developers, you are really just throwing away money and aggravating people.

It's just another half thought out process. A better incentive, in my opinion is: if you release a complete codebase with 0 medium and above defects by X date, you get a free week off, or something of significant value. A free, company branded desk clock doesn't cut it. (that's happened to me before).

If a company is trying to save money by hiring the cheapest offshore developers they can, they are still really just throwing away money and aggravating people. lol. That is a perfect example of a poor incentive: cut costs without any regard to the resulting cuts in quality. Anyone can make a turd cheaply, but that's rarely what businesses really want.


A reviewer soon understands that senior devs are producing good code and stops investing time in reviewing their code - if they are doing their job properly. But again it comes down to internal motivation and self-management on their part.


Yes, sorry I should have been more explicit. Initial reviews have value (is this guy full of beans or does he know what he's doing), but constant reviews of senior developer code do not. If you've hired anyone you have to constantly do reviews for, you've probably hired the wrong person. I mean they should get what you are looking for after a month or two.


I know exactly zero developers senior enough to have gotten over writing bugs. I know I've written bugs that happened to pass all my tests only to blow up in somebody else's face, sometimes after surviving years of production use.

Code review would've at least had a chance to catch them early—so long as you treat it as critical analysis of program logic and not just a screen for generic goodness.

IMO if your code reviews are glorified manual style checking, you're doing it wrong.


>I know exactly zero developers senior enough to have gotten over writing bugs.

No one was arguing that. Code reviews aren't for bug discovery, though that can sometimes happen. It's a very inefficient way to discover bugs.

Developers should test their code for bugs before completing the task and submitting to QA. QA is more thorough than development bug/unit testing and includes integration. Betas (if applicable) are the final source of pre-release bug discovery. Spending a bunch of time on code reviews to find bugs is an inefficient use of everyone's time.

IMO code reviews should do two things:

1. Make sure the developer knows how to develop, is not being sloppy, and is following whatever standards are set (comments, etc). They are essentially training wheels for developers who are new to the organization and for junior developers.

2. Make sure the code represents what the developer thinks it does (sanity check).


You do need to make sure the changes follow the "philosophy" of whatever component is being changed. Otherwise you get things like duplicated functionality or APIs that don't have any cohesion. If the senior engineer frequently works on this specific piece then it's not an issue, but no matter how senior, someone new to an area will not produce optimal code.


what about if you reverse the incentives...

Product Managers get bonuses related inversely to hours of downtime (caused by bugs).

Engineers get bonuses related to number of features pushed out.

In theory, this would encourage engineers to ship as fast as possible whilst encouraging PMs to ensure quality over quantity wrt feature scope/volume. This way you'd have engineers begging to add features and PMs begging for tests.

What am I missing - how can this be gamed?


What you cause is stress and tension between the two groups. They start to hate each other and blame each other. "That's not a bug - that feature was never in the spec"


The PMs will have an incentive to shift the blame around ("oh it worked fine when we tested it, it must be the fault of the database, not our bug"), while you'll only get "10x" engineers who turn your codebase into a flaming turd that's unmaintainable over the long term.


How do you check up on that reviewer? In your system, he has the power to fire people and to decide all standards, code style, and architecture by himself and force that upon everyone else. All that with no accountability or responsibility.


At our company we focus on intrinsic rewards rather than extrinsic rewards. We want people to feel good about the work they do, giving them the space and time to write quality code. We don't track any metrics such as commits or loc, heck we don't even track hours worked. We are much more interested in everyone, especially coders, focusing on what value their work brings to the end users. And how that syncs with the company's vision. During code reviews one of my favourite things to see is deleted code!


You don't track hours worked? Do you have more or less fixed working hours? Otherwise how do you stop people staying a long time to impress the boss?



Honestly, I think the best developers are not the ones that write beautiful code and put quality above everything else. The best ones are the ones that can push out a solution given too little time and a (maybe self-inflicted) bad code base. Because that's real life, and not the pony farm. Money trumps everything else in capitalism. Getting money means paying your bills today. And even more so than skill, quality, and code, getting money requires kowtowing and overpromising to someone who currently has money.

Second comes qualifying for the money (yes, in the real world you can get money without having what you are selling, but that's a short-term success). That means getting to ask for money again tomorrow. That is where skill and code come into play. You promised something, and now, if you are lucky, you can deliver 80% of that. Push hard to get there. Not everybody who tries will achieve that much.

And only then comes quality, which means qualifying for money next year becomes cheaper.

If you realize LIFE is that way, suddenly high quality isn't all that important anymore, is it? Quality is a luxury, and you must have achieved A LOT to be in a position to just think about quality. And the other people (management, sales) are not total idiots. While you dream the high-quality dream, they are the people who pay your bills.

I am an engineer like you, btw. I have just already been in the situation where I had to pay my bills myself, and therefore I know what amount of humiliation and sweat is required to get money for just ONE person: me.


I mostly agree with your top line. Code "beauty" and "quality" are not objective measures, and there are plenty of devs out there who are obsessed with their own interpretation of those ideals to the point that it inhibits their ability to ship.

However there is an existential danger for management to embrace this philosophy. The problem is the incentives are already naturally aligned to ship today, and problems discovered tomorrow are likely to fall on the shoulders of someone other than the original author. If there is no institutional value towards maintainability, then the code base will get worse and worse until it grows beyond the cognitive ability of anyone to ship anything without negative ROI. By the time that happens the cost to fix may be more than the company can afford.


The funny thing though is that tomorrow will resolve itself by itself. If the next guy finds the code too unworkable it will get replaced or the company dies out. Either event is not really a problem. Somewhere money is found to pay the replacement, people find new jobs, new companies take up space opened by other companies dying.

Everything else comes from the need to pay the bills. What does your landlord say if you don't pay your rent? What do you say if your boss doesn't pay you? It's not a management philosophy it's people trying to pay you and themselves. And even that may fail one day. You're probably doing the same, when you are like most people going to work every day despite having other quality ways to spend your time, e.g., reading a great book.


Sure, in the metaphysical sense does anything really matter?

But as a business owner, you don't want your business to die and be replaced by someone else's business due to technical debt. Just because it's hard to quantify the cost of technical debt doesn't mean it isn't real and we should just throw up our hands in defeat.

And lest you think I'm some sort of artisanal code hipster, I spent years building freelance custom web apps in the $1k-$20k budget range, so I know how to fucking ship. Also, I've been the tech lead on a Rails monolith powering a 7-figure revenue startup for 10 years, taking it from Rails 1.2 through Rails 4.2 one version at a time.

It's impossible to be sure how code rot will happen and how business logic will change, but having some instincts can save a ton of money down the line. I've received thank-you notes for architectural decisions and commit messages I wrote years ago and had long since forgotten. Not all businesses will be able to recognize that such value even exists, and the ones that don't are likely to be a shit show that will never be able to attract the upper echelon of developers.

The right attitude and experience about code quality can be a huge competitive advantage.


> Money trumps everything else in capitalism. Getting money means paying your bills today. And even more so than skill, quality and code getting money requires kowtowing and overpromising to someone who currently has money.

That has little to do with what makes good development and a lot more to do with capitalism, business, and markets. What this really means is that current business environments don't necessarily value development, but that doesn't change the definition of development. At this point, you're not describing a good developer but a person who can hybridize development and business sense. It's a different skillset, perhaps one that's somewhat contradictory to high-quality development, even.

There are situations in which there is no money at all, yet you absolutely need good development skills and they are distinguishable (open source). Any given person can be devoted to selling, whether they're coding or weaving baskets, but that doesn't suddenly mean that a core factor of weaving baskets is being able to sell them. It's not. That's a different skill set entirely. It may often not be found side by side.

It's a shame that we are so obsessed with money and selling right now that pure skills seem to have little value to some people, despite the fact that many crucial things nonetheless run off of these core skills...

This argument is equivalent to saying that only applied science has value.


All you say is true. And yet, if you don't get paid for doing open source you're probably spending more time of the week worrying about money than good code.

What do you think about the argument that once you have handled the money topic well enough, then you can worry about quality. I.e. first you make a basket business that earns enough money so you don't ever have to work again. Then you try to improve the process and result of making baskets as a hobby.


> The best ones are the ones that can push out a solution given too little time and given a (maybe self inflicted) bad code base.

If every open source project was developed by those principles, all high and mighty web companies would collapse.

Good work is being done, but it's for free.

EDIT: You're right of course about the reality in the parasitic companies.


Sorry to disappoint. But despite the rare exception here and there, most open source development is either paid or quickly forgotten. If I put "Linus Torvalds income" into Google, the first sentence reads:

"Finnish-American software engineer and hacker Linus Torvalds has as estimated net worth of $150 million and an estimated annual salary of $10 million"

Money.


https://pbs.twimg.com/media/B4Xo3CdCcAA2xm1.jpg

He has about $20 million[1], plus circa $90k a year from the Linux Foundation [2].

[1] https://www.quora.com/How-rich-is-Linus-Torvalds [2] That he gets a salary from the Linux Foundation: https://www.engadget.com/2007/01/22/the-linux-foundation-for... (doesn't say the amount). I cannot find the exact figure for his salary now, but I read it somewhere.


The man's name is "Linus" and outliers aren't good examples. Also, he has mostly been merging other people's work for quite some time.

CPython for example is mostly developed for free.


lol, it's true that most development in open source is paid. It's really easy to experience: just work anywhere where people develop open source or make heavy use of it. But it's really hard to prove to someone who hasn't experienced it, since nearly every example and statistic can be doubted. So doubt as much as you want. But if you think there's something worth learning out there, try to experience it and you'll see it's >90% paid work.


It is clear that you have no clue. Probably you think that the existence of "foundations" means that money arrives at the people doing the actual work.

I on the other hand work on a very large open source project.


One thing I've been thinking about more and more is code that's easy to delete.

For example, we recently built a data pipeline that did a bunch of processing and wrote data to a SQL database at the end. For various reasons, there was an unscalable, quick way to implement the write, and a scalable, slow way to do it. We wanted to get the product to testing ASAP, so we chose the quick way initially.

In order to make sure that we could easily replace that code, we ended up creating a separate write function for each table, where the function did nothing else except the write. That involved a lot of duplication, but made it easy to move the tables over to a better method one by one later.

It seems like having functions with one purpose, pure if possible, is a pretty good way to ensure "upgradeable" code–even if the internals of the function are messy, you just have to write a new one that copies the same functionality. Furthermore, I've found single responsibility functions to be easier to enforce in code reviews than single responsibility classes.
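Something like that shape, for example - a hypothetical Python sketch (assuming a psycopg2-style DB-API connection, names invented), with one narrow write function per table so each can be swapped for a scalable implementation without touching the processing code upstream:

def write_users(rows, conn):
    # quick, unscalable version: row-by-row inserts
    with conn.cursor() as cur:
        for r in rows:
            cur.execute("INSERT INTO users (id, name) VALUES (%s, %s)",
                        (r["id"], r["name"]))
    conn.commit()

def write_orders(rows, conn):
    # deliberately duplicated; replacing this later (say, with a bulk COPY)
    # touches exactly one function and nothing upstream
    with conn.cursor() as cur:
        for r in rows:
            cur.execute("INSERT INTO orders (id, total) VALUES (%s, %s)",
                        (r["id"], r["total"]))
    conn.commit()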


I would agree with that goal, and I'd extend it with a different anecdote: the article's "project knowledge" section should include documenting the weird requirements and unusual interfaces to other projects.

Many times I've run into five-year-old program logic, pondered why in the world anyone (me) would have done something that weird, and realized the project or tool requiring that weirdness was cancelled three years ago, long enough that I've forgotten about it.

It's not always as simple as baking it into the single function that talks to the API; sometimes it gets baked into weird corners of the application logic.


What you are describing seems very similar to the idea in this talk [0] that I saw mentioned in the other thread on HN. Basically, the speaker argues that any codebase has a tendency to become a big bloated mess. The proposed solution is writing highly modular code that you can easily rewrite from scratch in a week.

[0] https://vimeo.com/108441214


This is great! Much more fleshed out than my post.


I mean, that's the single responsibility principle, right? Classes should do one thing and one thing only, and then your methods within the class should do one thing and one thing only. If I'm writing a list adapter for Android, it should only be a list adapter. It takes its data, interprets it and converts it for presentation. It doesn't also do data manipulation; that's handed off to another layer.

It means increased modularity.
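
Sketched outside Android, the same split might look like this (the names are made up; the point is just the separation):

    # Hypothetical sketch of the same split: the presenter only formats
    # already-prepared data for display; filtering/aggregation lives in
    # a separate layer.
    def summarise_orders(orders):
        # Data-manipulation layer: one job, computing totals.
        return [
            {"customer": o["customer"], "total": sum(o["line_totals"])}
            for o in orders
        ]

    class OrderListPresenter:
        # Presentation layer: one job, turning summaries into rows.
        def __init__(self, summaries):
            self.summaries = summaries

        def rows(self):
            return [f'{s["customer"]}: ${s["total"]:.2f}' for s in self.summaries]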


* Giving excessive importance to estimates

Seen this one. It was used as an excuse not to do code reviews, because they caused estimates to be missed. "Look, the feature is done, but because of the code review requirement I cannot mark it 'done' in the project plan." Solution: abolish code reviews.

* Assuming that good process fixes bad people

This one is a big one. It's everywhere. Especially in big software development companies, such as Microsoft and Google. They tend to believe that once they institute a perfect process, everything works out perfectly. Perfect coding guidelines lead to perfect code, no matter who writes it. Perfect testing process - testing can be done by monkeys. Perfect project management process - now we can hire project managers with just basic Microsoft Excel skills. They don't understand that without actual talent the company enters a "spiral of death" which is impossible to escape.

* Ignoring proven practices such as code reviews and unit testing

This is done frequently by people who never tried such techniques as code reviews and unit testing. If you do it consistently through, say, one release cycle, you start to value those techniques and understand their importance.

Unit testing, for example, helps me avoid painfully debugging complex issues in production. All the features that I unit tested usually just work when integrated into the rest of the product. In fact, the last bug I had to fix happened in code that I neglected to unit test, because the test setup was too complex for that component (in itself an indirect sign of a problem). Unit tests also lead to components usable independently of each other, thus reducing the overall system coupling. (A minimal sketch of what I mean is at the end of this comment.)

* Hiring developers with no “people” skills

This is a double-edged sword. On one hand, a developer who can't communicate well will eventually produce code that doesn't do what's intended. On the other hand, there are people with too many "people" skills who can't code shit. They just bullshit their way through. I'd say there are too many such bullshitters. A lack of communication skills in a developer is a problem fairly isolated to that developer. A lack of coding skills in a bullshitter is a much bigger problem that affects many people around him.
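
A minimal sketch of what I mean by unit testing a component in isolation (the pricing function and its rules are made up):

    # The component under test is a small pure function, so the test
    # needs no elaborate setup and failures surface here rather than in
    # production. Names and rules are hypothetical.
    import unittest

    def apply_discount(total, percent):
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(total * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(200.0, 25), 150.0)

        def test_rejects_nonsense_percent(self):
            with self.assertRaises(ValueError):
                apply_discount(200.0, 150)

    if __name__ == "__main__":
        unittest.main()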


All good examples (such as giving excessive importance to deadlines, a big no-no).

My question: can they be truly good programmers if they write bad code? Isn't the essential product of a programmer SLOC and functionality via software? And if they deliver that in a manner inferior to another dev, isn't that objectively a measure of inferiority in their craft?

I've met far too many 'good' programmers who were a net detriment to a project not to be wary of the term.


Given insufficient time, a good developer will either produce bad code or no code at all (he will resign from the company). This is extremely common. The stuff the author said about too much emphasis on deadlines is spot-on.

Some developers who code really fast might appear like they're being extremely productive but behind the scenes, you end up having a whole team of developers who are just fixing that developer's bugs.

For example, if a lead developer doesn't choose the right framework or plan/design the API correctly, then the consequences of that will keep piling up over time.

It's really easy to put the blame on people who are doing the 'small work' but in reality, they might be doing the best work possible under the terrible constraints imposed on them.


> Some developers who code really fast might appear like they're being extremely productive but behind the scenes, you end up having a whole team of developers who are just fixing that developer's bugs.

This is very prevalent at my job. There are a couple of developers who have a reputation for being really fast. "Wow, he closed 25 tickets in the last hour!". In reality he mainly rejected them or made quick fixes which didn't work or created new bugs. A not insignificant amount of my time on a recent project went into rewriting code said developer had written.


Yes and no. Crazy, made-up deadlines will throw even the best developer off. They want to, first and foremost, get it done. If everything else is irrelevant, then that's what you will get. Of course that ends up being bad for everybody, but many businesses still don't have a clue how to operate their most important department. Sometimes you have a hard deadline that you just can't help, like a mission-critical bug, or the VP of sales promised your most important client an impossible date and there are fines involved.

I'm a firm believer of setting aside time for refactoring every X number of releases. That solves a lot of this.


You know, it depends...

Impossible and strict deadlines simply because of cost control is shooting yourself in the foot.

Impossible deadlines because of a really important deal worth actual money are probably worth sweating over, at the risk of code quality. If all projects tend to be the latter case, it's probably actually the first case packaged as the second...


Sure, sometimes it's just a matter of survival. Get this code released by this date or we sink. That's an unfortunate place to be in, requires a lot of focus under stress and is rarely rewarded. It's gratifying for me to be successful in those circumstances, but too many of them and I have to find a saner place to work. Management starts to take miracles as the norm by planning them (if they even do) and it leads to burnout.


Even good developers have to initially poke at a problem to get it in their head. Once there, then they can write beautiful code to solve it. Therefore, if time is constrained, even good developers will have to ship ugly (but working) code.


So true. A 'good' programmer who commits terrible code is a bad programmer. In other words, one trait of a senior developer is knowing how to pick your battles.

If you can't do good work, you have to get out.


I think this is true at the earlier stages of one's career. Once you get older and have to start feeding a family, the idealism wears off and you start finding yourself compromising your morals and coding shit to make a deadline. But the company you work for pays you so well, and coupled with the expert domain knowledge that makes you nearly irreplaceable, you become complacent. You are a fixed cog in the system. At this point "get out" is not in your best interests.


No one said it was easy...


a good programmer should be able to identify bad companies to work for, which could solve many of the issues described in the article. it's much like choosing a good life partner - choosing a good one (instead of a crazy one) will save you from many troubles that you would encounter if you married the wrong person in the first place.

ps: i need to tell this to myself more. i already made bad choices regarding where i work, which forced me to quit those jobs in the end..

(edit: spacing)


I think a lot of this sentiment overlooks the fact that in most cities the number of software companies is very limited - perhaps allowing one switch in a career if there are huge problems or conflicts, but certainly not enough to allow switching due to e.g poor development practices. What's worse, in these places the poor devs tend to stick around while talented ones leave because they either accept moving or they can find remote work due to good connections or nice SO/GH profiles.


you are removing a lot of personal responsibility from the guy who is looking for the job. If you are born in a 'bad' place (regardless of the reason for it being 'bad' - a violent/poor neighborhood, a city without hope for jobs, a small country-side town with no tech industry at all, etc. etc.) - you should be responsible for improving your life/moving to a better city/etc.

to analogize from soccer: Messi plays at Barcelona but is from Argentina. Ronaldo plays at Real Madrid but is from Portugal. if Ronaldo had stayed in Portugal, which teams would he be able to play for that would even come close to matching Real Madrid? same goes for Messi and Barcelona. he could have said something like 'i was born in Argentina, and no one here plays soccer like Barcelona/Real Madrid. therefore - i will not play soccer because i was born in the wrong place'


I think a lot of what you are saying makes sense, but also at some age and family status (kids in schools etc) moving doesn't make sense any more, or has more downsides than one is trying to avoid.

Obviously one could have thought of this before marrying and putting kids in school in the town with one tech firm - but that doesn't make it any less a reality for a lot of people.

Saying "kids we have to move because dads colleagues refuse to do proper peer reviews" just doesn't taste right :)


> Obviously one could have thought of this before marrying and putting kids in school in the town with one tech firm

this was going to be my response, but then you said it yourself :)

The more intelligent and self-responsible act (which is hard, i know) would be to move to a better place before making extremely serious life choices such as marrying and having kids. same goes for finding a partner - don't marry a crazy partner and then say 'I married a crazy partner and now we have 2 mutual children. i cannot leave, i am stuck with this crazy person who is also the parent of my children.'

and i know it is hard to find a non-'crazy' employer and a non-'crazy' partner - but this should be your goal and target, shouldn't it?

also, you get better as time goes by, even if you don't want to (as long as you are a bit intelligent) - bad companies/life partners will present themselves as red flags on an interview/date, and it is your responsibility to detect those red flags.

(edit: refactor the last paragraph)


I guess what I'm saying is that a) the job market is only one factor in deciding where to live. A partner's job situation, access to other things in life, being close to family etc. often weigh in, and are often even more important than other factors. Obviously after such a choice one shouldn't be whining about a limited job market - but the point is that only improving things by leaving bad companies is not always the solution. For many it's important to be able to change bad culture as well.

aspect b) is that company environments change, and the fun startup can become a terrible enterprise in a decade.

I wouldn't advise against working in tech in small towns because of the risk of getting stuck in the only gig in town. I would however advise that it be factored into that career decision. One might need to fight to improve company culture, whereas in Silicon Valley one would instead take a job across the street. I'd also recommend keeping an active contact network and online profile so you can get remote work should it be necessary.


> If you are born in a 'bad' place (regardless of the reason for it being 'bad' - a violent/poor neighborhood, a city without hope for jobs, a small country-side town with no tech industry at all, etc. etc.) - you should be responsible for improving your life/moving to a better city/etc.

Most people born in bad places can't get out of them in large part because they're bad, since the effects of that propagate through. Why should the responsibility for a bad place, and subsequently improving it or getting out of it be placed on a person who wasn't the one to make it bad?

People who can get out of bad places are the lucky ones.


Given that "bad programmer" is a loaded term, I really don't think it's a good idea to expand it so much.


Imagine you have a sales team. Initially you tell them: sell $10,000 in 1 week. Most of them will go through the traditional selling process. One of them will take out a loan on behalf of the company for $10,000 and hand you the money right away.

If you are smart, you will say that you are not interested in having that money if it comes from a loan. If you are not smart, you will say: "wow, this guy is a 10x salesman, we will give him a bonus". Meanwhile the company takes on debt beyond any possibility of repaying it.

Then, when the company is about to die from debt, they declare bankruptcy and start over, or sell themselves to be acquired.

Now, imagine it's not salesmen, but software engineers, and it's not actual debt, but technical debt, and it's not bankruptcy but starting your project again.


The other way terrible code gets written by good developers is by focusing on delivering working code that meets the business needs rather than elegant code that meets the needs of future maintainers.

Ideal code meets both. I'm sure we all agree on that. But when working to deadlines, under pressure, with poor management, you sometimes write bad code. And if the code is 15 years old, that great engineer you are talking about today was, at the time, an inexperienced new coder. So you didn't inherit his greatest work. You inherited the embarrassing mistakes he made along the way.

I suspect the original authors of the terrible code in question could give much deeper insights into exactly how and why it was done that way.


Perhaps the team planned a rewrite long ago and stopped worrying about the quality of their code, which seemed obsolete the moment it was written. Over the years, however, there was always something more important than the "soon-to-be" rewrite.


Converting Python to Node.js seems terrible in itself. Converting to Go might be useful if you need more performance or scalability.


[citation needed]


> When I found out I would be working on porting an old Python codebase to Node

"Out of the frying pan and into the fire" is not a programming direction I would recommend. (I don't think I've met a single Node developer who isn't bitching about it.)


A number of points in the post/article are questionable.

First, it assumes the developers had substantial control over the schedule for the project ("Giving excessive importance to estimates"). Certainly in my experience this is unusual. More frequently, the schedule is dictated by management, often by sales/marketing executives in commercial software development. It is very difficult to push back, and a good way to lose your job.

Sales: We have closed this great deal with BigCorp. Can you do X (complicated, challenging software project) by the end of the quarter?

Developers: Err, um, X sounds like a project that will take six months.

Sales: We really need to make our quarterly numbers. Our CEO Bob used to be a developer and he says any competent programmer can do it and we only hire the best. Competent doesn’t cut it here! You are a rockstar ninja, aren’t you? Can you prove you can’t do it by the end of the quarter?

Developers: Well, no. Schedules usually slip because of some unexpected problem or problems. But, well, if nothing unexpected happens, we can do it by the end of the quarter.

Sales: Great! Bob is expecting results by the end of the quarter.

So much for the beautiful, elegant software design methodologies taught in college and university CS programs and peddled by high priced consultants.

Second (“Giving no importance to project knowledge”), high technology employers seem to have extremely high turnover rates of software developers and other employees. Payscale produced a study claiming that the average employee tenure at Amazon and Google is only one year. Many companies seem to target employees with more than seven years of paid work experience — Logan’s Run style — for layoffs and “constructive discharge” (https://en.wikipedia.org/wiki/Constructive_dismissal), where employees are made uncomfortable and quit “voluntarily.” Undoubtedly, this is costly, as the author implies, but it seems to be common practice.

Yes, metrics like “issues closed,” “commits per day,” or “lines of code” don’t work very well. Once employees realize they are being tracked and evaluated on some metric, they have a strong motivation to figure out how to manipulate the metric. Even if the employees don’t try to manipulate the metrics, the metrics all have serious weaknesses and map imperfectly to value added (biz speak).

Third, are code reviews and unit testing proven processes especially for normal non-Microsoft companies? In the early days of Test Driven Development (TDD), Kent Beck and his colleagues made numerous claims about the success of Test Driven Development in the Chrysler Comprehensive Compensation System (C3) payroll project, an attempt to create a unified company wide payroll system for Chrysler. This project in fact had a range of problems and was eventually cancelled by Chrysler in 2000, without replacing the Chrysler payroll systems successfully.

As the problems with C3 have become well documented and well known, TDD enthusiasts have shifted to citing studies at Microsoft and some other gigantic companies that claim practices like TDD and code reviews work well. Are these really true or do these case studies have hidden issues as C3 did?

Further, Microsoft, Google, and other companies that have played a big role in promoting these practices are very unusual companies, phenomenally successful super-unicorns with sales in the range of 40-100 billion (with a B) dollars with near monopoly positions and anomalously high revenues and frequently profits per employee. Microsoft claims to have revenues of $732,224 per employee. Google claims an astonishing $1,154,896 per employee. (http://www.businessinsider.com/top-tech-companies-revenue-pe...) This compares to $100-200,000 per employee for most successful companies.

Fergus Henderson at Google recently published an article on Google’s software engineering practices (https://arxiv.org/abs/1702.01715) with the following statements:

2.11. Frequent rewrites

Most software at Google gets rewritten every few years.

This may seem incredibly costly. Indeed, it does consume a large fraction of Google’s resources.

Note: “incredibly costly”

Companies like Microsoft and Google have enormous resources including monopoly power and can follow practices that are extremely costly and inefficient, which may work for them. Even if these practices are quite harmful, they have the resources to succeed nonetheless — at least for the immediate future, the next five years.

From a business point of view, it may even be in the interests of Microsoft, Google, and other giant near monopolies to promote software development practices that smaller competitors and potential competitors simply can’t afford and that will bankrupt them if adopted.

Both code reviews and unit tests are clearly time consuming up front. Code reviews using tools like Google’s Gerrit or Phabricator (a spin-off from Facebook, another super-unicorn) are committee meetings on every line of code.

Regarding:

Imagine my dismay when I had to collaborate with a colleague on that legacy project and his screen displayed Notepad in its full glory. Using “search” to find methods might have been rad back in the nineties, but these days, refraining from using tools such as modern IDEs, version control and code inspection will set you back tremendously. They are now absolutely required for projects of any size.

Using “search” to find methods was not rad back in the 1990’s. IDE’s and code browsers specifically have been in widespread use since the 1980’s. Turbo Pascal (https://en.wikipedia.org/wiki/Turbo_Pascal) was introduced in 1983 and featured a fully functional IDE, soon to be followed by IDE’s in many other products. Version control dates back at least to SCCS (https://en.wikipedia.org/wiki/Source_Code_Control_System) which was released in 1972. RCS was released in 1981 and version control was common in the 1980s and since.

Code reviews have been around for a long time. However, in the 1990’s and earlier they were restricted to relatively special projects such as the Space Shuttle avionics, where very high levels of safety and reliability, far beyond most commercial software, were required. This speaks to the “incredibly costly” quote about Google above.

Without more context, it is difficult to evaluate the use of Notepad. Simple code/text editors like Notepad and vim (formerly vi) are very fast to start up and can be a better option for some quick projects than starting an IDE.

Some IDE’s are particularly hard to use. Early versions of Apple’s Xcode circa 2010 were particularly difficult to use in practice; it has improved somewhat in the current releases.

People vary significantly. Some developers seem to find stripped down tools like vim or Notepad or Notepad++ (on Windows) a better option than complicated IDE’s. I am more of an emacs or IDE person.

The fact that someone else works differently than you do does not mean they are worse (or better) than you. The fact that something works well for someone else also does not mean it will work well for you — or vice versa.

There are sound reasons for duplicating code, cutting and pasting, rather than creating a function or object called from several locations in the code. If the developer anticipates that the code may subsequently diverge, then duplication is often best.

Like grand master chess players, highly experienced developers, especially under tight time constraints (like a chess tournament), code by intuition, not by laboriously reasoning out every step. If it feels like the code is likely to diverge in the future, duplicate. If it does not diverge, no problem, it can be merged back later if needed.
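
A contrived illustration of the point (the report formats are invented): two functions that are currently identical but kept separate because they are expected to diverge.

    # Contrived example: duplicated on purpose because the two formats
    # are expected to diverge (say, invoices will grow tax handling and
    # quotes will not). If they never diverge, merging them later is cheap.
    def format_invoice_line(item):
        return f'{item["name"]}: {item["qty"]} x {item["price"]:.2f}'

    def format_quote_line(item):
        # Currently identical to the invoice version; kept separate anyway.
        return f'{item["name"]}: {item["qty"]} x {item["price"]:.2f}'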

In the bad old days of structured design (1980’s) and object-oriented design (OOD — 1990s), software development projects suffered from Big Design Up Front (BDUF), grandiose attempts to design a perfect software system before writing a line of code. This often resulted in massive cost and schedule overruns and total failures. It often proves better to just throw (“hack”) something together quickly — a prototype, proof of concept, Minimum Viable Product (MVP). Just “get something working.”

Inevitably these prototypes and early stage software projects are going to compare poorly to some theoretical perfectly designed system with 20-20 hindsight. That is what seduced people into BDUF twenty, thirty years ago.

Modern Agile software development methodologies are foolishly trying to have it both ways, have an initial quick iteration BUT that first iteration should be perfectly designed up front — beautiful, elegant, with hundreds of tests, endless committee meetings on coding style and design (code reviews), all sorts of supposed best practices, no code duplication, etc. This is a seductive fantasy doomed to fail in most cases.


I sort of lost the will to read further at this point:

"An important component of this project was the focus on deadlines, even to the detriment of code quality. If your developers have to focus on delivering rather than on writing good code, they will eventually have to compensate to make you happy. "

AAAAAAAARGH!!! Seriously? What do you think you're there for?

Good code can be a means but never an end. Of course you're there to deliver. And of course it's more important to do that than to write beautiful code.

Give me strength.


"The authors created their own framework..."

I cannot say this was an issue here, but I can say that some of the worst messes I have seen have followed from this decision. In all such cases in my experience, it was not a technically justifiable decision, and I strongly suspect it was driven by developer ego and overconfidence.

The very worst included a roll-your-own language.


Eh, DSL scripting languages have a right to exist.


If someone was suggesting otherwise, you might have a point worth raising.

But now that you have introduced this non-sequitur, I will freely assert that our rights trump those of bad DSLs to exist. I suspect that if you had seen the specific case I am referring to, you would agree.


We've all done it though.


The key word is "deadline". Horrible code that works beats beautiful code that has not been thoroughly tested.


Along those same lines, put many developers in a room and they end up converging on "the one true architecture" and nothing ever ships, but you have this monstrosity with some very beautiful code in it. Put a few developers in a room and you get decent code and a solution to the problem that actually ships.


I work with a team that managed to write the code he describes in a little over three years. It's really just chaos and I only trust a few of them. So why do I keep talking myself out of looking for a new job? Hell if I know.


"mixed spaces/tabs for indentation"

If that's the second thing in the list of problems, the problems aren't nearly so bad or the author's got a savage case of mixing in the trivial with the important.


I'd say it's like smelling smoke. If your team can't even agree on a standard indent and stick to it, and you haven't got a linter which picks it up, what else are they missing?
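
And the check itself is trivial; a throwaway sketch along these lines (paths passed on the command line, policy simplified) would be enough to catch it in CI:

    # Throwaway sketch of the kind of check meant above: flag lines whose
    # indentation mixes tabs and spaces. Paths and policy are simplified.
    import sys

    def mixed_indent_lines(path):
        bad = []
        with open(path, encoding="utf-8", errors="replace") as f:
            for lineno, line in enumerate(f, start=1):
                indent = line[: len(line) - len(line.lstrip(" \t"))]
                if " " in indent and "\t" in indent:
                    bad.append(lineno)
        return bad

    if __name__ == "__main__":
        failed = False
        for path in sys.argv[1:]:
            for lineno in mixed_indent_lines(path):
                print(f"{path}:{lineno}: mixed tabs and spaces in indentation")
                failed = True
        sys.exit(1 if failed else 0)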


I had a quick look at your commentary; it's great: the whole issue of rushing code and debugging forever, also a nice allusion to anti-patterns :) I have bookmarked it and will read it properly very soon, maybe


I know how. Cause I just wanna get something done!

Then later I come back and rewrite it with a dose of patience. Then my code is much better.


Mirror?



This is a fucking arrogant article. I would love to see this blogger try to maintain a large codebase after several development cycles in the real world. To come into someone else's codebase that has umpteen number of iterations and tough business realities like trying to make money, and proclaiming that it's terrible code and how he would fix it, is delusional and self-aggrandizing.

Most production code needs to be revamped every few years because of the subsequent unforeseen functionality forced in by product managers and customers. That code usually goes against the grain of the original code, but you can't blame the original designers because it was never spec'ed out. The best code is the code that is easiest to manipulate and modify, but even then it gets old and needs a rewrite. There's nothing wrong with that, except thinking you can comment on the shitty code, thinking you're above it, and claiming you would never let it happen.


> This is a fucking arrogant article.

Please don't. Even if you're right, it steers discussion in a ranty direction.


> The best code is the code that is easiest to manipulate and modify

this great (IMO) article comes to mind: http://programmingisterrible.com/post/139222674273/write-cod... :

"Write code that is easy to delete, not easy to extend"


Excellent article, thanks.


> To come into someone else's codebase that has umpteen number of iterations and tough business realities like trying to make money, and proclaiming that it's terrible code and how he would fix it, is delusional and self-aggrandizing.

And yet that is exactly the reality one faces in such projects (because the only valid code metric is "WTFs/minute" [1]). Point is, I've never heard anybody complaining about arrogance until now - usually the devs are very well aware of where and how their code sucks, and they are overly apologetic even though I understand the constraints the code was written under (some of which you described).

[1] http://www.osnews.com/images/comics/wtfm.jpg


The site is now 404'ing so I guess s/he isn't heeding their own warning?


500'ing for me. A lot of websites ending up on the HN front page aren't provisioned for the traffic it implies; I won't blame them for it.


Whoops, I think it was a bad CloudFlare setting. Should be working now, hopefully.


>porting an old Python codebase to Node

Node is how.



