
This struck a nerve.

I've been on both sides of this war: the test evangelist fighting for coverage and the pragmatist shipping to beat a deadline. After 20+ years in software, the truth is painfully obvious: testing is the greatest productivity hack that everyone keeps "postponing until next sprint."

The author gets the psychology exactly right. We overestimate the initial cost and drastically undervalue the compound returns. What they call "Time Technical Debt" is the perfect description for that sinking feeling when you're working on a mature codebase with spotty test coverage.

The most insightful point is how testing fundamentally changes your design for the better. When you have to make something testable, you're forced to:

- Think about clear interfaces

- Handle edge cases explicitly

- Create clean separation of concerns

- Build proper startup/shutdown sequences

These aren't "testing best practices," they're just good engineering. Testing is simply the pressure that forces you to do it right.

My experience: if your system is hard to test, it's probably hard to reason about, hard to maintain, and hard to extend. The difficulty in testing is a symptom, not the disease.
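To make the "testability as design pressure" point concrete, here's a minimal sketch (hypothetical names): the same check written with a hidden clock dependency versus with the clock passed in. The testable version is also just the cleaner interface.

```python
import time

# Hard to test: hidden dependency on the wall clock.
def is_expired_untestable(token: dict) -> bool:
    return token["exp"] < time.time()

# Testable: the clock is an explicit parameter, so tests control time
# directly, with no patching needed.
def is_expired(token: dict, now: float) -> bool:
    return token["exp"] < now

assert is_expired({"exp": 100.0}, now=200.0)
assert not is_expired({"exp": 300.0}, now=200.0)
```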

At my last company, we built a graph of when outages occurred versus test coverage by service. The correlation was so obvious it became our most effective tool for convincing management to allocate time for testing.



While I agree testing in general is a must, I'm still not sold on unit testing as a general tool.

For stuff like core libraries or say compilers, sure, unit tests are great. But for the levels above I'm leaning towards integration tests first, and then possibly add unit tests if needed.

After all, you can't not have integration tests. No matter how perfect your Lego bricks are, you can still assemble them together the wrong way around.
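A toy illustration of "perfect bricks, wrong assembly" (hypothetical functions): each piece unit-tests fine on its own, and only a test at the assembled level pins down the correct wiring.

```python
def to_cents(dollars: float) -> int:
    return round(dollars * 100)

def add_tax(cents: int, rate: float) -> int:
    return round(cents * (1 + rate))

def checkout_total(dollars: float, rate: float) -> int:
    # Swapping the order (taxing before converting to cents) would
    # still pass both unit tests above; only an assembled-level test
    # catches the wrong wiring.
    return add_tax(to_cents(dollars), rate)

assert checkout_total(10.00, 0.08) == 1080
```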


I tend to think that the most valuable tests are on the exact level of when something gets used by multiple consumers.

So shared functions get unit tested, but if a non-shared function is only triggered a few layers up via a user click, it's better to simulate that click and assert on the result. The exception is when the input can have tons of different permutations; then a unit test may be the better fit.
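For the permutation-heavy case, a table-driven unit test is cheap to extend (hypothetical normalizer), whereas simulating a click per permutation would be far more expensive:

```python
# Hypothetical input normalizer with many input permutations.
def normalize_phone(raw: str) -> str:
    digits = "".join(ch for ch in raw if ch.isdigit())
    return digits[-10:] if len(digits) >= 10 else digits

# One table covers every permutation; add a line per new case.
cases = {
    "(555) 123-4567": "5551234567",
    "1-555-123-4567": "5551234567",
    "555.123.4567": "5551234567",
}
for raw, expected in cases.items():
    assert normalize_phone(raw) == expected
```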


Yes exactly. If you're writing a library, unit tests are likely a great investment. Your "customers" are using the things you are testing.

If you're writing an API, your customers are calling the endpoints. It's very important that they behave properly on the happy path and fail appropriately when they should (permissions/security especially).

(I also write tests in the service/DAO layer to protect developers from footguns and/or more "unexpected" behavior, but I'd argue the API layer is generally more important.)

If you're writing a React app, Playwright is very important (used properly, acting like a real user), as your customers are interacting with it via a browser.

And the larger the codebase gets, once you start having reusable components, the better tested they are, the less will go wrong in the future when an unfamiliar developer inevitably holds them wrong.


Agree - unit tests are best for pure functions and the like. If you have to do a ton of mocking and injection in order to unit test something, it's probably a sign that black-box testing would be higher value.


Integration tests are slow/expensive to run compared to unit tests and reduce your iteration speed.

Unit tests let you change code fearlessly with instant feedback.

Integration tests require basically deploying your app, and when something fails you have to debug the root cause.

If you’re doing a lot of mocking then your design is not good. And only public interfaces should have testing.


On every large codebase I’ve worked on, updating a low-level function has required more work updating the tests than updating the application using it.

Unit tests have a place, but IME are overused as a crutch to avoid writing useful bigger tests which require knowing what your app does rather than just your function.

> Integration test are slow/expensive to run compared to unit tests and reduce your iteration speed.

My unit tests might take under a second to run but they’re not the ones (IME) that fail when you’re writing code. Meanwhile, integration tests _will_ find regressions in your imperfect code when you change it. I currently use c# with containers and there’s a startup time but my integration tests still run quickly enough that I can run them 10s or hundreds of times a day very comfortably.

> If you’re doing a lot of mocking then your design is not good.

This is the “you’re holding it wrong” argument. I’ve never seen a heavily unit-tested codebase that didn’t either suffer from massive mocking or get decomposed into blocks so illogically small that the mental map of the project was like drawing in sand - completely untenable.

> And only public interfaces should have testing.

This is the smoking gun in your comment - I actually disagree and think you should invert this. Services (or whatever you call them) should be tested, and low-level functionality should be tested, but the stuff in the middle is where you spend 80% of your time and get 10% of the benefit.


> Unit tests let you change code fearlessly with instant feedback.

Sure they can add confidence in making changes. But I've seen they can also give you false confidence.

As I said, I can still assemble your perfect Lego bricks together wrong. And you can still change the public behavior while keeping your unit tests passing fine. I've seen both happen in practice.

That's why I think integration tests (or whatever you want to call them) give you more bang for your buck. They give you even greater confidence that your stuff works, and generally you can cover a lot more with far fewer tests, which improves ROI.

The tradeoff is that it can take a bit longer to run.

> If you’re doing a lot of mocking then your design is not good.

If my app needs to talk to a database, store things in object store, send some messages in a message queue and so on, I can't not mock those things away if I'm to write unit tests.
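For what it's worth, that kind of mocking can stay lightweight when the dependencies are passed in. A sketch with Python's unittest.mock (hypothetical handler and method names):

```python
from unittest import mock

# Hypothetical handler that touches a database and a message queue.
def process_order(order_id, db, queue):
    order = db.get_order(order_id)
    queue.publish("order.processed", order_id)
    return order["total"]

# The unit test stubs the external services and checks the interaction.
db = mock.Mock()
db.get_order.return_value = {"total": 42}
queue = mock.Mock()

assert process_order(7, db, queue) == 42
queue.publish.assert_called_once_with("order.processed", 7)
```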


I think in general tests should be where errors and mistakes are more likely to occur. Different code bases could be different! Hard core math libraries are different than a web app with various integrations.


IME end-to-end tests in a browser really help with services that have a lot of parts to integrate, but damn they are hard to make reliable.

One challenge is animation and timing races, supposedly Playwright can address many of those. Another is some infrastructure like GitHub Actions can be randomly resource starved, such as causing the Chrome Driver to become unresponsive. Automated retrying is one workaround, at the cost of possibly papering over rare race and timing issues.

Of course unit tests are nice and fast and narrow. But refactors could render a large portion obsolete, and they won't prove things work together as a whole.


Do you think early startup product might be an exception?

Whole features can be quickly ditched, frequently. Sometimes there's even a complete product pivot.


No excuses. I've done this like 12 times as startup CTO.

The habit is important. If you don't start, after three pivots you'll have a huge mountain of tests for a system nobody understands. Plus all the wasted time manually "testing".

Tests are so critical for the success of the business. It's fiscally irresponsible to skip.


What about side projects that you are working alone on?


I find them vital for side projects - especially if they get set aside for a week/month/year, I sometimes will lose track of assumptions I’ve made or use cases for apis that tests tend to expose.

Sure you can encode all of that as comments, but unless you reread each file when you return from a break, you can’t always trace those thoughts and see where they lead. On the other hand if you “find all references” in your ide or change some implementation so that a test breaks, past-you can save the day with that extra information about what they intended at the time.
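A tiny example of a test as a note-to-future-self (hypothetical slug helper): the assertion records an assumption that a comment alone might not surface when you come back after a break.

```python
def slug(title: str) -> str:
    return "-".join(title.lower().split())

# Documented assumption for future-me: we deliberately keep
# non-ASCII characters rather than transliterating them. If a
# later change breaks this, the test failure explains the intent.
assert slug("Café Menu") == "café-menu"
```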


I really value tests on projects I work on alone because the work is usually intermittent. I don't have the time or energy to deal with regressions, and I may not remember everything about the system by the next time I work on it.

I also find manually testing to be tedious, so I'd rather spend that time writing code that does it for me.


Well, I don't skip those (but I started my career in test, back in '95). I think it would be OK, though.

But if the objective is a professional output, test, test, test.


No, I don't think there is any exception. If you intend to maintain a piece of software for any length of time (i.e. it's not just a throwaway demo), you should write tests for it.

Over time you realize that testing truly does not slow down development as much as many people think it does. Maybe devs who just aren't used to testing find it difficult, but after a while it becomes second nature.

The best thing an early startup CTO can do is enforce testing across the board, so people don't just test when they feel like it.


>Over time you realize that testing truly does not slow down development as much as many people think it does.

Not my experience. We just built a new codebase, rewriting an older project with typescript and all the modern libraries and conveniences. We spent about 2x more time writing the tests than we did any of the API code.

Tests can be so fiddly and not exactly straight-forward. It takes a lot of time, but that isn't a reason not to do it. But don't suggest it's going to take less time, even in the long run, because it isn't - you essentially have to maintain 2 codebases now, one for the actual code, and one for the tests. Both are points of failure and both can be a time-sink.


Only if your runway is measured in days rather than weeks or months.

The payback for good testing is very fast, especially once you have set it up for the first feature.


I think an early start-up is the exception, but my boss didn't agree. We were still in "stealth mode" and the CTO wanted 100% test coverage on our nodejs-based social website from the very start. Six months in, we didn't have all that much built, because they couldn't really decide what they wanted us to build. So we built the most well-tested email sign-up form that ever existed, and a bunch of other user-account-related stuff too. Then the company completely pivoted at around the 6-month mark, and I was somehow doing PHP programming (which I hate), hacking the code of some ad server and bolting it onto a mobile app (not what we set out to build). By that point the requirement for tests had been forgotten, because the company was desperate to find any viable path forward. It dissolved about 3 months after that, and now those tests seem pretty pointless.


100% coverage is a vanity metric and a waste of time.

Focus on testing the use cases that are important to users, not on covering every single line.


Any number under 100% is also a vanity metric. Focus on why there are test holes and whether they matter.


For me personally tests have a positive ROI within hours.

Even if I was doing a one day hackathon I'd probably have some sort of test feedback loop.

I've dealt with P1 bugs that cost the company 100k/minute and still took the time to write a test for the fix because you really don't have time to get the fix wrong and not find out until it is deployed.
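The habit is simply landing the pinned regression test in the same change as the fix (hypothetical bug, not the actual incident):

```python
# Hypothetical P1: the discount amount was truncated instead of rounded.
def apply_discount(total_cents: int, percent: int) -> int:
    # Fix: round half-up on the discount amount in cents.
    return total_cents - (total_cents * percent + 50) // 100

# Regression test written alongside the fix, before deploying:
assert apply_discount(1000, 10) == 900
assert apply_discount(999, 10) == 899  # truncation would give 900
```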


Whenever I’m told there isn’t time, I remind the PM: Yes, but if it’s wrong or doesn’t work, we will have time to do it over - it will just cost more later.


>The most insightful point is how testing fundamentally changes your design for the better. When you have to make something testable.

When people say this type of thing I consider it to be kind of a code smell that they're testing at too low a level and tightly coupling their tests to their implementation.

It is true that the pain of tightly coupling your tests to your implementation can drive you to unwind some of that coupling in the code itself, but that still leaves your tests and your code tightly coupled.

I find the best bang for my buck are tests run at a high enough level that I can refactor a lot, safely, with a minimum of test changes. Technical debt that is covered by tests doesn't compound at nearly the same rate.


- Think about clear interfaces

- Handle edge cases explicitly

- Create clean separation of concerns

- Build proper startup/shutdown sequences

If you do all of these things to start with though, then what's the value proposition?


Refactor simplicity, regression reduction, reduced time to “next launch” because manual validation period is shorter.

Increased customer trust because fewer regressions get missed.


> Create clean separation of concerns

Software that manages to do this is very, very rare. In fact I can't even think of any that I've seen.

> Handle edge cases explicitly

In practice, though, this is where the bread and butter is.


how do you know that you did:

- thought about clear interfaces?

- created clean separation of concerns?

- built proper startup and shutdown sequences?

:)


Automated testing has a cost, and manual QA has another cost. Depending on the upfront cost, the cost over time (maintenance or manual QA), and the risk of refactors (which need tests), there are moments when one option is better than the other. That changes over time, so it's worth reconsidering.


I don't think they replace each other; they are complementary. They do overlap, though, which is why people think you can trade them off. A test is not creative, and a human is not an automaton.


We have been using humans for automation for a long time; factories have been a thing for quite a while now.

They have an intersection, and you might need only the benefits in that intersection, not all the benefits. Given that, it could make sense.

I have way more fun writing tests, but automating certain things has a steep maintenance cost; a checklist for humans is a better idea in that case.


Author of the original post here. Thanks for such thoughtful comments and reactions. I'm glad I "struck a nerve" for at least one person!



