If something is broken in code that I'm interfacing with, I'd like for my tests to reflect that. I've never understood the "testing little things in isolation" strategy when it precludes a "test everything all working together" strategy.
The reason is that if you can write a test that specifies your unit in terms of its communication with collaborators, then you have achieved a rational and comprehensible unit of code, and so you can be more sure that your design is sound.
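A minimal sketch of what "specifying a unit in terms of its communication with collaborators" can look like in Python, using `unittest.mock` (the `ReportService` class and its `storage` collaborator are hypothetical names invented for illustration):

```python
from unittest.mock import Mock

class ReportService:
    """Hypothetical unit under test; persistence is delegated to a collaborator."""
    def __init__(self, storage):
        self.storage = storage

    def publish(self, title, body):
        # The unit's whole contract: format the report, hand it to storage.
        document = f"{title}\n\n{body}"
        self.storage.save(title, document)
        return len(document)

def test_publish_delegates_to_storage():
    storage = Mock()
    service = ReportService(storage)
    length = service.publish("Q3", "All good.")
    # The assertions describe the communication with the collaborator,
    # not the collaborator's internals -- that conversation *is* the spec.
    storage.save.assert_called_once_with("Q3", "Q3\n\nAll good.")
    assert length == len("Q3\n\nAll good.")

test_publish_delegates_to_storage()
```

If a test like this is hard to write, that difficulty is usually telling you the unit talks to too many collaborators, or in too tangled a way.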
With test-driven design (in an ideal world), if a change to an important module can break only its own tests, that means it has a shallow and clear interface, decoupling its internals from the rest of the application.
Given that the module's code is exhaustively tested, that means there is a full specification of what it's supposed to do, given the various possible scenarios.
If the tests are well-written, that means the specification is comprehensible, which means the interfaces make sense in the domain's discourse.
Good TDD involves working hard to keep the code base clean and comprehensible. If you're only testing that everything works together, there's a chance that you're also less focused on maintaining good architecture.
Yeah, I get that. But it makes me wonder why "unit testing" became such a hot thing. I think it's just because it's simpler than integration testing. But integration testing is much more valuable.
True, it's much simpler. It also becomes much more useful once you have a huge codebase and want to refactor, add a feature, or fix a bug, because you don't need to set up and verify the whole input/output of your application — only the part that implements the logic you're interested in.
It's also simpler to use unit tests for verifying all possible inputs and outputs of a component.
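As a minimal sketch of that point: at the unit level you can enumerate a component's whole input/output space, including boundary cases, which would be impractical to drive through the full application (the `clamp` function here is a hypothetical example, not from the discussion above):

```python
def clamp(value, low, high):
    """Clamp value into the inclusive range [low, high]."""
    return max(low, min(value, high))

# Enumerate representative and boundary cases as (input, expected) pairs.
cases = [
    ((5, 0, 10), 5),    # in range: unchanged
    ((-3, 0, 10), 0),   # below range: clamped to low
    ((42, 0, 10), 10),  # above range: clamped to high
    ((0, 0, 10), 0),    # boundary: low itself
    ((10, 0, 10), 10),  # boundary: high itself
]

for args, expected in cases:
    assert clamp(*args) == expected
```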
But, as others have said, unit tests alone are necessary but not sufficient.
> But it makes me wonder why "unit testing" became such a hot thing. I think it's just because it's simpler than integration testing. But integration testing is much more valuable.
TDD focuses on a particular method of leveraging unit testing to improve code quality and avoid bloat, but it doesn't suggest that integration testing should be abandoned. Integration testing is simply outside the part of the dev process that TDD is focused on improving.
Unit testing has the benefit of telling you more specifically which bit of code is broken, which makes it more valuable when it does find an error. Integration testing will find more errors. As dragonwriter says, though, the focus of TDD is not that the tests find errors but the way they shape your development. To that end, it's not clear to me (at all) which is "more valuable".
> I've never understood the "testing little things in isolation" strategy when it precludes a "test everything all working together" strategy.
It doesn't. TDD is largely about how to use unit tests to drive incremental development, but it certainly doesn't preclude any other form of testing being part of the lifecycle. It just doesn't have anything to say about them — it presumes you have some integration and acceptance test practices, and that those are outside TDD's scope.