What matters is the development process - local build & test should be fast.
Otherwise, with CI/CD, it's a continually moving release train: changes get pushed, built, tested, and deployed non-stop and automatically, without human intervention. Once you remove humans from the process, and you have quality guard rails built into it, it doesn't matter whether your release process for a single change takes 1 minute, 1 hour, or 1 day. [1]
Even if it takes a day to release commit A, that's OK, because 10 minutes later commit B has been released (since it was pushed 10 minutes after commit A).
I've seen pipelines that take 2 weeks to complete because they deploy to regions all over the world - the first region deploys within an hour, and the next two weeks are spent serially (and automatically) rolling out to the remaining regions at a measured pace.
If any deployment fails (either directly, or indirectly as measured by metrics), it's rolled back and the pipeline is stopped until the issue is fixed.
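Roughly, the shape of that kind of pipeline in sketch form (the regions, bake time, and the deploy/rollback/health-check helpers are all made up for illustration):

    import time

    REGIONS = ["us-east-1", "eu-west-1", "ap-southeast-2"]   # hypothetical rollout order
    BAKE_SECS = 4 * 3600                                     # let metrics accumulate per region

    def deploy(region, build): ...        # stand-ins for the real deployment tooling
    def rollback(region, build): ...
    def healthy(region): return True      # e.g. error rate / latency within thresholds

    def release(build):
        for region in REGIONS:
            deploy(region, build)
            time.sleep(BAKE_SECS)                    # measured pace, one region at a time
            if not healthy(region):                  # direct failure or bad metrics
                rollback(region, build)
                raise RuntimeError(f"rollout stopped at {region} - fix before resuming")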
[1] Yes, even for fixing production issues. You should have a fast rollback process for fixing bad pushes and not rely on pushing new patches.
Agreed! Everything the CI system does should be runnable locally by the developer before the change gets pushed.
It's a belt & suspenders approach - when you push a change, you want to have already tested it to a high degree of confidence because the feedback loop from CI back to the developer is too slow.
Effort spent moving all testing to the left supports faster iteration through shorter feedback loops. Stubs/mocks, HALs, etc. are all good investments.
Figure out how to easily clone the CI tasks and run them locally, then build tools so developers can do that easily for every change :)
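Even something this simple goes a long way (task names and commands here are placeholders - ideally they'd be generated from the same config the CI system reads, not duplicated by hand):

    #!/usr/bin/env python3
    # ci_local.py - run the same checks CI will run, straight from a dev checkout
    import subprocess, sys

    TASKS = {
        "lint":  ["ruff", "check", "."],
        "unit":  ["pytest", "tests/unit", "-q"],
        "build": ["make", "build"],
    }

    def main():
        for name, cmd in TASKS.items():
            print(f"==> {name}: {' '.join(cmd)}")
            if subprocess.run(cmd).returncode != 0:
                sys.exit(f"{name} failed - fix it before pushing")

    if __name__ == "__main__":
        main()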
Far too many people don't get this. I've been pulled into projects that had a terrible pipeline and devs who were pushing commits and wasting so much time waiting for feedback on very simple problems. This kills rapid development.
It took some redesign, but I was finally able to demonstrate how much could be done locally before pushing. Some people just need their eyes opened.
CI is fundamentally about feedback loops. The timing of the feedback is second only to the reliability of the feedback. Unfortunately a lot of people don’t achieve either. The worst use the consequences as a way to complain about CI.
Yes, if you don’t know what something is for, you’re not going to enjoy using it.
The system I described is actually a cloud system, and we had both stubs and mocks of all our dependencies (which is easy, because they were other cloud systems and we could easily stand up a fake service with the same API when doing integration tests, or switch to local data when doing unit tests).
We also performed testing against live dependencies but with test accounts to ensure that our stubs/mocks were accurate and up-to-date, and captured realistic interactions (and failures).
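In code terms the shape was roughly this (names invented here - the real clients wrapped internal services):

    class OrderService:                      # thin client for the real cloud dependency
        def get_order(self, order_id): ...   # the real version calls the live API

    class FakeOrderService(OrderService):    # same API, backed by local test data; use it
        def __init__(self, orders):          # directly in unit tests, or stand it up behind
            self._orders = orders            # HTTP as a fake service for integration tests
        def get_order(self, order_id):
            return self._orders[order_id]

    # unit-test usage:
    svc = FakeOrderService({"42": {"id": "42", "total": 9.99}})
    assert svc.get_order("42")["total"] == 9.99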
I've done the same with hardware systems, again using stubs/mocks of HW dependencies for unit tests and then using actual HW for integration testing.
The time invested in stubs/mocks quickly pays dividends in both increased development speed and test coverage, especially since you can inject faults and failures (bad data, timeouts, auth failures, corruption, etc.).
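The fault injection can be as simple as a wrapper around the stub (illustrative only - ours was built into the stubs themselves):

    class FaultInjectingClient:
        """Wraps any client and raises a chosen error on a chosen method."""
        def __init__(self, wrapped, fail_on, error=None):
            self._wrapped = wrapped
            self._fail_on = fail_on                  # e.g. "get_order"
            self._error = error or TimeoutError("injected timeout")

        def __getattr__(self, name):
            attr = getattr(self._wrapped, name)
            if name != self._fail_on or not callable(attr):
                return attr
            def fail(*args, **kwargs):
                raise self._error                    # timeouts, auth failures, bad data, ...
            return fail

    # in a test: wrap the stub, then assert the caller handles the failure correctly
    # flaky = FaultInjectingClient(stub_service, fail_on="get_order")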
I also read about tests that compare the mocks and stubs with the original implementations, but it sounded like overkill to write tests for your tests.
Yes, at some point you get diminishing ROI :) We were OK with having the pipeline fail due to a change in the live API that our mocks didn't emulate; then we'd go update our mocks (and fix the code).
It happened infrequently enough that it wasn't worth the effort to automate the testing of the mocks against the live APIs.
Though honestly, for internal APIs owned by sister teams, this was usually due to a bug/non-backward-compatible change on their side, and we'd work with them to fix their APIs.