Have you ever worked on a team that was afraid of the software they were building?
Unfortunately, I have.
We had no-deployment Fridays, no-deployment holidays, no-deployment afternoons, and no-deployment vacation seasons. If more than one person from the team was unavailable during a deployment, we postponed it.
Usually, we started deployments in the morning so that, in case of problems, we had the entire day to fix them. It was pretty common to spend that whole day fixing issues after a deployment.
Also, we had an error-watcher role on the team. Every week, a different person was responsible for checking whether everything worked.
In a different company, I worked on a team that was comfortable deploying changes on a Friday, 15 minutes before the end of the workday.
We didn’t have a designated person checking whether the code worked. Of course, we still had bugs, but those were truly extraordinary events.
We held a root-cause analysis session after every bug, and we fixed the underlying problems.
We couldn’t have such sessions in the team afraid of the software. We would spend all our time either fixing bugs or discussing their causes. Occasionally, the root-cause discussions would get interrupted by a new bug.
What was the difference between those teams?
You won’t like the answer.
The fearful team focused on producing more and more code. Most of the code didn’t work well, and every change broke something added earlier, but it didn’t matter. We were getting slowed down by bugs. At some point, we didn’t have time for anything besides debugging and slow code production. You could forget all the improvement ideas.
We had unit tests, but we claimed they didn’t work and slowed us down. It was true because we wrote those tests incorrectly. We added the tests after the implementation, so the developer had never seen them fail. Many of those tests didn’t verify relevant parts of the code. Nobody knew about it because we added them to the already finished code.
We had to fiddle with the tests to make them fit the production code. Because of that, most tests verified single methods one by one, mocking all of their dependencies.
Such tests are useless!
We did everything we could to create tests that would slow us down and then complained about being slowed by the tests.
You can’t refactor anything if every change breaks many tests because you changed the API of some internal method. You’ll never detect problems caused by interactions between methods if you mock all dependencies in every test.
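Here's a minimal sketch of that trap, using a hypothetical `PriceCalculator` class (my example, not the original team's code): the test pins an internal method's name and call count with a mock, so even a harmless refactoring breaks it.

```python
from unittest.mock import patch

class PriceCalculator:
    """Hypothetical production code; the behavior we care about is total()."""

    def _apply_tax(self, amount):
        # Internal helper -- an implementation detail, not public behavior.
        return amount * 1.23

    def total(self, amounts):
        return sum(self._apply_tax(a) for a in amounts)

def test_total_calls_apply_tax():
    # A brittle test: it pins the internal helper's name and call count.
    calc = PriceCalculator()
    with patch.object(PriceCalculator, "_apply_tax", return_value=10) as tax:
        assert calc.total([1, 2]) == 20
        assert tax.call_count == 2
    # Rename or inline _apply_tax and this test fails,
    # even though total() still returns correct results.

test_total_calls_apply_tax()
```

The test passes, but it says nothing about whether `total()` computes correct prices; it only checks that a particular private method was called twice.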
The productive team worked differently
In the other team, we wrote the tests before the production code most of the time. We weren't following TDD to the letter, but it was close enough.
Because we started with a failing test, we knew exactly what observable change would prove the code worked correctly. Then we modified the code to cause that change. That was enough to get through the red-green stages of the red-green-refactor workflow.
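As a sketch, the red-green loop for a hypothetical `slugify` function (my example, not the team's code) looks like this:

```python
# Red: write the test first and run it. It fails because slugify doesn't
# exist yet, so we know exactly what change we need to observe.
def test_slugify_joins_lowercased_words_with_dashes():
    assert slugify("Hello World") == "hello-world"

# Green: add just enough production code to make the test pass.
def slugify(text):
    return "-".join(text.lower().split())

test_slugify_joins_lowercased_words_with_dashes()

# Refactor: with the test green, we can restructure slugify safely,
# rerunning the test after every change.
```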
But we did even more!
In this team, we understood the true meaning of the expression “unit test.” Unit tests don’t mean testing every unit of code but rather every unit of behavior. We didn’t mock every dependency in every method. Our tests didn’t prevent us from refactoring the code. Therefore, we could confidently go through the entire red-green-refactor process.
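For illustration, here's a behavior-level test of a hypothetical shopping cart (the names are mine): it exercises the public API with a real collaborator instead of a mock, so the internals can be refactored without touching the test.

```python
class TaxCalculator:
    """Real collaborator -- no mock needed."""

    def total(self, amounts):
        return round(sum(a * 1.23 for a in amounts), 2)

class Cart:
    def __init__(self, calculator):
        self.calculator = calculator
        self.items = []

    def add(self, price):
        self.items.append(price)

    def checkout(self):
        return self.calculator.total(self.items)

def test_checkout_returns_taxed_total():
    # One unit of behavior: "checkout returns the taxed total."
    # The test touches only the public API, so we can refactor the
    # internals freely while it keeps guarding the behavior.
    cart = Cart(TaxCalculator())
    cart.add(100)
    cart.add(50)
    assert cart.checkout() == 184.5

test_checkout_returns_taxed_total()
```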
Properly written tests are a treasure, but they aren't enough either.
We also need documentation
Sure, you can use the tests as a form of documentation. Yet I prefer a class diagram that shows the interactions between a large part of the system at a glance. Figuring out those interactions from the code and tests would be possible, but it would take significantly more time.
Programmers complain that the documentation gets outdated quickly.
It’s a lie.
Don’t blame the documentation. Blame the programmers who made code changes and didn’t update the documentation.
The fast and productive team updated the documentation as often as possible, preferably while working on the related code change. If that wasn't possible or convenient, we made the documentation changes at the end of the month.
At worst, the documentation was outdated by a month. People usually remember what was changed during the last four weeks, so it wasn’t a big deal.
Automation improves everything
The last difference between those two teams was automation. The fearful programmers did everything manually, and slightly differently every time.
The productive team documented its procedures, and we followed them step by step every time we had to make a manual change. If we used a particular procedure often, we created a script automating the steps.
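A sketch of that progression from written procedure to script (the steps below are placeholders, not the team's real commands):

```python
"""Turn a written step-by-step procedure into a script.

The steps are placeholders; substitute the commands from your own
documented procedure, e.g. ["git", "pull", "--ff-only"].
"""
import subprocess
import sys

STEPS = [
    # Each entry mirrors one step of the written procedure, in order.
    [sys.executable, "-c", "print('step 1: update the working copy')"],
    [sys.executable, "-c", "print('step 2: run the test suite')"],
]

def run_procedure(steps):
    """Run every step in order; stop at the first failure."""
    for step in steps:
        print("Running:", " ".join(step))
        if subprocess.run(step).returncode != 0:
            sys.exit(f"Step failed: {' '.join(step)}")

run_procedure(STEPS)
```

The script does exactly what the documented procedure says, in the same order, every time; that consistency is the whole point.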
The biggest trap in software engineering
I feel sorry for all teams that claim they don't have time to write tests or documentation, but they'll fix the problems someday. They won't. It'll only get worse and worse. At some point, programmers will quit, and the next team will end up in an even worse situation. The new team will work with untested code, no documentation, and no "tribal knowledge."
Thousands of senior software engineers have never seen a properly developed project. They think the only way to make the software work is through extensive debugging, working overtime, and lots of manual testing.
That should terrify everyone who uses software or pays developers.
Especially since writing code properly doesn’t require any special skills or certification. Everyone can do it.
Everyone already does all of the required steps. But in the wrong order!
I assume you write tests, even if you do it after the code change. You should write them before you start changing the code. All day. Every day.
It’s all you need.
That one practice is sufficient to get good quality code. It also frees enough time to write documentation and automate the processes.