

How to Use Code Coverage to Your Advantage

Code coverage is an interesting tool that has received quite a bit of bad press over the years. Ever since it became easy to measure the code coverage of your tests, the tool has had fervent supporters and equally fervent detractors. As is so often the case, the value lies in how you use it.

Many discussions on how to use code coverage focus on either the minimum percentage or which type of coverage to use.

The Percentage

It’s well established that striving for 100% code coverage is silly, if not detrimental. Some pieces of code are genuinely difficult to test, and covering the rest forces developers to waste time writing tests for trivial code that can’t fail.

Then what percentage is good? It depends. Read on and I’ll give you my opinion.

The Type

Branch, line, or statement coverage? If you use code coverage at all, this is a useful discussion. Line coverage is how I was introduced to code coverage, and I believe it was the “original” approach. But the testing world soon realized that a line of code can be covered without all of its statements being executed, for example when several statements share a single line.

So it seems more worthwhile to look at the percentage of covered statements. But then you could have a high percentage of statement coverage and still miss important pieces of your code, because certain branches in the logical flow were never executed.
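To make the difference concrete, here’s a small illustrative sketch (the function and tests are hypothetical, not taken from any particular project):

```python
# A single test can reach 100% statement coverage of this function
# while still missing a branch.

def apply_discount(price, is_member):
    discount = 0
    if is_member:
        discount = 10
    return price - discount

# This one call executes every statement in the function...
assert apply_discount(100, True) == 90

# ...yet the implicit "else" path (is_member=False) is never taken.
# Branch coverage flags that gap; statement coverage does not.
# A second test closes it:
assert apply_discount(100, False) == 100
```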

So we move to branch coverage. A high percentage of branch coverage implies a high percentage of executed statements, since every statement lives on some branch.

So there we have it: use branch coverage. But there’s still something missing from the discussion.

The Discussion We’re Not Having

Very often, someone higher in the hierarchy tells developers what the minimum amount of code coverage should be. For example, a manager or tech lead might tell the team that they should have a minimum of 80% branch coverage.

The person deciding this might be aware of the fact that 100% is unnecessary, so they pull a lower number out of their hat.

Developers get working and start writing tests. But to achieve this percentage, they soon realize they’ll get there faster if they test the trivial pieces of code. Simple object properties, small functions with a minimal amount of logic, and so on.

In doing so, they learn two things. First, the added value of automated tests seems small, because they’re not writing tests for the complex pieces of code, which in turn gives them no incentive to keep the architecture of those complex pieces clean. Second, code coverage looks like a waste of time. So when they later have a say in whether to use code coverage, they decide against it.

This is more or less the current state of affairs as I experience it. Code coverage had a brief moment of popularity, but few teams I encounter now use it to their advantage. And this is chiefly because arbitrary percentages were forced upon teams in the hope of improving the stability of their software.

Yet we can imagine a different way of working with code coverage.

Using Code Coverage Correctly

I’ve successfully used code coverage with teams who owned the tool.

First of all, it was our choice to apply the tool to our tests. In fact, we had several microservices ranging from tiny to not-really-a-microservice-anymore. We chose where to use code coverage because for some very small services that never changed, it was not worth our limited time.

Second, we chose the minimum amount of coverage. That minimum was basically the current amount, with some wiggle room. For example, if an application had a coverage percentage of 15%, we set the minimum to 10%. This could then be increased as we wrote more tests. The important thing was that the coverage wasn’t allowed to drop dramatically. And once we reached 75-85%, adding more coverage would cost too much for too little benefit.
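One way to encode that “current amount minus wiggle room” floor, assuming a Python project using coverage.py (other coverage tools have similar settings), is a fail-under threshold in the project’s .coveragerc:

```ini
# .coveragerc (illustrative; coverage.py configuration syntax)
[run]
# Measure branch coverage, not just line coverage.
branch = True

[report]
# The report fails if total coverage drops below this floor.
# Start near the currently measured number (e.g. 15% measured -> 10% floor)
# and ratchet it up as more tests are written.
fail_under = 10
```

Wiring this into CI makes the floor a team-owned guardrail against dramatic drops, rather than a target imposed from above.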

These pieces of code came from a startup phase where time-to-market was more important than writing tests (deliberate and prudent technical debt). But now that it was time to improve the internal and external quality of our software, we had a way forward. The trend was important (increasing coverage), not the exact end goal (some specific percentage).

Improving Stability

If we’re discussing code coverage, we need to discuss automated tests and why we use them. Is it for a clean architectural design? Is it to avoid bugs? Is it to allow safer refactoring?

I don’t think code coverage will help with design, but it can help with the stability of our software. Focusing on a specific number, however, means focusing on the wrong thing.

Code coverage can help us identify areas of our application that are untested. It can help us spot a dangerous downward trend. But it won’t make a big difference if the team doesn’t own it as a tool and use it to their advantage. Forcing a team to hit a certain amount of code coverage won’t help them write the right tests, and so it won’t help them write stable and maintainable software.
