The coverage report is a lie

Years of working without code coverage tools had me convinced they produced beautiful code. When the opportunity finally came I was overjoyed, but let’s just say plot twists were about to unfold *very* soon after I got to work 😅

What is a coverage report anyway?

A test coverage report is a collection of statistics about our production code, generated each time the test suite runs.
The stats show how many lines of production code the tests execute. The higher the number, the more of the source code is protected by tests. Uncovered lines carry a higher risk when changed, which makes them good candidates to test next.
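Under the hood, a line-coverage tool simply records which lines fire while the tests run. Here is a minimal sketch of that idea using Python's `sys.settrace` hook; the `grade` function is a made-up example, not from any real codebase:

```python
import sys

def grade(score):
    if score >= 50:
        return "pass"
    return "fail"

executed = set()

def tracer(frame, event, arg):
    # Record each line executed inside grade()
    if event == "line" and frame.f_code.co_name == "grade":
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
grade(80)            # exercises only the "pass" branch
sys.settrace(None)

# grade() has 3 executable lines; the call above ran only 2 of them
coverage_percent = len(executed) / 3 * 100
print(f"line coverage: {coverage_percent:.0f}%")  # ~67%
```

Real tools such as coverage.py do exactly this kind of bookkeeping, just far more thoroughly, and then map the executed lines back onto the source to produce the report.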

Sounds great! How could anyone misuse it?

Glad you asked!

The easiest trap to fall into, and one I have first-hand experience with, is gaming the test-driven flow to chase a high coverage percentage. When the focus shifts away from modelling and code design, it’s easy to end up with full coverage and junk code.
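To make the trap concrete, here is a hedged, hypothetical sketch: the test below executes every line of `apply_discount`, so line coverage reads 100%, yet the suite stays green even though the function is buggy.

```python
def apply_discount(price, percent):
    # Bug: adds the discount instead of subtracting it
    return price + price * percent / 100

def test_apply_discount():
    apply_discount(100, 20)  # runs every line, asserts nothing

test_apply_discount()  # "passes": coverage is full, the bug is undetected
```

The report will happily show 100% for this module, which is exactly why the percentage alone tells us nothing about whether the tests check anything.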

Another concern is that some managers treat the coverage percentage as proportional to the trust they can place in production code. Over-reliance on the number creates a false sense of security, and misinterpreted coverage reports can erode trust in the engineering team.

Oh no! How do I fix it?

Focus on test quality rather than quantity. Just because we have tests does not mean they are the right tests. A good unit test tells us when a change in the codebase breaks a requirement.
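A requirement-focused test looks more like this hypothetical sketch: it encodes the expected behaviour ("a 20% discount reduces 100 to 80"), so any change that breaks that requirement turns the suite red, regardless of what the coverage number says.

```python
def apply_discount(price, percent):
    return price - price * percent / 100

def test_twenty_percent_off():
    # Encodes the requirement itself, not just "the code ran"
    assert apply_discount(100, 20) == 80

test_twenty_percent_off()  # passes only while the requirement holds
```

The coverage of this test is identical to an assertion-free one, but its value to the team is entirely different.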

Analysed correctly, reports can guide where to spend the next coverage effort, as long as we keep their limitations in mind and supplement them with other metrics.

Multidimensional quality metrics are a fantastic way to break the dependency on coverage reports. The more sources of metrics we have, the better informed our decisions are.

Additional metrics include cyclomatic complexity, the number of defects relative to the number of tests, and code-visualisation tools. The good news is that they come as automated tools and services that are relatively easy to set up and integrate into the normal workflow.
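As a taste of one such metric, here is a rough cyclomatic-complexity counter built on Python's `ast` module. It is a simplified approximation of McCabe's metric for illustration only; real tools such as radon compute this properly.

```python
import ast

def cyclomatic_complexity(source):
    """Rough cyclomatic complexity: 1 + the number of branch points.

    A simplified approximation of McCabe's metric, not a full
    implementation.
    """
    branch_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                    ast.BoolOp, ast.IfExp)
    count = 1
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, branch_nodes):
            count += 1
    return count

straight = "def f(x):\n    return x + 1"
branchy = (
    "def f(x):\n"
    "    if x > 0:\n"
    "        return 1\n"
    "    return 0\n"
)
print(cyclomatic_complexity(straight))  # 1: no decision points
print(cyclomatic_complexity(branchy))   # 2: one decision point
```

High-complexity functions are strong candidates for extra tests even when the coverage report already marks their lines as covered.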
