Alright, so today I’m gonna spill the beans on my adventures with ‘coverage 3’. Buckle up, it’s gonna be a bumpy ride!

First off, I heard about this ‘coverage’ thing, right? Basically, it’s all about figuring out how much of your code is actually being tested. Sounds kinda boring, but it’s super important if you wanna make sure your app doesn’t explode in someone’s face. So, I decided to dive in. Started with the basics – installing the ‘coverage’ package. Just a simple `pip install coverage`. Easy peasy.
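If you want to double-check the install actually took, the CLI has a version flag (just a sanity check, nothing fancy):

```bash
# Install coverage.py into whatever environment your project uses
pip install coverage

# Quick sanity check that the CLI landed on your PATH
coverage --version
```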
Then, I had to figure out how to actually use the thing. I had this little Python project lying around, nothing fancy, just a few functions and classes. I ran my tests like usual (using `pytest`, because who doesn’t?), but this time I prepended the command `coverage run`. One gotcha: `coverage run` expects a Python script, so to run pytest you hand it the `-m` flag. So it looked like `coverage run -m pytest`. Bam! It ran my tests like normal, but behind the scenes, ‘coverage’ was keeping track of which lines of code were being executed.
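In other words (everything here is just the defaults, nothing project-specific):

```bash
# Run the whole test suite under coverage; -m executes pytest as a module
coverage run -m pytest

# The collected data lands in a .coverage file in the current directory
```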
After the tests finished, I was like, “Okay, cool, now what?”. That’s where the magic happens. I ran `coverage report`. This command spits out a nice little table showing you each file in your project, how many statements it has, how many your tests missed, and the percentage covered. It was actually pretty shocking. Some files were like 100% covered (yay!), but others were hovering around 50% (uh oh!). I realized I was missing a whole bunch of test cases.
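To give you an idea of the shape (the file names and numbers here are made up, but the columns are what coverage prints by default):

```text
$ coverage report
Name                Stmts   Miss  Cover
---------------------------------------
myproject/app.py       80     40    50%
myproject/utils.py     25      0   100%
---------------------------------------
TOTAL                 105     40    62%
```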
The real game-changer was the `coverage html` command. This generates a whole bunch of HTML files that you can open in your browser. It shows you your source code, and it highlights the lines that weren’t executed by your tests in red. Red is bad, obviously. This made it super easy to see exactly which parts of my code needed more love.
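For reference (the `open` command is macOS; swap in `xdg-open` on Linux, or just open the file by hand):

```bash
# Generate the HTML report - it goes into an htmlcov/ directory by default
coverage html

# Then point a browser at the index page
open htmlcov/index.html
```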
So, I spent the next few hours writing more tests, focusing on the red lines. Ran `coverage run -m pytest` again, then `coverage report`, and `coverage html`. Rinse and repeat. Slowly but surely, the red lines started disappearing, and the overall coverage percentage went up. Felt like playing a game, trying to get a high score!
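To make that concrete, here’s the flavor of thing I kept finding – a hypothetical function whose error branch showed up red because only the happy path had a test (module and names are invented for illustration):

```python
# mymath.py - hypothetical module with an uncovered error branch
def safe_divide(a, b):
    if b == 0:
        raise ValueError("cannot divide by zero")  # this line was red
    return a / b


# test_mymath.py - the original test, plus the one that covers the red line
import pytest
from mymath import safe_divide

def test_happy_path():
    assert safe_divide(10, 2) == 5

def test_divide_by_zero():
    with pytest.raises(ValueError):
        safe_divide(10, 0)
```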

One thing that tripped me up a bit was dealing with code that’s intentionally not executed in tests. Like, maybe you have some error handling code that only gets triggered under very specific circumstances. You don’t necessarily want to write a test to trigger that error every single time. That’s where `# pragma: no cover` comes in handy. You can add this comment to a line of code, and ‘coverage’ will ignore it. Just be careful not to abuse it!
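A couple of typical spots where it earns its keep (illustrative snippets, not from a real project). One useful detail: put the pragma on a line that starts a block, like an `if` or `def`, and coverage excludes the whole block under it:

```python
def load_settings(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError:  # pragma: no cover - hard to trigger reliably in tests
        return None

if __name__ == "__main__":  # pragma: no cover - excludes this whole block
    print(load_settings("settings.ini"))
```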
Another gotcha was dealing with configuration files. ‘Coverage’ uses a `.coveragerc` file to configure its behavior. You can use it to exclude certain files or directories from coverage reporting, or to change other settings. I didn’t really need to mess with it too much, but it’s good to know it’s there.
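For the curious, here’s a minimal sketch of one (the `omit` patterns are just examples; `branch` and `show_missing` are real options from coverage’s docs):

```ini
# .coveragerc - minimal example configuration
[run]
# Also measure which branches were taken, not just which lines
branch = True
# Skip the tests themselves and any generated code
omit =
    */tests/*
    */migrations/*

[report]
# List the missing line numbers right in the coverage report table
show_missing = True
```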
In the end, I managed to get my project up to a pretty respectable coverage level. I’m not aiming for 100% (that’s often unrealistic and not necessarily the best use of time), but I feel a lot more confident that my code is actually being tested properly. Plus, I learned a ton about my own code in the process. Definitely worth the effort!
Here’s a quick recap of the commands I used:

- `pip install coverage`
- `coverage run -m pytest`
- `coverage report`
- `coverage html`

And remember, `# pragma: no cover` is your friend (but use it wisely!).

That’s all folks! Hope this helps someone else on their ‘coverage’ journey.