Best Practices for Unit Testing

Every bug, its unit test

In my opinion, this is one of the most important best practices for maintaining large software frameworks or extensive libraries over the long term. Even though this post is mainly about preventing bugs in the first place through targeted testing, you won’t be able to prevent all of them. Even with intensive automated testing, you will never reach zero bugs: you will have significantly fewer errors, but a few will always slip through. No person and no code is perfect.

The crucial point is how you deal with these errors. Every time a bug appears, it’s a warning signal: at some point, a test case was missed. Even if the first impulse is to immediately jump into the code and fix the bug, it’s worth pausing for a moment.

Ask yourself:

  • Why was this test missing?
  • How can I expand my test coverage so that this error is automatically caught in the future?

💡Mnemonic:

Every bug gets its own unit test. When you discover a bug, first write a unit test that fails. Only then fix the bug, and keep the unit test forever to ensure safety during refactoring.

I can recommend the following procedure:

  1. Reproduce bug: First, write a test that makes the error clearly visible and isolated. It must fail reproducibly; otherwise you haven’t really caught the bug.
  2. Fix bug: Adjust the code so that the test passes.
  3. Keep test: The test remains in the code forever. From now on, it will run with every build and prevent this error from returning unnoticed.
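The three steps above can be sketched in pytest. The function and bug here are hypothetical, purely for illustration: imagine an average-price calculation that used to crash on an empty order.

```python
# Hypothetical example: average_item_price used to raise ZeroDivisionError
# when an order contained no items (the bug).
def average_item_price(total, item_count):
    # The fix: an empty order has an average price of 0.0 instead of crashing.
    if item_count == 0:
        return 0.0
    return total / item_count

# Step 1: this test was written first and failed against the buggy version.
# Steps 2 and 3: the code was then fixed, and the test stays in the suite
# forever, running with every build.
def test_average_item_price_handles_empty_order():
    assert average_item_price(0.0, 0) == 0.0

def test_average_item_price_regular_order():
    assert average_item_price(30.0, 3) == 10.0
```

Run the suite and both tests pass; delete the `item_count == 0` guard and the regression test immediately flags the reactivated bug.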

This approach has two advantages:

  • Errors do not return, because the test detects them immediately.
  • You document the problem. The test itself is the best proof that you have understood and solved it.

I have experienced it many times: years later a problem reappears because someone changed the code without knowing that this reactivated an old bug. With a unit test, that would never have happened.

In short: a fixed bug without a test is not a fix, but only a postponement.

Especially when you work a lot with legacy code or in large frameworks, this approach can make your work considerably easier.

Test behavior, not implementation

A common mistake: changing the code only to make it easier to test—for example, by adding extra return values or helper variables that provide no added value to the actual function.

That is an anti-pattern.

A test should check whether a function reacts correctly and not how it is structured internally.
And under no circumstances should the test force the code to “return” something that it does not actually need.


Anti-pattern: Code is twisted – just for the test

def cancellation_allowed_bad_practice(status):
    if status == "delivered":
        return False, "already delivered"  # second return value exists only for the test
    elif status == "open":
        return True, None
    elif status == "paid":
        return True, None
    return False, "unknown status"

➡️ The function here returns additional information – not because the system needs it, but only because the test would like to have it.

This is what the test then looks like (for maximum readability, even for beginners, the use of @pytest.mark.parametrize was deliberately avoided):

def test_cancellation_permission_with_justification():
    allowed, reason = cancellation_allowed_bad_practice("delivered")
    assert allowed is False
    assert reason == "already delivered"
    allowed, reason = cancellation_allowed_bad_practice("open")
    assert allowed is True
    assert reason is None
    allowed, reason = cancellation_allowed_bad_practice("paid")
    assert allowed is True
    assert reason is None

What’s wrong with it:

  • The test checks internal justifications. What should always be tested is the outward behavior.
  • The function becomes unnecessarily complex, even though a simple True/False return value would be sufficient.
  • The second return value provides no benefit in the real code but was introduced only for the test.

Better: function does exactly what it is supposed to do

def cancellation_allowed(status):
    return status in ["open", "paid"]

And the test for it?

def test_cancellation_permission():
    assert cancellation_allowed("delivered") is False
    assert cancellation_allowed("open") is True
    assert cancellation_allowed("paid") is True

➡️ Clear, simple, correct.
The test asks: “Is cancellation allowed?” – not: “Why exactly not?”
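As a side note for readers past the beginner stage: the same behavior test can be written more compactly with @pytest.mark.parametrize, which the example above deliberately avoided. This is only a sketch of the alternative form; the function is repeated here so the snippet is self-contained.

```python
import pytest

def cancellation_allowed(status):
    # Same behavior-focused function as in the example above.
    return status in ["open", "paid"]

# One parametrized test replaces several near-identical assertions;
# each (status, expected) pair runs as its own test case.
@pytest.mark.parametrize("status, expected", [
    ("delivered", False),
    ("open", True),
    ("paid", True),
    ("unknown", False),
])
def test_cancellation_permission(status, expected):
    assert cancellation_allowed(status) is expected
```

A failing case is reported individually (e.g. `test_cancellation_permission[delivered-False]`), so readability in failure reports is preserved even though the assertions are condensed.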


💡Mnemonic:

If you have to restructure the code just so the test can “understand” it, you’re not testing the behavior—you’re misusing the implementation.

Tests should not dictate to the code what it has to return, but should only check whether the behavior is correct. But this should not be confused with the idea that your code hardly changes once you start with TDD. On the contrary—it will change significantly over time. It will become more modular, more readable, and much more maintainable. Artificial helper variables just for testing, however, are not part of that.

Keep tests small, independent, and clear

A good unit test has three characteristics:

  • Small: It tests exactly one thing.
  • Independent: It does not depend on the state of other tests.
  • Clear: When it fails, you immediately know why.

💡Mnemonic:

A good unit test is compact, self-contained, and tells you right away whether everything is correct.
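The three characteristics can be made concrete with a small sketch. The `Counter` class here is hypothetical, chosen only to keep the example minimal:

```python
class Counter:
    # Minimal hypothetical class under test.
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1

# Small: each test checks exactly one thing.
# Independent: each test creates its own Counter instead of sharing state
# with other tests.
# Clear: the name plus a single assertion make any failure self-explanatory.
def test_counter_starts_at_zero():
    counter = Counter()
    assert counter.value == 0

def test_counter_increment_adds_one():
    counter = Counter()
    counter.increment()
    assert counter.value == 1
```

If the second test fails, you know immediately that incrementing is broken, and nothing else; no other test's leftover state can be the cause.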

Use meaningful test names

A test name is often the first thing you or a team member see when a test fails.
It is your first—and sometimes only—documentation of what this test is actually checking.

Names like TestCase17 or FB_Test_03 are nothing more than riddles. They force you to open the code just to find out what it’s even about. That wastes time unnecessarily and makes debugging harder.

Better: The test name tells you at a glance:

  • Which functional component is being tested
  • What it is supposed to do
  • Under which condition this should apply

A possible and often practical naming scheme in TwinCAT/TcUnit is:
FB_<Component>_<ExpectedBehavior>_When<Condition>

But that is only one variant. Depending on the team and project, other patterns may also work well—the key is that the name is clear, unambiguous, and self-explanatory.


❌ Bad names (uninformative)

  • TestCase17
  • FB_Test_03
  • CheckStatus
  • Run1

✅ Good names (clear and descriptive)

  • FB_ConveyorControl_StartsMotor_WhenStartCommandIsTrue
  • FB_SafetyDoor_Locks_WhenMachineIsRunning
  • FB_TemperatureControl_ShutsDownHeating_AboveMaxTemperature
  • FB_RobotAxis_MovesToHomePosition_WhenResetCommandIssued
  • FB_EventManager_LogsError_WhenInvalidEventReceived

💡Mnemonic:

A good test name is like a good commit message: you immediately understand what it’s about—without any further comment.

Better to start than to wait

💡Mnemonic:

Better to start with automated unit tests today and trigger the pipeline manually than to wait months for the perfect automation.

