Here are some thoughts I wanted to put out into the world on why we currently fail to implement unit testing in practice, and why we should learn how to implement it right before actually writing a single test case.
Unit Testing – Theory vs. Practice
“We shipped the product, but did we ever write unit tests?”
“Unit Tests? You’re right, we forgot them, but now there’s no time any more”
“Our Unit Tests, well, we usually don’t talk about this”
“We could be faster if we just didn’t have to write unit tests any more”
“Start writing tests, we have to beat the other team in code coverage”
“Why do we still not have 100% test coverage?”
These are all statements I have heard when it comes to unit testing.
In theory, unit testing is awesome. It can help developers (i) understand the nature of a problem by thinking about it from a different perspective, and thereby (ii) spend less time writing code that has to be deleted afterwards because it does not meet the client’s requirements. Unit testing enables a developer to (iii) verify that the created implementation satisfies the requirements, and thereby (iv) spend less time hunting for defects, and (v) more time writing code.
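To make point (iii) concrete, here is a minimal sketch of what such a requirement-verifying test could look like. The `DiscountCalculator` class and the 10%-discount rule are hypothetical examples invented for illustration; the test uses JUnit 5.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical production code: the client requires a 10% discount
// on all orders of 100 or more.
class DiscountCalculator {
    double priceAfterDiscount(double orderTotal) {
        return orderTotal >= 100 ? orderTotal * 0.9 : orderTotal;
    }
}

// The unit test encodes the requirement, so the developer can verify
// the implementation against it at any time, without manual checking.
class DiscountCalculatorTest {

    private final DiscountCalculator calculator = new DiscountCalculator();

    @Test
    void ordersBelowThresholdAreNotDiscounted() {
        assertEquals(99.0, calculator.priceAfterDiscount(99.0), 0.001);
    }

    @Test
    void ordersAtOrAboveThresholdGetTenPercentOff() {
        assertEquals(90.0, calculator.priceAfterDiscount(100.0), 0.001);
    }
}
```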
In practice, unit tests often do not exist at all; if they are present, they are usually outdated, incomplete, or do not even compile.
How to fail at implementing Unit Testing in an organization
In my opinion, this divergence between theory and practice exists because unit testing rests on a few assumptions. In practice, these assumptions are usually misunderstood and implemented incorrectly, which leads to a state where unit testing is seen as a burden rather than as an efficiency booster for developers.
In the following, I summarize the most important assumptions and describe how their implementation usually fails in practice.
Theoretical Assumptions
- Someone other than the developer creates the tests, so that the developer’s wrong assumptions are also put to the test.
- Code coverage is used to evaluate test cases, allowing (i) analysis of the current state of testing, as well as (ii) comparison of unit testing effort across projects.
- Unit tests are updated as soon as the tested code unit changes. As a result, all unit tests of a project should pass before any adaptation to the system is committed. This ensures that re-running the test cases after a code change intuitively shows you (i) which parts of the system have been affected, and (ii) whether any unintended side effects have occurred (as the sketch after this list illustrates).
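As a rough illustration of the last assumption, consider two features that share a helper method. If a developer changes the helper to suit one feature and then re-runs the whole, well-maintained suite, the test of the other feature fails and exposes the unintended side effect. The classes and the rounding rule below are hypothetical, again written with JUnit 5.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical helper shared by the invoice and the receipt feature:
// rounds monetary values to two decimals.
class PriceFormatter {
    static double round(double value) {
        return Math.round(value * 100.0) / 100.0;
    }
}

class InvoiceTotalTest {
    @Test
    void invoiceTotalIsRoundedToTwoDecimals() {
        assertEquals(10.35, PriceFormatter.round(10.349), 0.001);
    }
}

class ReceiptTotalTest {
    // If a developer changes the rounding rule "only" for invoices,
    // re-running the full suite makes this test fail as well,
    // revealing that receipts are affected too.
    @Test
    void receiptTotalIsRoundedToTwoDecimals() {
        assertEquals(3.33, PriceFormatter.round(3.333), 0.001);
    }
}
```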
Practical Implementation
- The developer creates the unit tests themselves, merely to satisfy requirements given by management (project management or line management).
- Code coverage is used to evaluate the “success” of testing. As a result, developers tailor their tests to maximize coverage rather than to find defects (see the sketch after this list).
- Failing tests are ignored instead of being updated and maintained. This quickly leads to a state in which re-running the unit tests is pointless, because outdated tests give no information about the correctness of the implemented software.
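To illustrate how coverage targets get gamed, the hypothetical test below executes every line of the class and therefore reports high coverage, yet it asserts nothing, so it can never fail and verifies no requirement.

```java
import org.junit.jupiter.api.Test;

// Hypothetical class under "test".
class ShippingCostCalculator {
    double costFor(double weightKg) {
        return weightKg <= 5 ? 4.99 : 9.99;
    }
}

class ShippingCostCalculatorTest {

    // Executes both branches of costFor, so coverage tools report roughly
    // 100% line coverage. Without assertions, however, the test cannot
    // detect a wrong price and passes no matter what the method returns.
    @Test
    void touchesAllBranchesWithoutCheckingAnything() {
        ShippingCostCalculator calculator = new ShippingCostCalculator();
        calculator.costFor(1.0);
        calculator.costFor(10.0);
    }
}
```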
So in general, this flawed implementation kills most of the benefits of unit testing, leading to a state in which the effort of creating tests clearly outweighs their benefit.
What do you think about these ideas? Let me know in the comments section below.
written by Daniel Lehner