In my three years as a consultant in software test automation for C#, I’ve written a lot of test cases.
These are my top 5 learnings.
1. A failing test doesn’t necessarily mean that you found a bug in the tested system.
In fact, there are three sources of test failures. (1) Most often, a test fails because the requirements the test case is based on are either outdated or incomplete. (2) The second most common reason is that the implementation of the test case itself is wrong. (3) And only in third place, the test case fails because there is an (objectively verifiable) defect in the tested system.
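To make source (1) concrete, here is a minimal, hypothetical sketch in C# with NUnit: the test still asserts an outdated requirement (19 % VAT), while the system has already been updated to the new rate. The test fails, yet the tested system is correct.

```csharp
using NUnit.Framework;

// Hypothetical system under test: the VAT rate was raised from 19 % to 20 %,
// and the implementation was updated accordingly.
public class Invoice
{
    private const decimal VatRate = 0.20m;
    public decimal NetPrice { get; }
    public Invoice(decimal netPrice) => NetPrice = netPrice;
    public decimal GrossPrice => NetPrice * (1 + VatRate);
}

[TestFixture]
public class InvoiceTests
{
    [Test]
    public void GrossPrice_IncludesVat()
    {
        var invoice = new Invoice(netPrice: 100m);
        // Still asserts the outdated 19 % requirement, so this test fails
        // even though the system behaves correctly.
        Assert.That(invoice.GrossPrice, Is.EqualTo(119m));
    }
}
```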
2. The lower the quality of the requirements, the higher the effort for creating (and maintaining) test cases.
Test cases are (usually) created based on requirements specifications. However, requirements often contain ambiguous, inconsistent, or incomplete information. The more such issues are present, the more effort is required to obtain information that test cases can be based on, or to identify whether a test failure is due to wrong requirements or to a defect in the system implementation. Additionally, when requirements are not continuously and consistently updated, maintaining test cases after requirements changes becomes very tedious and time-consuming.
3. 80 % of the effort, but also of the value, is in test maintenance.
“Create once, run always” is the slogan managers often use to sell test automation effort. Although it is true that 80 % of the value is created after the first execution of an automated test case, what this slogan neglects is that 80 % of the effort for automated tests also arises after this first execution. (1) Test cases must be adapted after requirements change. (2) Structural updates of the tested system (that do not necessarily require updates in the requirements specification) can break test cases (e.g., changed API specifications, or changed identifiers for frontend elements). (3) Regular refactorings are required to keep the test base clean (and to allow adding test cases efficiently). Neglecting these efforts usually leads to a state where most automated test cases cannot be relied on, or do not even compile, because no time and money is reserved for keeping them up to date. This state, in turn, kills 80 % of the value of these test cases.
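One refactoring that pays off against point (2) is the page object pattern: all element identifiers live in a single class, so a changed frontend identifier means one change instead of dozens across the test base. A minimal sketch with Selenium WebDriver, using hypothetical element IDs:

```csharp
using OpenQA.Selenium;

// Page object that centralizes all selectors for the login page.
// If the frontend changes an identifier, only this class needs updating,
// not every test case that logs in.
public class LoginPage
{
    private readonly IWebDriver _driver;

    // Hypothetical element IDs; adapt to the tested system.
    private static readonly By UserField = By.Id("username");
    private static readonly By PasswordField = By.Id("password");
    private static readonly By SubmitButton = By.Id("login-submit");

    public LoginPage(IWebDriver driver) => _driver = driver;

    public void LogIn(string user, string password)
    {
        _driver.FindElement(UserField).SendKeys(user);
        _driver.FindElement(PasswordField).SendKeys(password);
        _driver.FindElement(SubmitButton).Click();
    }
}
```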
4. Automated Frontend Tests never work.
In my experience, the effort of automated frontend tests is so high that there is seldom a point in time at which they actually work for a whole project. (1) Complex mechanisms, e.g. for login or session timeouts, lead to a lot of effort for setting up a test project. (2) Reliably addressing individual UI elements is often a problem when creating individual test cases. (3) And then, frontends usually change in each software iteration, which potentially breaks all existing test cases (so you start again at step 2).
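A recurring cause of problem (2) is that the test races against page loads, logins, and session handling. A small helper based on Selenium’s WebDriverWait makes element access more robust; this is a sketch, and the timeout is an assumption to be tuned per system:

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

// Waits until an element is present instead of failing immediately.
// WebDriverWait retries the lambda until it returns a value or the timeout expires.
public static class WaitHelper
{
    public static IWebElement WaitForElement(IWebDriver driver, By locator, int timeoutSeconds = 10)
    {
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(timeoutSeconds));
        return wait.Until(d => d.FindElement(locator));
    }
}
```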
5. Never trust a developer who says “I’ve tested already”.
As developers, we believe in ourselves and the code we create. This is why we usually underestimate the testing effort that is required to objectively check whether our code actually does what it is intended to do. More than once, as a tester, I received “already tested” software in which I could not log in, or where the main page did not even load 😉
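Even a minimal automated smoke test catches exactly these cases before a tester ever sees the build. A hypothetical example with NUnit and Selenium (the URL and locator are placeholders for the system under test):

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

// Minimal smoke tests: does the main page load at all, and is the login form there?
[TestFixture]
public class SmokeTests
{
    private IWebDriver _driver;

    [SetUp]
    public void SetUp() => _driver = new ChromeDriver();

    [TearDown]
    public void TearDown() => _driver.Quit();

    [Test]
    public void MainPage_Loads()
    {
        _driver.Navigate().GoToUrl("https://example.com/");
        Assert.That(_driver.Title, Is.Not.Empty);
    }

    [Test]
    public void LoginForm_IsPresent()
    {
        _driver.Navigate().GoToUrl("https://example.com/login");
        Assert.That(_driver.FindElements(By.Id("username")), Is.Not.Empty);
    }
}
```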
I would love to further discuss these experiences with you in the comments section below.
written by Daniel Lehner