Artificial Intelligence (AI) has revolutionized many areas in recent years. However, the big advantage of fast automation that was not possible before comes with a tradeoff: reduced reliability. In this article, I describe why this is an issue (especially for domains like unit testing) and show how we can integrate humans into an AI-based process to ensure the reliability of the created outcomes.
The Reliability Problem with AI-based automation
The result of an AI technique is, by definition, not guaranteed to be correct. AI algorithms take shortcuts to perform tasks that cannot be solved by straightforward calculation. For meta-heuristic search algorithms, this means, for example, exploring an intelligent subset of the overall search space to obtain a reasonably good result. For Machine Learning (ML) algorithms, it means returning the most probable outcome for a given scenario, based on historical data. In other words: the result of an AI algorithm is most probably right. But what if it is not?

For certain domains, the answer to this question does not matter much. For example, if you are on a hike and use a visual recognition app to determine the name of a rare flower you encountered, you might not worry too much if the app tells you a wrong name. If you show a website user a certain ad because your ML algorithm predicts that she wants to buy the offered product, a wrong result simply means she does not click the ad. Not good for your revenue, but still no severe harm caused.

However, if the consequences of a wrong decision are high, this question becomes very relevant. What if you want to know the name of a mushroom to decide whether to eat it or not? One such high-stakes area is software testing. What if you use AI-generated unit tests to verify whether your software project works as intended? Would you trust the outcome? Would you take responsibility for the correctness of the shipped software?
In conclusion: even if full automation is possible using AI techniques, it is definitely not desirable for certain domains (such as software testing).
Involving humans using recommender systems
As a solution to the reliability problem described above, humans have to be integrated into the AI-augmented automation process, especially in domains such as software testing. Recommender systems are a perfect way to do this. A recommender system offers a user the items that this user is most likely to choose (e.g., the most efficient test cases). As a user, you can then decide whether a recommended item is acceptable or not. This feedback (whether a recommendation was correct) can even be used to further improve the algorithm over time. Applied to unit testing, AI-based techniques can generate unit tests to save time in the testing process, and through a recommender system, you as a tester decide which of these generated test cases make sense. By doing so, you improve the recommendations and save even more time in the future.
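To make the feedback loop described above a bit more concrete, here is a minimal sketch in Python. All names, features, and scoring details are hypothetical illustrations, not a real tool: generated test cases are ranked by learned feature weights, and each accept/reject verdict from the tester nudges those weights, so future recommendations improve.

```python
# Minimal sketch of a feedback-driven recommender for generated unit
# tests (all names and features are hypothetical). Each test case is
# described by a set of features; tester feedback adjusts per-feature
# weights, and recommendations are ranked by the learned scores.

from dataclasses import dataclass, field

@dataclass
class TestCaseRecommender:
    # learned preference weight per feature (e.g. "covers_branch")
    weights: dict = field(default_factory=dict)
    learning_rate: float = 0.1

    def score(self, features):
        # higher score = tester is more likely to accept this test
        return sum(self.weights.get(f, 0.0) for f in features)

    def recommend(self, candidates, top_k=3):
        # candidates: {test_name: [features]}; return the top_k by score
        ranked = sorted(candidates,
                        key=lambda t: self.score(candidates[t]),
                        reverse=True)
        return ranked[:top_k]

    def feedback(self, features, accepted):
        # tester verdict moves this test's feature weights up or down
        delta = self.learning_rate if accepted else -self.learning_rate
        for f in features:
            self.weights[f] = self.weights.get(f, 0.0) + delta

# Example: the tester accepts a branch-covering test and rejects one
# that relies on sleeps, so branch-covering tests rise in the ranking.
rec = TestCaseRecommender()
candidates = {
    "test_login_happy_path": ["covers_branch", "short"],
    "test_login_timeout": ["uses_sleep", "short"],
}
rec.feedback(["covers_branch", "short"], accepted=True)
rec.feedback(["uses_sleep", "short"], accepted=False)
print(rec.recommend(candidates, top_k=1))  # ['test_login_happy_path']
```

A real system would of course use a richer model and richer test-case features (coverage data, mutation scores, flakiness history), but the core loop is the same: recommend, collect the tester's verdict, and learn from it.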
In this article, I showed you how you can use recommender systems to overcome the reliability problem of AI-based techniques. What do you think of this? Let me know in the comments section below!
Cartoon vector created by vectorjuice – www.freepik.com
Daniel Lehner