While working on a research project aimed at creating an “AI-based Testcode Generator”, I reviewed the past few years of research on the integration of Artificial Intelligence (AI) into Unit Testing. In this article, I summarize my main learnings (I will publish an article about why we would want to apply AI to Software Testing at the end of November).
1. Growing attention from both research and industry
Attention to the topic has exploded over the last few years. On the one side, more and more companies realize the pain that comes with Unit Testing, and that this pain cannot be solved by traditional automation approaches. On the other side, research institutions are gaining more and more experience in applying Artificial Intelligence techniques to problems related to Unit Testing. As a result, many scientific platforms for AI-based Unit Testing have emerged, and the number of related publications has increased steadily in recent years.
2. Focus on buzzwords
Despite this surge of interest, a number of well-renowned researchers have observed that terms involving Artificial Intelligence, such as AI itself, Machine Learning, or Natural Language Processing, are often used to “sell” research – either to readers or to funding agencies. This urge to use buzzwords at any price tempts researchers to neglect more important aspects, such as weighing different options to find the best technique for a particular range of problems. As a result, research results often turn out to be too limited in their range of application to be used in real-world projects.
3. There is no such thing as enough data
The most prominent difference between current scientific experiments and real-world software projects is the amount of data involved. Science usually relies on a static set of often artificial data: in most cases, a toy example consisting of one simple method is created and implemented, and results are then verified against this over-simplified example. In some cases, open-source projects are used instead, but with no effort to prove that these projects are representative of real-world projects. In practice, however, unit testing is applied to huge software projects that continuously evolve and change. The amount of data (#methods, #test cases) in such projects is unimaginably large, so results obtained from a small toy example are usually not transferable. In other words, when it comes to validating research hypotheses for application to actual software projects, there is no such thing as enough data.
What is your feeling about this topic?
Is there some personal experience about AI and/or Testing that you want to share?
Do you agree with my learnings?
Let me know in the comments section below.
written by Daniel Lehne