In August 2020, the second AiTest conference took place, virtually hosted by Oxford University in the UK (http://ieeeaitests.com/). The focus of this conference is to showcase the current state of (i) how Artificial Intelligence can be used to improve Software Testing techniques and processes, as well as (ii) how techniques from Software Testing can be applied to Artificial Intelligence systems.
Automation requires Control.
A lot of current research focuses on automating parts of the testing process. This seems obvious, as the benefits of automation are easy to understand. Most approaches in this area automate the test generation process (which seems like the low-hanging fruit).
However, results indicate that these automation attempts naturally reduce the quality of the automated process steps (so the tradeoff is between the time saved through automation and the quality of its output).
Therefore, a lot of research already focuses on tools to allow analysis and control of the quality of automated steps and generated artifacts.
Let me explain this using the example of test case creation.
If you automatically generate test cases and corresponding code, you easily see that this saves you time (you do not have to implement them yourself). However, as AI cannot be 100 % accurate, not all of the generated test cases will actually make sense. This means that you cannot rely 100 % on the results that generated test cases yield. Manually reviewing all of these test cases might take more time than implementing them yourself in the first place. Therefore, you need a way to judge the quality of your test cases, so you can focus the review on those that are most probably wrong, and eventually detect them automatically. This is what a lot of research is currently trying to achieve.
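To make this concrete, here is a minimal sketch of what such a triage step could look like. Everything in it is an assumption for illustration: the quality signals (assertion count, coverage gain, the generator's own confidence), the weights, and the threshold are all hypothetical, not taken from any specific research tool.

```python
# Hypothetical sketch: triaging auto-generated test cases by a simple
# quality score, so human reviewers focus on the likely-wrong ones.
from dataclasses import dataclass

@dataclass
class GeneratedTest:
    name: str
    # Example signals a generator or analyzer might provide (all assumed):
    assertion_count: int      # tests with no assertions are suspicious
    coverage_gain: float      # share of newly covered branches, 0.0..1.0
    model_confidence: float   # the generator's own score, 0.0..1.0

def quality_score(t: GeneratedTest) -> float:
    """Combine the signals into a rough 0..1 quality estimate."""
    has_assertions = 1.0 if t.assertion_count > 0 else 0.0
    return 0.4 * has_assertions + 0.3 * t.coverage_gain + 0.3 * t.model_confidence

def triage(tests, threshold=0.5):
    """Split tests into (accept, review) based on their score."""
    accept, review = [], []
    for t in tests:
        (accept if quality_score(t) >= threshold else review).append(t)
    return accept, review

tests = [
    GeneratedTest("test_login_ok", assertion_count=3,
                  coverage_gain=0.6, model_confidence=0.9),
    GeneratedTest("test_empty_body", assertion_count=0,
                  coverage_gain=0.1, model_confidence=0.4),
]
accept, review = triage(tests)
print([t.name for t in accept])   # high-scoring tests go straight in
print([t.name for t in review])   # low-scoring tests get manual review
```

The point is not the particular formula, but the shape of the solution: instead of reviewing everything, you spend reviewer time only where the score says the generator is probably wrong.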
What is your opinion on this message?
Do you agree?
Are you interested in more posts containing such insights?
Let me know in the comments section below.