“Model-based Visual GUI Testing: An Industry-Academia Research Collaboration & Automated Dependency Detection Between Test Cases Using Machine Learning”
We at ADDQ are part of a research project on MBT and testing, and we now have the pleasure of presenting Dr. Emil Alégroth and Professor Michael Felderer, both members of the project, at our next breakfast seminar.
Model-based Visual GUI Testing: An Industry-Academia Research Collaboration
Speaker: Dr. Emil Alégroth, BTH
The quality of software testing, especially automated testing, relies on a delicate balance between the tester’s domain and technical knowledge. Hence, testers with domain knowledge (e.g. end users) may not have the technical knowledge of how to automate tests, whilst technically skilled testers (e.g. programmers) may not know how the system will be used or how to design valid tests.
Another challenge with testing is achieving test coverage of the system under test (SUT). Writing, and later maintaining, a large set of test cases is not only complex but also costly. As such, being able to generate concrete test cases from abstract input data is perceived as a favourable capability.
Model-based testing (MBT) is perceived to offer both capabilities. First, MBT allows non-technical users to model the SUT’s behaviour, while technically skilled users implement how the tests are performed. Further, since a model can be traversed in many different ways, a single model can generate many different test scenarios by combining nodes and edges into longer or shorter paths.
But what is MBT and how does it work? In this presentation, we take a closer look at the technique, how and when it can be used in practice, and how to perform it with the tool GraphWalker, in particular, how the tool can be used to perform GUI-based automated testing.
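To give a flavour of the idea before the presentation: an MBT model is essentially a directed graph where nodes are SUT states and edges are user actions, and a test scenario is one path through that graph. The sketch below is not GraphWalker itself (GraphWalker models are typically drawn as GraphML/JSON and traversed by the tool); it is a minimal, hypothetical login-flow model traversed by a seeded random walk, just to illustrate how one model yields many scenarios.

```python
import random

# Hypothetical GUI model (illustrative only, not a GraphWalker artifact):
# nodes are GUI states, edges are (action, next_state) pairs.
MODEL = {
    "Start":     [("open_app", "LoginPage")],
    "LoginPage": [("enter_valid_credentials", "HomePage"),
                  ("enter_invalid_credentials", "ErrorPage")],
    "ErrorPage": [("retry", "LoginPage")],
    "HomePage":  [("log_out", "LoginPage")],
}

def generate_test_path(start="Start", length=6, seed=0):
    """Random walk over the model; each walk is one concrete test scenario.

    Different seeds (or different stop conditions, e.g. full edge coverage)
    produce different scenarios from the same model.
    """
    rng = random.Random(seed)
    node, path = start, []
    while len(path) < length:
        action, node = rng.choice(MODEL[node])
        path.append(action)
    return path

scenario = generate_test_path(seed=1)
```

In practice a tool like GraphWalker replaces the naive random walk with coverage-driven generators (e.g. stop when all edges are visited), and each edge name is bound to an automated GUI action.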
Automated Dependency Detection Between Test Cases Using Machine Learning
Speaker: Prof. Michael Felderer, BTH
Knowing about dependencies and similarities between test cases is beneficial for prioritizing them for cost-effective test execution. This holds especially true for the time-consuming, manual execution of system-level test cases written in natural language. Test case dependencies are typically derived from requirements and design artifacts. However, such artifacts are not always available, and the derivation process can be very time-consuming.
In this presentation, we propose, apply and evaluate a novel approach that derives test cases’ similarities and functional dependencies directly from the test specification documents written in natural language, without requiring any other data source. Our approach uses an implementation of the Doc2Vec algorithm to detect text-semantic similarities between test cases and then groups them using two different clustering algorithms. The correlation between test case text-semantic similarities and their functional dependencies is evaluated in the context of an industrial onboard train control system.
Target audience: testers and QA engineers seeking knowledge and tools for automating testing, as well as testers, developers, and managers interested in model-based engineering (MBE) or MBT.