At about 4 p.m. today, I submitted a paper together with some other researchers to a MODELS workshop called MoDeVVa. There are still several open questions, but the main idea is in my opinion very innovative. I hope the program committee thinks so too… Here's the abstract of this paper:
Model-based testing is slowly becoming the next level of software testing. It promises higher quality, better coverage, and efficient change management. However, MBT exhibits two main problems when modeling test behavior: while modeling test cases, test designers rewrite most of the system specification, and the number of test cases generated by modern tools is often not feasible. In practice, neither problem is solved. Assuming that the functional design is based on models, we show how to use them for software testing. With so-called test ideas, we propose a way to manually select and automatically transform the relevant parts of the design model into a basic test model that can be used for test case generation. We give an example and discuss the potential for tool support.
This paper was accepted! Unfortunately I can't travel to Denver, because some other researchers are already going there, as their publications were accepted for the main conference track. Still, I'm very happy!
Through my work in industry, I have discovered several benefits of using models for testing. Models can be used not only for test design but also for test analysis. A first small approach deals with the generation of a so-called test basis list. I submitted this paper to the CONQUEST conference (see http://www.isqi.org/konferenzen/conquest/2009/). Unfortunately it was declined. Here is the abstract of the paper:
One of the best quality assurance methods in software engineering projects is testing. Modern methods like model-based engineering promise higher quality and lower costs in long-term projects. As testers are often victims of the ad hoc nature of the development process, they need quick wins to be convinced of the value of models. At Capgemini sd&m Research we have developed several methods and tools for model-driven engineering. In the domain of software testing, we use models for specification-based testing. The manual process of identifying the test basis is time-consuming. To support the test process, we propose a fully automated approach for generating a so-called test basis list.
Together with a Polish researcher from Wroclaw, I examined some test automation frameworks and their usage in testing web applications. The interesting point is that we have shown how those tools can be used with test-driven development. Our paper was accepted for the Polish software engineering conference KKIO (see http://kkio09.ee.pw.edu.pl/). Here is the abstract of our paper (translated from Polish):
The chapter presents a comparative analysis of selected free tools for creating automated functional acceptance tests, namely Fitnesse, STF, JFCUnit, and Selenium. We examine whether these tools can be applied to testing web applications developed in a process based on test-driven development. Web applications are a very specific class of applications: running them requires an application server. Consequently, not every application is suitable for acceptance testing. Additional complications arise if the tests are required to exist before the source code is written, as is the case in test-driven development.
Recently I had the opportunity to work with some other researchers on a publication about system testing in the model-based context. This paper was accepted as a short paper at the IEEE Euromicro conference in Greece (see http://seaa2009.vtt.fi/mde/). Here is the abstract:
In system testing, the system under test (SUT) is tested against high-level requirements which are captured in early phases of the development process. Logical test cases developed from these requirements must be translated into executable test cases by augmenting them with implementation details. If done manually, these activities are error-prone and tedious. In this paper we introduce a model-based approach to system testing in which we first generate logical test cases from use case diagrams that are partially formalized by visual contracts, and then transform these into executable test cases using model transformation. We derive the model transformation rules from the design decisions of the developers.