Together with Melanie Wohnert we have written a paper about the main problems of agile testing in large-scale projects for the ICST 2010 conference. Our paper was declined, but we received some very constructive comments. After rethinking it, we revised the paper and submitted it to the 30th GI TAV workshop. Our paper was accepted, and because Melanie works for Capgemini sd&m (co-organiser of the workshop), the PC decided to present it as a keynote.
The paper deals with the typical problems of agile testing which we collected in interviews with project managers and architects. Based on the identified problems, we propose using the best-known agile test method: Test-Driven Development (TDD). Besides TDD, it is very important to have a good understanding of what has to be tested at lower test levels like unit testing. Therefore we combine TDD with a solution called Test Aspect Design, which solves several problems identified during the interviews. We will give the talk in Munich on 17 June.
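Since the paper builds on TDD, here is a minimal sketch of the red-green-refactor cycle using Python's unittest. The example (a VAT price calculation) is purely illustrative and not taken from the paper:

```python
import unittest

# Red: the test is written first, before the production code exists,
# and documents the expected behaviour of the unit under test.
class NetPriceTest(unittest.TestCase):
    def test_net_price_includes_vat(self):
        self.assertEqual(net_price(100.0, vat_rate=0.19), 119.0)

# Green: write just enough production code to make the test pass.
def net_price(amount, vat_rate):
    return round(amount * (1.0 + vat_rate), 2)

# Refactor: clean up while keeping the suite green; run it programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NetPriceTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The point the paper makes is that TDD alone does not tell you *which* aspects to test at the unit level; that is the gap Test Aspect Design is meant to fill.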
Recently, together with Marian Jureczko, we extended our paper about the tool evaluation for TDD within acceptance testing. We were invited to publish it in the Electrical Review, one of the oldest journals on the so-called Philadelphia list. If accepted, it would be my first international journal article. Here is the abstract of our paper:
In recent years the software engineering community has shown a strong interest in agile development methods. These methods treat software testing, for example through the Test-Driven Development method, as an important task of the development process. Agile projects rely on good test automation tools. In this paper we evaluate five test automation tools for their usage in acceptance testing of web applications using Test-Driven Development.
The ICST conference this year was bombed by three papers of mine 🙂 My acceptance rate is 2/3! Our paper for the QuoMBaT workshop was accepted. Unfortunately I cannot present it myself. The second paper is my dissertation paper for the PhD Symposium, where I will present my research problems and results in Paris! Another good point: this paper will be published in the official proceedings of the ICST!
Last week, together with two s-lab researchers, we submitted a paper to the QuoMBaT 2010 workshop of the ICST conference. The main idea is to support test managers with a comparison method for model-based testing scenarios (like those described by Pretschner et al.). We use a GQM-like approach and compare six scenarios according to eight aspects. It is a first step towards analyzing process quality for MBT. The next step would be to interpret the process quality metrics and derive product quality metrics from them. Here's the abstract:
In the literature on model-based testing (MBT), different scenarios of MBT are introduced. These scenarios mainly differ in the origin and content of the test models. This difference also has an impact on the activities of MBT. For example, if developer models are directly used as test models, the effort for test model creation is low; however, the evaluation of test results will need more effort. We investigated the differences between MBT scenarios from a test manager's point of view, trying to answer the following questions: What effort must be expected when applying the different MBT scenarios? Can we systematically analyze and compare the different scenarios in the literature? To answer these questions we propose a GQM-based approach in which we systematically define metrics for MBT scenarios. By studying the related literature we characterize existing MBT approaches and compare them from a test manager's point of view.
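To make the GQM idea from the abstract concrete, here is a toy sketch. The goal, questions, metrics and all numbers are invented for illustration; they are not the paper's actual metric set or measurements:

```python
# Hypothetical GQM tree for comparing MBT scenarios: one goal is refined
# into questions, and each question is answered by measurable metrics.
gqm = {
    "goal": "Compare effort of MBT scenarios from a test manager's view",
    "questions": {
        "Q1: How much effort does test model creation take?": [
            "M1: person-days to build the test model",
        ],
        "Q2: How much effort does result evaluation take?": [
            "M2: person-days to analyze failed test runs",
        ],
    },
}

# Fabricated metric values for two scenarios, illustrating the trade-off
# the abstract mentions: reusing developer models is cheap to set up but
# expensive to evaluate, and vice versa.
scenarios = {
    "reuse developer models": {"M1": 2, "M2": 9},
    "dedicated test models":  {"M1": 7, "M2": 3},
}

def total_effort(name):
    """Sum the metric values of one scenario."""
    return sum(scenarios[name].values())

for name in scenarios:
    print(name, "->", total_effort(name), "person-days")
```

Comparing aggregated metric values per scenario is what lets a test manager rank the scenarios for a given project.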
As our paper for the main track of the ICST 2010 conference was declined, we (the authors) decided to rewrite it according to the review comments received and submit it to the Test-Driven Development workshop of the same conference. Because the abstract hasn't changed, I won't put it here. We tried to explain our test specification method in more detail. Further, we cited more literature in the related work part. Last, we added a clear contribution and outlook section. I hope the overall idea will convince the program committee. If not, we will publish it at a German workshop or in a journal.
Research in the field of agile testing, test-driven development, Scrum, etc. is not my main research work. This paper is the summary of a bachelor thesis which I supervised last year.
With my colleagues at Capgemini sd&m Research we have written and submitted a paper for the ICST 2010 Industrial Track. It deals with the problems of agile testing (especially Test-Driven Development) in the context of large-scale projects. It summarizes our experiences in this field and proposes an approach for handling some typical problems with a test design method we have invented. Here's the abstract:
Although testing nowadays increasingly gains in importance, component testing is mostly not approached systematically. Particularly in large-scale projects, the test manager often only cares about higher test levels, while component tests just “happen” during development. On the other hand, in agile software development, techniques like Test-Driven Development (TDD) gain attention and promise to improve quality. However, TDD does not seem to fit large-scale projects. At Capgemini sd&m we gathered typical testing problems as well as experiences with TDD. Based on these, we introduce a solution which combines lightweight test design by means of test aspects with TDD. In this paper we discuss the identified issues, present our experience in this field and introduce an approach for effectively controlling component tests.
Unfortunately this paper was not accepted, but we got very constructive feedback. In the next weeks we will work those comments into the paper. The next submission will probably be to a workshop.
As it is now clear which research problems I want to address in my dissertation, I wanted to discuss them with other researchers. That's why I submitted a short paper to the PhD Symposium of the International Conference on Software Testing (ICST) 2010 in Paris. Here is the abstract of the submitted paper:
The growing complexity of today's software development requires new and better techniques in software testing. A promising one seems to be model-based testing. The goal is to automatically generate test artefacts from models, improve test coverage and guarantee traceability. Typical problems are test case explosion and missing reuse of design models. Our research work aims to find a solution for these problems in the area of UML and business information systems. We use model transformations to automatically generate test models from manually annotated design models using a holistic view. In this paper we define and justify the research problem and present first results.
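To give a feel for the transformation idea in the abstract, here is a toy sketch in Python. The model shapes and the "test_relevant" annotation are invented for illustration; the actual research works on annotated UML models, not dictionaries:

```python
# Minimal stand-in for a design model: elements carry a hypothetical
# "test_relevant" annotation added manually by the designer.
design_model = [
    {"name": "Order",  "annotations": {"test_relevant": True},
     "operations": ["place", "cancel"]},
    {"name": "Logger", "annotations": {"test_relevant": False},
     "operations": ["log"]},
]

def to_test_model(model):
    """Derive a test model: one test objective per operation of every
    element annotated as test-relevant; unannotated elements are skipped,
    which keeps the generated test model from exploding."""
    return [
        {"objective": f"test {elem['name']}.{op}"}
        for elem in model
        if elem["annotations"].get("test_relevant")
        for op in elem["operations"]
    ]

test_model = to_test_model(design_model)
# Logger is filtered out; Order yields two test objectives.
```

The filtering step is the part that addresses the two problems the abstract names: annotations let design models be reused, and they bound which parts generate test cases.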
This paper was accepted! In April I will present my PhD work in Paris 🙂 Unfortunately, the reviewer's feedback wasn't very constructive.
Recently, together with two other researchers, we wrote a paper about different scenarios identified in model-based testing. The topic is not new, but our contribution is, first, to reference the corresponding approaches and tools. Second, we define several criteria to point out the differences between them. Last and most important, we give proposals for how managers can find a suitable scenario for their project. The paper will be published in a small German software engineering journal. Further, we will present and discuss it at the GI TAV congress in Stralsund next month. Our goal is to discuss this topic and maybe contribute an extended version to an international conference.
I almost forgot! Besides the paper about automatic test model generation with test ideas, I submitted another paper to the MoDeVVa 09 workshop. It is a continuation of our ideas published at EuroMicro. Unfortunately this paper was rejected. Here's the abstract:
At higher test levels, software testers have to validate the correctness of executable code against high-level specifications. If testers do not have detailed information about the implementation, they cannot generate test cases and prepare the test environment by means of high-level specifications alone. We propose a novel idea for transforming test cases generated by a model-based testing approach from high-level test case specifications into implementation-level test cases: the knowledge of design decisions is re-used to derive a transformation from high-level specifications to executable code. After presenting this idea in detail, we discuss techniques and tools for its realization.
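A toy sketch of the core idea, reusing design decisions to concretize abstract test steps. The step names, the mapping and the API strings are all hypothetical; the paper targets real design documentation and tool support, not a hand-written dictionary:

```python
# Hypothetical record of design decisions: each abstract test step from the
# high-level specification is bound to a concrete call in the implementation.
design_decisions = {
    "create account": "api.post('/accounts', data)",
    "check balance":  "api.get('/accounts/{id}/balance')",
}

def concretize(abstract_test_case):
    """Transform a high-level test case (a list of abstract steps) into
    implementation-level steps by looking each step up in the
    design-decision map."""
    return [design_decisions[step] for step in abstract_test_case]

executable = concretize(["create account", "check balance"])
```

The interesting part is where the mapping comes from: since the design decisions were made anyway during development, recording them once yields the transformation for every generated test case.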