Community Tool Survey: We need your support!
Dear MBT Community,
Some weeks, or even months, ago we had a great idea: let us organize a Community Tool Survey. We are confident that this could become a great event, but we need your help!
We all know such events from other domains, such as constraint solving or model checking. Doing something similar in the MBT context is a great opportunity to shed some light on the different domains in which MBT can be applied, and on the different approaches tool vendors take to handle the challenges of these domains. However, tools are built against different domain backgrounds, and this should be kept in mind when organizing such a Community Tool Survey.
What we have in mind is a survey that essentially consists of three main steps:
- Define, within our community, a requirements specification for a problem to be solved.
- Let MBT tool vendors tackle this problem using their test generators.
- Publish the results within the MBT Community for further discussion.
In the following, we present some of our questions and ideas about the survey and invite everyone to give us constructive feedback on them.
1) Where do the specifications of the tasks to be solved come from? Do we write a specification ourselves? Do we use well-known problem descriptions that are already available?
2) What should the specifications of the tasks to be solved look like? Should we use concrete (formal) models or textual specifications? Since there are many input languages for the different tools, choosing just one modeling language would be unfair to n-1 tools. Also, not all languages support all features equally well, e.g. compositionality, nondeterminism, (a)synchronous communication, timing aspects, etc. We could also invite each participating tool vendor to submit one model from their own domain. This would level the playing field.
3) Do we provide a system under test (SUT)? Doing so would allow us to measure metrics such as code or requirements coverage, or even the number of detected failures. This brings us to the next question:
4) How should the results of the test generation be evaluated? There are many well-known means, such as model coverage, code coverage (if an SUT is available), and mutation analysis on the code (again, this requires an SUT) or on the model. We could also ask domain experts to create test cases manually, based on the model, and compare the automatically generated test cases to the manually designed ones. As another indicator, we could look at efficiency (e.g. number of test cases, test case length). Or is such a comparison too far removed from reality? What about just presenting the results and analyzing the different approaches from the different domains? Knowing which tools are suited to which domains would also be a valuable result.
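To make question 2 a little more concrete, here is a minimal sketch of what a "concrete (formal) model" and a test generator operating on it could look like. The coffee-machine model and all names are invented for illustration; real MBT tools use far richer input languages than this toy labeled transition system.

```python
# Hypothetical toy example: a coffee-machine model as a labeled transition
# system, plus a naive generator that derives test sequences by walking
# the model exhaustively up to a given depth.

MODEL = {
    # state: {action: next_state}
    "idle":    {"insert_coin": "paid"},
    "paid":    {"press_button": "brewing", "refund": "idle"},
    "brewing": {"take_cup": "idle"},
}

def generate_tests(model, start, depth):
    """Enumerate all action sequences of length 1..depth (exhaustive walk)."""
    tests = []
    def walk(state, trace):
        if trace:
            tests.append(trace)
        if len(trace) < depth:
            for action, next_state in model[state].items():
                walk(next_state, trace + [action])
    walk(start, [])
    return tests

for test in generate_tests(MODEL, "idle", 3):
    print(" -> ".join(test))
```

Even this toy shows why a single input language is contentious: the dictionary encoding above has no notion of timing, data, or (a)synchronous communication, features that other formalisms treat as first-class.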
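As an illustration of the mutation-analysis idea from question 4, the following sketch (all functions and mutants are invented for illustration) computes a mutation score: each mutant is a small syntactic change to the SUT, a mutant is "killed" if at least one test case fails on it, and the score is the fraction of killed mutants.

```python
# Minimal sketch of mutation analysis on an SUT (all names hypothetical).

def sut(a, b):
    """Original system under test: clamp a + b to be non-negative."""
    return max(a + b, 0)

# Hand-written mutants, each a small syntactic change to the SUT.
mutants = [
    lambda a, b: max(a - b, 0),   # operator mutation: + becomes -
    lambda a, b: max(a + b, 1),   # constant mutation: 0 becomes 1
    lambda a, b: a + b,           # statement mutation: clamp dropped
]

# Generated test cases: (inputs, expected output from the original SUT).
test_suite = [((2, 3), 5), ((-4, 1), 0), ((0, 0), 0)]

def mutation_score(mutants, tests):
    """Fraction of mutants killed by the test suite."""
    killed = sum(
        any(mutant(*args) != expected for args, expected in tests)
        for mutant in mutants
    )
    return killed / len(mutants)

print(mutation_score(mutants, test_suite))  # -> 1.0 (all mutants killed)
```

A higher mutation score for one generated test suite than for another would be one (imperfect) indicator of which generator produced the stronger tests.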
We are looking forward to your input 🙂 Use the comment functionality of this blog post, or simply write us an e-mail.
Also: spread the word about our idea! You can do so via Facebook, Twitter, LinkedIn, e-mail, etc. This project depends on your support and input. More feedback lets us calibrate the idea and avoid (common) pitfalls. In the end, the survey is done by the community, for the community!