Model-Based Testing Community | Connecting the MBT world…



When MBT fails … Call for MBT Failure Stories

Hi all,

Although we are convinced that model-based testing is the state of the art in test automation, with great opportunities for quality improvement and test effort reduction, it is often hard to convince people to invest in this technology. Even teams that are convinced that this technology works find it hard to convince other teams (in the same company). We have already collected several case studies that show the positive impact of introducing and applying MBT.
However, recent surveys show that only a minority of all testers apply model-based testing or even consider it a promising approach. Are we (the MBT Community) just trapped in confirmation bias?

Now what is the problem? Like all tools and techniques, MBT only works if it is applied correctly (see Robert Binder’s comment). Are there anti-patterns or operating errors that can lead to an MBT failure? While this is theoretically possible (and has been published in scientific papers), we are not aware of any real failure story of MBT. Is it because one should/can/must never publish a failure story? Or are there no failure stories? Then what is it? Technical hurdles? Tool integration issues? Modeling languages? …?

We encourage you to provide real failure stories or experienced issues with MBT. If you have some: What were the reasons for failure? Did you analyze them? We would be very interested in such results – even if they are somehow incomplete or anonymized … many thanks in advance. You would do the MBT Community a big favor!

Remember – the MBT Community needs you!


· · · · ·


  • Kristian Karl · August 19, 2012 at 9:06 am

    I am biased, but here are my thoughts. I have been using MBT since 2004 in different organizations, working as a consultant. In my opinion, the toughest part is not so much MBT as the test automation itself.
    The implementation of test automation is often hugely underestimated in terms of resources and therefore costs. It is a parallel software development project in its own right and should be treated as such.
    So, whenever MBT fails, it is not so much MBT that failed as the test automation itself. It would have failed with or without MBT, but maybe MBT serves as a convenient scapegoat.

    However, there are some specific issues regarding MBT I try to address:
    * The models are designed by testers. The test automation code is implemented by [TA] developers.
    * Avoid too big and too complex models.
    * Not using Finite State Machines, where Classification Trees would do better and vice versa.
    * Presenting test results and progress in an understandable way for the rest of the project.
    * How to use test data in models.

    But my main point: test automation is a complicated and expensive affair. It is so much easier to do it wrong than right. And when you do it wrong, MBT won’t help you, and neither will expensive test automation tools.
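    To make the FSM point above concrete: the core of state-machine-based MBT is walking a model to derive abstract test sequences. Here is a minimal sketch in Python, assuming a hypothetical login workflow (all state and event names are invented for illustration):

```python
from collections import deque

# Hypothetical login workflow modeled as a finite state machine:
# each state maps to a list of (event, target-state) transitions.
FSM = {
    "LoggedOut": [("enter_credentials", "Authenticating")],
    "Authenticating": [("auth_ok", "LoggedIn"), ("auth_fail", "LoggedOut")],
    "LoggedIn": [("logout", "LoggedOut")],
}

def generate_paths(fsm, start, max_depth=4):
    """Breadth-first walk that collects event sequences (abstract test
    cases) until every transition has been covered at least once."""
    uncovered = {(s, e) for s, ts in fsm.items() for e, _ in ts}
    queue = deque([(start, [])])
    tests = []
    while queue and uncovered:
        state, path = queue.popleft()
        if len(path) > max_depth:
            continue
        for event, target in fsm.get(state, []):
            new_path = path + [event]
            if (state, event) in uncovered:
                uncovered.discard((state, event))
                tests.append(new_path)
            queue.append((target, new_path))
    return tests

tests = generate_paths(FSM, "LoggedOut")
# Each entry in `tests` is an abstract test case; a separate adapter
# layer (the test automation Kristian describes) maps events to real
# actions against the system under test.
```

    The split between this generator and the adapter code that executes the events is exactly the tester/TA-developer division of labor mentioned in the first bullet point.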


  • John Smith · November 3, 2012 at 11:45 am

    Actually, Model-Based Testing is both useful and a complete waste of time simultaneously. It can be so simply because all testing can be thought of as model-based; only the model representation may differ. Thus it comes down to how the model is used in the actual testing.

    A colleague of mine once argued against modelling by essentially constructing a model and using that model to show that modelling is useless. I came to think that even though he was refuting himself, he had a point. It is not that modelling is useful or useless; it is more a question of why we should bother with new, more exquisite, complicated formalisms and tools.

    The model-based testing tools that I used encourage you to make a second implementation. Then they argue that this second implementation is absolutely correct because it is the model. Yes, it is more “simplified”, but this argument suggests that the original was “complicated”, and thus the original would benefit more from simplification than from testing. Furthermore, if simplicity is the key, then surely model-based testing tools are the opposite: they are yet another layer of complexity. Now to write a program you not only need a compiler; you need a compiler, a development tool, a testing tool, a modelling tool, and god knows what else. It feels like no one ever yelled “That’s enough!”. You can’t solve the problem of complexity by adding more complexity.

    A more foundational problem with the tools was that they were essentially taking a piece of code and producing test cases covering it, which unfortunately is the same as the Halting Problem. A professor of mine long ago told me that if the problem you need to solve requires exponential computational time, don’t bother with it – it is a waste of time. And yet the model-based testing vendors are solving an even harder problem: the Halting Problem. That, if anything, is a waste of time!

    I think modelling is useful, and testing, like all fields of life, benefits from models, but model-based testing tools are a waste of time. Use models, but you don’t need special tools for that!


    • Stephan Weißleder · November 4, 2012 at 8:18 pm

      Hi John,

      thanks for your comment. It leaves some question marks in my head. Perhaps we can sort this out.

      You ask for reasons to use more complicated formalisms and tools. That’s a good question. I think the point is not the complexity of the formalisms, but the problems you can solve with them. Regarding formalisms: there is constant progress in constraint solving, model checking, and all kinds of useful techniques to support automated test generation. New tools partly support them and help in integrating other tools into a tool chain. So the usefulness of formalisms and tools is beyond question to me. If you don’t think so, then please clarify your statement.

      About the next part concerning the complexity of programs: of course, you cannot reduce the inherent complexity of a program. The idea here is to create a model only for the aspects that should be tested. And a model of the implementation is often far more complicated than a model for the test, because checking the correctness of a solution (or at least whether it lies within reasonable bounds) is easier than actually computing the solution. So it is not about reducing the complexity of programs, but about modeling, i.e., abstracting away unnecessary details. Does this resolve the issue you mentioned?
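      The “checking is easier than computing” argument is the classic test-oracle observation. A minimal Python sketch, using sorting as a hypothetical example: the oracle only verifies properties of the result, which is far simpler than implementing the computation itself.

```python
from collections import Counter

def is_valid_sort(original, result):
    """Oracle: verify that `result` is in non-decreasing order and is a
    permutation of `original`. This check is much simpler than any
    actual sorting algorithm, yet it fully validates the output."""
    in_order = all(a <= b for a, b in zip(result, result[1:]))
    same_elements = Counter(original) == Counter(result)
    return in_order and same_elements

# The system under test could be any sort implementation.
assert is_valid_sort([3, 1, 2], sorted([3, 1, 2]))
assert not is_valid_sort([3, 1, 2], [1, 2])      # lost an element
assert not is_valid_sort([3, 1, 2], [2, 1, 3])   # not ordered
```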

      I’m not sure I got your point about the Halting problem. In theory, it is clear that covering code can be as hard as the Halting problem. In many cases, however, it is far simpler. And even if covering 10% of the code were too hard, automatic support for the remaining 90% would still be very valuable. The concrete numbers are – of course – wild guesses. But there is a more important point here: model-based test generation is not focused on covering source code in the first place. It is focused on automatically producing a set of test cases that represent the behavior or structure described in the model used. And there, the complexity of the model is in the hands of the test modeling expert.
      Of course, requirements coverage and code coverage are also important in general, but they are not the focus of MBT. For this reason, your argument involving the Halting problem seems invalid to me.

      And about the last point: The idea is to automatically (!) derive tests from a model. And for this, you need tools. You see – this is one of the main ideas of using formalisms: being capable to apply tools that automatically produce results.
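      To illustrate what “automatically derive tests from a model” can mean in the simplest case, here is a sketch in the style of a classification tree: a declarative input model from which all abstract test cases are derived mechanically. The parameter names and values are hypothetical, invented for illustration.

```python
from itertools import product

# Hypothetical input model: each parameter is partitioned into
# equivalence classes (classification-tree style).
model = {
    "user_role": ["admin", "regular", "guest"],
    "payment": ["credit_card", "invoice"],
    "cart": ["empty", "one_item", "many_items"],
}

# Exhaustive derivation: one abstract test case per combination of
# equivalence classes. Real tools would typically apply a coverage
# criterion (e.g. pairwise) to cut this set down.
test_cases = [dict(zip(model, combo)) for combo in product(*model.values())]
print(len(test_cases))  # 3 * 2 * 3 = 18 combinations
```

      Even in this toy form, changing the model regenerates the whole test suite – which is the automation argument made above.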

      I hope that I could make my point clear and I’m looking forward to read your reply.


  • Report from the 2nd Model Based Testing User Conference · Model-Based Testing Community · November 6, 2012 at 11:35 am

    […] stories of MBT, as called for in the previous blog post, were absent. Maybe that would be an interesting point to explicitly add to the call for […]


  • John Smith · November 11, 2012 at 4:54 pm

    You cannot simply dismiss issues with the Halting Problem by saying they are invalid! This applies mainly to the commercial (expensive) tools purporting to deliver Model-Based Testing solutions. These tools require you to first write a model in an essentially Turing-complete formalism; then you press a magic button and it automatically analyzes the model for you and generates tests – or in other words, solves the Halting Problem. The annoying thing about the Halting Problem is that you cannot solve it! You cannot even say “my algorithm solves the Halting Problem 90% of the time”! I dare you! Present a proof of that, and your algorithm would still be performing miracles! What usually happens is that when you supply a model and there is something the generated tests don’t cover, that is precisely the part you do want covered.

    The problem with MBT is that it is yet another of these IT “silver bullet” solutions. Over the past decades we have seen so many of them: the Next Big Thing that changes everything and makes everything so much easier, and yet at best they add a little something to the toolkit of best practices, and at worst they add useless complexity that only makes things worse in the end. The whole name is a misnomer, mind you! Models have been used in engineering for centuries. Talking about “model-based X” is like talking about “wood-based carpentry”! Slapping a fancy word in front is how new IT trends are made, but it doesn’t make them any better! I know you don’t want to accept what I’m saying, and I don’t expect you to. You have a vested interest in the field, whether in an academic career or a company.

    As I’ve said before: it is not honestly complete hogwash; models are useful, but what bothers me is that it is not in the end as blissful as the fundamentalist Model-Based Testing salesmen want us to believe. What is testing, I ask you? The best example I know is airplane pilots: before every takeoff they have a procedure to perform. They test the meters and controls and check whether anything is wrong. Do these tests guarantee that the plane will fly impeccably? Of course not! They only expose obvious fatal problems. I know you do not want to hear this, but that is what software testing is: we do rudimentary tests whose purpose is to spot obvious flaws. We don’t do testing to guarantee that the software is flawless. This is where Model-Based Testing fails, in the very worst way: you are proclaiming that people should put more effort, and more sophisticated effort, into testing than into developing. Actually, what Model-Based Testing has taught me is this: if your organization finds Model-Based Testing useful, you are doing something very wrong! Obviously your testers are more intelligent than your developers, or your developers are incompetent! The advice then is to fire your developers and make your testers developers – if they can write clear, understandable models (which are in the end programs), they are the better developers.


    • Stephan Weißleder · November 14, 2012 at 7:53 am

      Dear John,

      thanks for your reply. I will just answer in the same order as you wrote the sections …

      1) I’m not saying that the Halting problem is invalid. I said that your point here is invalid. Of course, the Halting problem is everywhere around us in computer science. But is this a reason not to use any programming language like Java, etc.? No. It also does not mean that we desist from analyzing those problems or from finding test cases that cover source code (manually or automatically). And you have to find these test cases for safety-critical implementations. No assessor would listen to reasoning about missing test cases because of the Halting problem.

      2) We are all aware of the fact that “there is no silver bullet”. And this also holds for MBT. You see – but we want to find out about the shortcomings. This was the reason to start this blog post. And we also don’t say that one has to discard all lessons learned in testing (equivalence classes, boundary value analysis, regression testing, etc.). We state that the automatic generation of test cases can help cut down the costs of test case design and increase the quality of tests – there are several industry reports to support this, and I have experienced it myself several times already. And please don’t imply that we have a certain interest here and “don’t want to accept”. The discussion has to be on solid ground, so please provide references for your statements.

      3) Now, there are many implications and assumptions here – I’m not sure I can answer all of them. First, testing is not just about finding the obvious flaws. Testing is the means to show that all requirements are implemented as expected and that no unwanted functionality is implemented. Especially in safety-critical system development, this is of major importance. For instance, have a look at DO-178C for airborne systems. Of course, testing cannot show the absence of faults. But this has all been stated already, e.g., in the ISTQB Certified Tester Foundation Level. Next, I’m not proclaiming that one has to put more effort into testing than into development. This depends again on the risk of unidentified faults. I cannot follow your statement that all the companies applying model-based testing are doing something very wrong. Please have a look at the mentioned experience reports, and please support your statements. I read many statements that rather seem to be wild guesses. Please give some references to support them. You know – this is why we started this blog post: not just to collect statements or assumptions, but to receive real experience reports – you know, companies that really tried to apply the technology, failed, and analyzed WHY.



  • Arne-Michael Toersel · November 14, 2012 at 8:58 pm

    Actually, the example of aircraft pilots is a pretty good one. The sort of preflight checks these pilots conduct might be viewed as sanity checks, similar to smoke testing if you want to use software testing terminology. However, would you want to fly in an aircraft that is ONLY tested this way? Probably not! For good reason there are regular maintenance tests, exterior inspections, and so on, which you may compare to other software testing activities. Testing an airplane also means endurance testing that one tiny little engine part… So I think it is not fair to claim testing is always rudimentary. Instead, as Stephan said, it should always be adjusted to the risks.

    It seems to me, John, that you have a rather negative view of decades of research and practical improvements in testing methods. I would argue that we might not be able to create software as complex as today’s without those improvements. Of course there is no silver bullet – and that includes simply hiring more clever developers… Software faults have various causes: carelessness of developers is surely one, but, more importantly, faults also arise from miscommunication of requirements. Different testing techniques are suited to different classes of faults.

    I am not sure I can contribute further to the halting problem discussion…


  • John Smith · November 15, 2012 at 9:49 pm

    I have written provocatively, yes, and I do admit the methods in some areas of Model-Based Testing are useful, but I dislike the mentality it seems to inspire: that you can take any piece of garbage and make it better by testing.

    I have explained to you how I feel Model-Based Testing can fail. I cannot give you a reference for that because, well, I work in a place where Model-Based Testing is a religion for some, and for others the decided strategy that will not be admitted to have failed in any way until the deciding managers have moved on. This is why I write under a pseudonym – otherwise I risk losing my job. You may have the luxury of speaking openly, but I do not.

    Here is one example (the actual details and names cannot be told, to protect the guilty): We had a system with a code base of almost a million lines of really bad code, written by inexperienced developers years ago. The original developers have since either left the company or become managers. No one was really happy maintaining it, but it was a necessity, as practically everything else depended upon it. So we applied Model-Based Testing to this system, which at first went very well. The models of the system were very insightful, and the tests we applied revealed numerous faults, which was no real surprise: everyone knew it was garbage. At that stage we assessed the situation. We had a piece of garbage system full of holes, and a very nice set of models of it. Everyone hated the system and liked the models. Then we could see the obvious conclusion: the models were actually a pretty good description of what the system should be, but instead of making the system what it should be, we had this image of the system and still needed to wade through and maintain the original – while we could just as well have rewritten the thing from scratch for the same effort. The system still stands as a mockery of engineering: we know what it should be like (the model), but all we can see and work with every day is the heap of filth it actually is.

    You can choose to gloss over my concerns if you wish. Let us just settle that you can never convert me into a believer of your faith, and I will let you keep yours.


    • Stephan Weißleder · November 18, 2012 at 10:14 pm

      Dear John (or whoever you really are),

      It is sad that you cannot tell more details about the situations you experienced. We already expected that telling negative stories is not welcome at management level. But this does not keep us from asking anyway. 😉 Obviously, your experiences seem to have left an impression on you. I cannot find another explanation for why you (repeatedly) assume that we gloss over your concerns. :/

      So just to get this right: your concern is about the impression that MBT brings salvation to all problems, as some marketing guys surely want people to believe?

      It sounds like the issue we are talking about is a (possibly wrong?) management decision that resulted in automating test design instead of rewriting the whole system?
      Of course, managers can be wrong or right – but I cannot see the connection to MBT here. MBT is just a tool to help in system engineering. You also said that MBT helped in finding “numerous faults”. So this is a good result for MBT, isn’t it? Especially if these faults (as obvious as they may be) were not detected before! It may be that reimplementing the system would have been the better choice. But please keep in mind that the new system also has to be tested, is going to contain faults, has to be managed, etc. – and typically, the effort for (re-)creating a system is underestimated. So what makes you so sure that this is the better option?

      Stephan (the real one 😉 )


      • John Smith · November 24, 2012 at 8:31 am

        One of my concerns is that Model-Based Testing is in some ways the opposite of scientific practice. In physics we make models of reality, then run experiments, and if the experiments show that the model does not correspond to reality, we must revise the model. In MBT people do just about the opposite: when the model and the system do not correspond, you want to change the system.

        MBT seems to be an ideal job for a cowboy coder: you can ride in, write something from scratch, wave your hands about how it works with no need to make it do anything in practice, and ride off into the sunset, leaving the maintenance to someone else. The model doesn’t need to worry about pesky details such as robustness, presenting information to the user, networks, or persistence. It is abstract, after all. I know that writing new things is fun and maintaining old things is boring, but that’s what software engineering is.

        If we reimplement the system, we do need to test and verify it somehow, but what about the model? The validity of the model basically comes from god. The model, by design, is something you cannot really test. They do insist that, yes, well, models are much simpler representations of parts of the system, but we have that in implementation as well. It is called modularity. A complex system should be designed from modular, simple, understandable, and verifiable components. If it is not, that is a clear sign it should be reimplemented – and thus, if you find MBT useful for this reason, you should rather consider reimplementation!

        Mr. Toersel mentioned that we would not have been able to write systems as complex as today’s without advanced tools. But is that a good thing? A lot of the complexity we see is unnecessary. Maybe most of the useless complexity exists because people have not bothered with it when there are tools that are supposed to help. I think that only goes so far: eventually the combined complexity of the system and the tools you need to maintain it grows beyond what you can manage!


        • Stephan Weißleder · November 27, 2012 at 10:35 am

          Hi John (or whoever you are),

          your comments are really not about failure stories of model-based testing, but just wild guesses (as I wrote before). Perhaps we should start a FAQ section and discuss your questions there…

          Just a few short answers:
          Part 1: The idea is to build a system that corresponds to the requirements, and not just ANY (!) system. You know – there is no sense in declaring the current version of the system to be “reality” and demanding models that conform to the system instead of to how the system should be. That’s the basic idea of testing! I wonder how you could bring this up – but it may be a good topic for the FAQ section…

          Part 2: You can also build models for robustness and all the other things you mentioned. Of course, one always has to keep in mind that automation is not always the best solution. The rest of this part is just guessing on your side…

          Part 3: Of course, the model does not fall from the sky, and its validity is not given by god. You know – there are things called engineering processes, and these processes can also be applied to models. And – of course – you can test these models, e.g., by simulation, model checking, reviews, etc. … The basis should always be the requirements.



          • John Smith · November 27, 2012 at 7:49 pm

            Fine! Everything I say is “wild guesses” and everything you say is the truth. I don’t want to argue about that. Obviously you are more educated and have a fine degree on the subject, and I am only a simple, ignorant engineer who doesn’t understand your ingenious methods.

            You know, that sounds exactly like the last discussion I had with our lead model-based tester: he nearly beat me to death with a book on abstract model-based testing – I have learned not to talk to these people anymore unless absolutely necessary.

          • Stephan Weißleder · November 27, 2012 at 9:03 pm

            Come on, don’t be so easily offended. You also use unfounded and partly offensive comparisons – so don’t expect me to be fair and neutral all the time. 😉

            And seriously – the previous comments are mostly not about MBT, but about requirements engineering, basic aspects of modeling (yes, you can build models for everything – also for robustness, etc.), and model validation, which is neither new nor a direct result of MBT. So why did your lead model-based tester use a book on MBT?

  • Arne-Michael Toersel · November 27, 2012 at 9:32 pm

    Dear John,

    I cannot really follow your argumentation. For example, you state that modularization is a solution to tackle complexity. Agreed, but I would argue again that this is not sufficient. You may have extensively tested components that still misbehave when integrated, just because their interfaces do not match. Also, all those nicely tested and verified components can still form a system that is not adequate with respect to the user requirements, because the requirements were not properly translated into component specifications – or were not captured properly in the first place. This MIGHT be a point for using models, because they can serve as a communication and clarification tool. If we can derive test cases from these models that check whether the system conforms to this base of common understanding, why shouldn’t we do it?

    As I read your posts, the proper answer to software quality problems is simply to build better software. But this is WHAT we should do – I would be eager to hear more about HOW we should do it, in your opinion. Which testing techniques do you prefer? Which tools do you think are worth using, and which just add useless complexity? Do you think modern IDEs are useful or not? What about high-level programming languages?

    Best regards,





© by MBT Community 2011