Last week, Alex Knapp, a general technology blogger for Forbes.com, ran a short article on model-based testing.
I took a fair bit of issue with the article, and called him on it. I must say, I was impressed with Alex's response.
First, he followed up his summary post with an interview that had a little more depth. Second, the guy called me up for a friendly dialogue.
I'm still not impressed with the original piece, and I have issues with the interview. What impressed me was the follow-up, the genuine interest in figuring out the truth, the willingness to consider both sides of the discussion. As a general-interest "tech" blogger, Mr. Knapp didn't have a deep understanding of testing when he began the process ... but I have the impression he might when he finishes.
Anyway, after posting the interview, he asked me for my feedback, and I gave it over email. Afterwards, we kept talking, and thought it might be worth sharing with, well, everybody else. So here goes ... my reply to the latest interview:
As a tester, I run into this idea all the time -- that we can automate away testing. It seems like every year, a new crop of students graduates from CMU, Berkeley, and MIT with CS degrees. (Only thing is: They haven't studied testing.)
What computer scientists do, of course, is write programs to automate business processes. So it makes sense that someone with a CS degree would say "hey, testing, there's a straightforward business process -- we should automate it!"
I do want to give Mr. Bijl some credit for this strategy -- model-based testing is a more complete, more cost-effective way to test applications than traditional, "linear" test automation.
It's also not new -- Harry Robinson has been championing the idea for going on a decade. You might even check out his website -- Harry has worked at Google, Microsoft, and AT&T. He is currently BACK at Microsoft, on the Bing team. Really good guy.
What impresses me about Harry is that he is realistic about what model-based testing can do.
For example, let's look at Mr. Bijl's rhetoric one more time:
"It enables to automate all phases of software testing: test creation, test execution, and validation of test outcome."
If that were true, then he would basically develop a BUTTON, right? You'd type in a URL and click "test it" and then get test results.
Of course, this can't possibly work. Sure, you could write software to go to the URL, look for input fields, type in random inputs, and click random buttons. You could get back 404 errors from broken links and such, but, most importantly, the tester software wouldn't know what the tested software should do, so it would have no way to evaluate correctness. Whether it's a simple Fahrenheit-to-Celsius converter or Amazon.com, either way, you need to embed business rules into the test program to predict the right answer, and to compare the given answer to "correct."
In software testing, we call this the "Oracle" problem.
That "oracle" is the "model" in model-based testing. Someone still has to program it.
Once you "just" program the model, then you can set your application free on the website, to bounce around amazon.com, sending valid input and looking for errors.
The problem is that little term "just." It turns out that, in most cases, programming a model is exceedingly complex. (Google "The unbearable lightness of model based testing".) I've done a fair bit of it myself -- for example, if you have complex business rules in a database, and need to predict an "answer" to a question for a given userID, you might have two programmers code the logic independently, then compare the results with a FOR loop. I have done this.
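Here's a rough sketch of that dual-implementation trick, with a made-up discount rule and made-up names: two programmers code the same business logic independently, and a plain FOR loop compares the answers for every userID.

def answer_for_user(user_id, purchases):
    # Implementation #1: the application's version of the rule (hypothetical).
    total = sum(purchases[user_id])
    return total * 0.9 if total > 100 else total

def reference_answer_for_user(user_id, purchases):
    # Implementation #2: the independently coded model/oracle (hypothetical).
    total = 0
    for amount in purchases[user_id]:
        total += amount
    if total > 100:
        return total * 0.9
    return total

def compare_all(purchases):
    mismatches = []
    for user_id in purchases:  # the plain FOR loop from the story above
        if answer_for_user(user_id, purchases) != reference_answer_for_user(user_id, purchases):
            mismatches.append(user_id)
    return mismatches

purchases = {"u1": [40, 30], "u2": [90, 50], "u3": []}
print(compare_all(purchases))  # an empty list means the two implementations agree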
For more complex applications, especially ones with a GUI, the number of states and transitions begins to grow exponentially. Most people applying MBT eventually "give up" and use it to find errors, because error codes are easy to predict. The problem is, that doesn't tell you about cases where no error is generated but the business logic is incorrect.
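As a toy illustration of that "give up and just look for errors" pattern, here is a sketch of a random walk over a small, invented site model. Every name here is hypothetical; the thing to notice is that the only check is "did it return an error?" -- nothing ever verifies that a page showed the right content.

import random

# A toy model: pages and the links between them (invented site map).
MODEL = {
    "home":    ["search", "cart"],
    "search":  ["home", "product"],
    "product": ["cart", "home"],
    "cart":    ["home"],
}

def visit(page):
    # Stand-in for driving the real application (say, through a browser driver).
    # We only simulate "did it error out?" -- we never check whether the page
    # showed the right products or the right total.
    return {"status": 200, "page": page}

def random_walk(steps=50, seed=0):
    random.seed(seed)
    page = "home"
    for _ in range(steps):
        result = visit(page)
        assert result["status"] < 400, "error code on page %s" % page
        page = random.choice(MODEL[page])

random_walk()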
I don't mean to be too critical of MBT -- it's a good tool for your toolbox. Presenting it as the solution to all testing woes, well, yes, I take issue with that. If you'd like to do an interview with Forbes, or moderate a point-counterpoint or such, I'd be interested in it. (I know you are a generalist, maybe Forbes could use a test/dev specialist blogger?)
I'll be at the Conference for the Association for Software Testing (CAST 2011) in August in Seattle, and the Software Test Professionals Conference (STPCon 2011) in October in Dallas. I'm happy to talk more about this.
Here we have an interesting topic, a receptive audience, and the capability to cause a little bit of change. I don't know about you, but I'm more than a little tired of the every-batch-of-CS-grads-sees-testing-as-something-to-code-up mentality.
I took a shot. The audience seems receptive; I even proposed a point-counterpoint interview as a next step. Does anyone else have an idea on how to keep the ball rolling?
5 comments:
Thanks for posting this, Matt - your thoughts very much echo mine after reading the followup interview. I think the way to stop the cycle of new grads trying to automate away testing involves more awareness of software testing among the faculty teaching those grads. Bringing more awareness of the complexities and social-science aspects of testing will hopefully shift the focus towards finding the balance between automation and human-driven testing. Unfortunately, I don't have any ideas for new ways to do this at the moment.
Doesn't every CS grad want to solve the halting problem, or was that only me? :-)
Nice Matt. Keep on them about this. I read the blogs (both) after seeing Michael Bolton's tweets reflected on LinkedIn.
My reaction was the same at first too, and I just put it away as the blind leading the blind. And that is also part of the problem.
You get journalists (or bloggers as quasi-journalists) who don't do enough research and fact checking before they shoot off their mouths. Price we pay for instant gratification; going off half-cocked as my father would say. And the damage done is that some unknowing business person will read this and then assume MBT is the end all solution. They want the "Automagic".
Well, as I'm fond of saying, "It's Automation, Not Automagic!" Enough said before I go off on an even longer rant.
Again, good job and keep on them. I'll be following to see how it goes. Best of luck with it.
Jim Hazen
Much as it pains me to say, I agree with Matt. In my academic experience, little time was devoted to testing. Even in so-called software engineering courses, the chapter on testing was quickly brushed over. Some CS courses may teach unit testing, or test-driven design/development, but I feel it really takes experience to understand how weak an undergrad's understanding of testing may be fresh out of school. The question I always ask, though, is how do we fix this problem? Andy suggests more knowledge in the faculty. I'm not sure knowledge alone is an answer. Institutions put a lot of emphasis on memorization, a lot of emphasis on projects. Unless a 'testing' requirement is added to CS and CpE engineering degrees, I don't see more knowledge in the faculty alone fixing things, even as they continue to spit out the structured design paradigm and Waterfall methodologies.
I'm a bit late to the party, sorry, but I just ran into this blog.
I am Machiel van der Bijl from Alex's article. And I agree and disagree. The original mailing was sent by the communication department of Twente University and all they try to do is get attention. And I have to give it to them, they did a great job.
Does that mean that model-based testing is only rhetoric? No.
We've been helping clients solve real software construction issues at Axini for the last 5 years. People pay money for it, so I think MBT is more than rhetoric.
Our claim is that MBT, the way we do it, is the first test-automation approach that can thoroughly test software in an economically viable way.
And as always, we are more than willing to shed some more light on the topic if someone is interested. We're only one email away :)