Saturday, July 14, 2007

Test Automation - IV

Readers of Creative Chaos have left some amazing comments on the previous post; if you haven't read them, please take a gander.

First off, I agree with Shrini that "regression" has too many definitions, and we get confused by its use. I think that most of the time, today, when people say "regression tests," they mean what Shrini calls type II regression - "Make sure stuff that worked yesterday still works."

Yet, sadly, even with automated regression tests hooked up to a CI server, you still don't get that! All you get is "The tests that passed yesterday still pass today." Or, as I prefer to say it, "Automated tests give you *some confidence* that no *big* defects were injected into the code since the last build."

For the most part, I've been writing about scripted test automation. For example, if we had a GetNextDate() function, we could pick two dozen different dates and run them again and again. Of course, if something breaks on a date that is not one of those twenty-four, the automated tests won't catch it.
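Here is a quick sketch of that scripted style in Python - get_next_date() is a hypothetical stand-in for the GetNextDate() under discussion, and the expected answers are hard-coded by hand:

import datetime

def get_next_date(d):
    # Hypothetical stand-in for the GetNextDate() under discussion.
    return d + datetime.timedelta(days=1)

# Hand-picked cases: month ends, leap day, year rollover, an ordinary day.
SCRIPTED_CASES = [
    (datetime.date(2007, 2, 28), datetime.date(2007, 3, 1)),   # non-leap February
    (datetime.date(2008, 2, 28), datetime.date(2008, 2, 29)),  # leap-year February
    (datetime.date(2007, 12, 31), datetime.date(2008, 1, 1)),  # year rollover
    (datetime.date(2007, 7, 14), datetime.date(2007, 7, 15)),  # ordinary day
]

def test_scripted_cases():
    # The same table of inputs, run again and again, every build.
    for given, expected in SCRIPTED_CASES:
        assert get_next_date(given) == expected, given

Fast and repeatable - but a date outside that table never gets exercised.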

That's where model-driven tests can help. For example, instead of twenty-four pre-recorded tests, the software could pick a random date between 100 BC and 2500 AD, call GetNextDate(), then pop up Microsoft Excel and ask for the date plus one - then compare the results. This can work as long as you have something like Excel to trust. (Cem Kaner calls this "High Volume Test Automation.")
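A rough sketch of that high-volume idea, substituting Python's standard datetime library for Excel as the trusted oracle (datetime only covers years 1 through 9999, so the range here is narrower than 100 BC to 2500 AD, and get_next_date() is again a hypothetical stand-in):

import datetime
import random

def get_next_date(d):
    # Hypothetical stand-in for the GetNextDate() under test.
    return d + datetime.timedelta(days=1)

def oracle_next_date(d):
    # The "something like Excel to trust" - here, the standard library itself.
    return datetime.date.fromordinal(d.toordinal() + 1)

def test_random_dates(trials=10_000):
    start = datetime.date(1, 1, 1).toordinal()
    end = datetime.date(2500, 12, 30).toordinal()
    for _ in range(trials):
        d = datetime.date.fromordinal(random.randint(start, end))
        assert get_next_date(d) == oracle_next_date(d), d

The interesting part is the volume: ten thousand random dates per run will stumble onto leap days and month ends that a hand-picked table might never touch.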

Another way to do it is to have a separate programmer write his or her own GetNextDate(), then feed both versions random dates and compare the results (a rough sketch of that comparison follows the list below). A few challenges with this -

1) In this case, you're literally coding it twice, so developing that function takes roughly twice as long.
2) If the two developers make the same mistake (which is likely - think about leap years), the two programs will agree and everything will look "just fine."
3) If the requirements are vague or wrong (how often does that happen?), the software could do what the developers expect but not what the customers want.
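For illustration, here is how that two-implementation comparison might look in Python. Both functions are hypothetical stand-ins, and notice that challenge #2 still applies - if both versions share the same leap-year bug, the test stays green:

import datetime
import random

def get_next_date(d):
    # First developer's version (the code under test).
    return d + datetime.timedelta(days=1)

def get_next_date_independent(d):
    # Second developer's version, written without looking at the first.
    leap = d.year % 4 == 0 and (d.year % 100 != 0 or d.year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    day, month, year = d.day + 1, d.month, d.year
    if day > days_in_month[month - 1]:
        day, month = 1, month + 1
    if month > 12:
        month, year = 1, year + 1
    return datetime.date(year, month, day)

def test_implementations_agree(trials=10_000):
    start = datetime.date(1900, 1, 1).toordinal()
    end = datetime.date(2500, 12, 30).toordinal()
    for _ in range(trials):
        d = datetime.date.fromordinal(random.randint(start, end))
        assert get_next_date(d) == get_next_date_independent(d), d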

So here are my conclusions ...

A) Automate unit tests for simple bounds and equivalence classes

B) If you produce a single output, then a simple automated regression test is possible. ("Is yesterday's output the same as today's?") This enables refactoring, and diffing the two outputs can show new functionality. (A small sketch of that diff check appears after this list.)

C) Documented acceptance tests prevent the "gee, I didn't mean that, I meant this" phenomenon and get the customers involved as part of the team. Automating those can help with communication and serve as a formal specification - but it might not add a lot of value in terms of finding bugs.

D) Model-Driven tests have a lot of promise for finding those quirky odd bugs, especially in the GUI. But ...

E) When it comes to the "This just doesn't look right" kind of bugs, you'll probably want exploratory testing.
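On point (B), here is a small sketch of the yesterday-versus-today diff in Python. report_output() and the golden-file name are assumptions for illustration, not anyone's real project code:

import difflib
import pathlib

def report_output():
    # Hypothetical single-output routine under test.
    return "line one\nline two\n"

def test_output_matches_golden(golden_path="golden_report.txt"):
    golden = pathlib.Path(golden_path)
    today = report_output()
    if not golden.exists():
        # First run: record today's output as the baseline for next time.
        golden.write_text(today)
        return
    diff = list(difflib.unified_diff(
        golden.read_text().splitlines(), today.splitlines(), lineterm=""))
    # Empty diff: the refactoring changed nothing. Non-empty diff: either a
    # regression or new functionality - a human has to look and decide which.
    assert not diff, "\n".join(diff)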

But that's just me talkin'. What do you think?
