
Friday, October 10, 2008

When should a test be automated - I

I stand behind my last post on the Holy Grail, but it was often misinterpreted as a call for "no test automation."

Now, certainly, that's silly. At Socialtext, we use all kinds of tools to assist in our testing: some of them traditional run-capture-compare tools, others for setup, bug tracking, grep, analysis, and summary reporting ... the list goes on.
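
To give a flavor of that first category, here is a minimal sketch of a run-capture-compare check in Python. The command and golden-file path are invented for illustration; this is not our actual tooling, just the shape of the idea.

# A minimal sketch of a run-capture-compare style check.
# The command and golden-file path are hypothetical examples.
import subprocess
from pathlib import Path

def run_capture_compare(command, golden_path):
    """Run a command, capture its output, and compare it to a saved 'golden' copy."""
    result = subprocess.run(command, capture_output=True, text=True, check=True)
    expected = Path(golden_path).read_text()
    if result.stdout != expected:
        raise AssertionError("Output differs from the golden file: " + str(golden_path))

# Hypothetical usage:
# run_capture_compare(["./generate-report", "--format=text"], "golden/report.txt")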

That leads to the question "when should a test be automated?" Of course, the question is a little silly as phrased - having a computer look at one field and having a human look at an entire window are two very different things - but I do believe it would be more helpful if I actually explored the area and provided some tangible advice.
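
To make that contrast concrete, here is a tiny sketch; the page structure, field name, and expected value are invented for the example. The automated check inspects exactly one value, which is a far narrower observation than a person glancing over the whole window.

# Illustrative only: 'page' stands in for a parsed UI screen; the field
# name and expected value are made up.
def check_order_total(page):
    # The automated check looks at exactly one field and nothing else.
    assert page["order_total"] == "$107.50"

# Hypothetical usage:
check_order_total({"order_total": "$107.50", "customer_name": "..."})
# A human looking at the same window would also notice a truncated name,
# a misaligned label, or a broken image - none of which this check can see.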

To start with, let's take a look at Brian Marick's article:

"When Should a Test Be Automated?", which he wrote in 1998.

I will use that as a jumping-off point. So take a good look, and tell me what you think.

More to come.

3 comments:

Lisa said...

I've lived by Brian's article since it came out. Still relevant, although now some of us have a whole development team to help us automate. My personal rule: if your team isn't expert at automating, start with the assumption that you will automate this test. Then, as you get into it, if you find it doesn't make sense, don't. Otherwise, you will keep shying away from automating due to fear of the unknown and the steepness of the learning curve.

Raoul Duke said...

I can't completely agree or disagree, but one thing that sticks in my craw is:

"My measure of cost - bugs probably foregone - may seem somewhat odd. People usually measure the cost of automation as the time spent doing it. I use this measure because the point of automating a test is to find more bugs by rerunning it. Bugs are the value of automation, so the cost should be measured the same way."

That seems to be missing the point of regression testing. I've been on too many projects where closed bugs had to be reopened later, such that the total cost of the bug went way up; had regression testing flagged it as soon as it was reintroduced, things would have been cheaper. That's one particular cost trade-off I believe to be important.

AgileTester said...

Here's a post describing Brian's current thinking (which seems to favor exploratory testing over comprehensive business-facing tests):

http://www.exampler.com/blog/2008/03/23/an-alternative-to-business-facing-tdd/