Tuesday, December 11, 2007

Testing Philosophy II -

About every four months, Shrini Kulkarni convinces me to drop the term "Test Automation" from my vocabulary. After all, testing is a creative, thinking process. We can automate some steps of what we are doing, speeding up repetitive work - but those don't turn out to be the investigative, critical thinking steps(*).

Still, I want to use a word to describe when I use automation to go faster. I use awkward terms like "automating the repetitive steps" or "build verification tests" for a few months, until I end up calling it test automation. Then Shrini emails me, and the cycle repeats.
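
To make "automating the repetitive steps" concrete, here is a minimal sketch of the kind of check I mean - a build verification (smoke) test. The application URL, endpoints, and page text are hypothetical stand-ins, not from any real project:

```python
# A minimal sketch of a build verification ("smoke") test - the repetitive,
# non-investigative kind of check that is a reasonable candidate for automation.
# The application URL and endpoints below are hypothetical.
import unittest
import urllib.request

BASE_URL = "http://localhost:8080"  # hypothetical application under test


class BuildVerificationTests(unittest.TestCase):
    """Fast, repeatable checks run after every build; no exploration here."""

    def fetch(self, path):
        # Return the HTTP status and body for a page of the app under test.
        with urllib.request.urlopen(BASE_URL + path, timeout=10) as response:
            return response.status, response.read()

    def test_home_page_responds(self):
        status, _body = self.fetch("/")
        self.assertEqual(status, 200)

    def test_login_page_mentions_login(self):
        status, body = self.fetch("/login")
        self.assertEqual(status, 200)
        self.assertIn(b"Login", body)


if __name__ == "__main__":
    unittest.main()
```

Checks like these are worth rerunning on every build precisely because they are dull; the investigative, critical-thinking work still happens elsewhere.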

Since I am on the down-stroke of one of those cycles, I thought it would be appropriate to link to an old paper of Brian Marick's:

I want to automate as many tests as I can. I’m not comfortable running a test only once. What if a programmer then changes the code and introduces a bug? What if I don’t catch that bug because I didn’t rerun the test after the change? Wouldn’t I feel horrible?

Well, yes, but I’m not paid to feel comfortable rather than horrible. I’m paid to be cost effective. It took me a long time, but I finally realized that I was over-automating, that only some of the tests I created should be automated. Some of the tests I was automating not only did not find bugs when they were rerun, they had no significant prospect of doing so. Automating them was not a rational decision.

The question, then, is how to make a rational decision. When I take a job as a contract tester, I typically design a series of tests for some product feature. For each of them, I need to decide whether that particular test should be automated. This paper describes how I think about the trade offs.


The paper is "When Should A Test Be Automated?" - and it is available on the web.

Now, that isn't exactly my philosophy, but I think it's good readin'.

Speaking of good readin': if you are doing test automation by writing code in a true programming language to hit a GUI, you might enjoy this classic by Michael Hunter. (If you are not, it's still an interesting read, but everything after about page 12 is very specific to writing code libraries for application 'driving' or scripting.)
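
For readers who haven't seen Hunter's paper, here is a rough sketch of the "code library for application driving" idea - not his code, and the control names and low-level helpers are made up for illustration:

```python
# A minimal sketch of the "code library for application driving" idea:
# test scripts talk to a small, app-specific library rather than poking at
# raw GUI controls directly. The LoginScreen class and the gui_type/gui_click
# helpers are hypothetical stand-ins for whatever GUI driver your tool
# actually provides.

def gui_type(control_id: str, text: str) -> None:
    """Hypothetical low-level call that types text into a named control."""
    print(f"type {text!r} into {control_id}")


def gui_click(control_id: str) -> None:
    """Hypothetical low-level call that clicks a named control."""
    print(f"click {control_id}")


class LoginScreen:
    """App-specific wrapper: tests say *what* to do, not *which* controls to poke."""

    def login(self, user: str, password: str) -> None:
        gui_type("txtUserName", user)
        gui_type("txtPassword", password)
        gui_click("btnLogin")


if __name__ == "__main__":
    # A test script stays readable, and only this library changes
    # when the GUI's control names change.
    LoginScreen().login("mheusser", "not-a-real-password")
```

The point is the layering: test scripts describe intent, and the library absorbs the churn when the GUI changes.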

And now, Shrini, I'm back on the up cycle. Really. I promise.


--heusser
(*) - It doesn't help that a lot of "test automation" products, especially in the 1990s, were snake oil, designed by marketeers to show how testing could be "sped up" with massive ROI numbers. I can't tell you how many boxes of GUI test automation software I have seen sitting, unused, on shelves. Lots - and I've heard even more stories like that from colleagues.

When I talk about test automation, that's not what I mean - see philosophy I for a better description.

3 comments:

D. D. said...

I wonder about that distinction frequently. It reminds me of one of your previous posts, "You Keep Using that Word..." and James Bach's recent post on Methodology Debates; automation is just a bad word to use in mixed crowds. It's too easily misunderstood. For some people it is a technique for performing certain kinds of tests; for others it is an ideal goal. Is it a "means" or an "end", in other words?

I agree with you in that automation should not be an "end". A tester should test like Jackie Chan fights: using whatever means is necessary to get the job done, whether it's automation or exploration (or a belt or a broom handle).

Shrini Kulkarni said...

A small correction, Matt: my concern is with the term "Automated Testing"; "Test Automation" is, relatively speaking, the more acceptable term, I believe.

One possible interpretation is this: in "automated testing", testing appears to be the focus, and done in an automated way. In "Test Automation", automation seems to be the focus, applied to test-related activities.

I would love to see people start using "Computer-assisted Testing".

In reality, what people call automated testing is (mainly) automated test execution. That is the distinction I would like to emphasise.

Shrini

Shrini Kulkarni said...

Continuing on "ill-formed" (for lack of a better word) terminologies -- how about these?

Pairwise Testing (all-pairs combinatorics - a better term would have been "Pairwise Test Design")

Model-Based Testing (what is its counterpart? How would we perform non-model-based testing? - a better term would be "[Finite state machine] Model-Based Test Design")

Action-Based Testing (LogiGear terminology - have you heard of anyone doing testing using action words?)

Keyword-Based Testing (they wanted to say "Keyword-Based Test Automation" -- oops, keyword-based computer-assisted testing)

Business Process Testing (BPT) - (HP-Mercury's innovation; it is not even business process automation in the strict sense)

To this list, to add some humour, we have monkey testing, guerilla testing, and banana testing ...

Shrini