Schedule and Events



March 26-29, 2012, Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference of the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At Liberty; available. Contact me by email: Matt.Heusser@gmail.com

Friday, January 23, 2009

Should I automate a test for tab order?

We've been discussing tab-order tests on my discussion list, SW-Improve, and also on the Software Testing Yahoo Group.

Elisabeth Hendrickson (or "ESH", for short) put out a post on SW-IMPROVE saying that, while automating browser tests might be prohibitively expensive for many shops, she has found that high-functioning agile shops doing acceptance-test driven development often have a different equation than a "traditional" shop - and that, for them, GUI test automation may be cheaper and deliver a greater return, more quickly.

Now, ESH is one of those rare birds who has a real understanding of agile development and broad, deep exposure to (and experience with) software testing in a great variety of contexts, coupled with an actual understanding of human nature and interaction design.

So when ESH says something, I try very hard to listen. Reading her comments made me take a very hard, realistic look at my position. Along the way, I learned a thing or two.

This is my response:

For the record, I think XP-style developer-facing TDD, as per the Jeffries and Beck books, is totally awesome. This post is about browser-driving system or 'acceptance' tests.

Elisabeth Hendrickson wrote:
>So yes, if I am working in my preferred kind of context that already
>has a significant investment in automated tests, I would automate this
>test. And I would automate others like it if there are other, similar,
>expectations that we had not yet captured and articulated in automated
>tests.

If we frame the problem in the context of acceptance tests - things the customer actually cares enough to articulate - I think a test for tab order might make sense. For example, if we are taking some sort of green-screen application that the data-entry people zip through and putting it on the web, we darn well better make sure not to require the operator to take his hand off the keyboard and onto the mouse between fields. (Yes, we might be able to automate the whole thing; different post.)

If we even bring it up and the customer says "Yes, add that to the acceptance tests", we might want to automate it.
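To make that concrete, here is a minimal sketch of what such a check might look like, driving the browser with Selenium WebDriver from Python. The URL and the field ids ("first_name", "last_name", "zip_code") are made up for illustration - this is one possible shape for the test, not a recommendation of a particular tool.

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.common.keys import Keys

    # Hypothetical form and field ids - substitute the page you actually care about.
    FORM_URL = "http://example.com/data-entry-form"
    EXPECTED_ORDER = ["first_name", "last_name", "zip_code"]

    driver = webdriver.Firefox()
    try:
        driver.get(FORM_URL)

        # Put focus on the first field, then walk the form with the Tab key,
        # asserting that focus lands on each field in the expected order.
        driver.find_element(By.ID, EXPECTED_ORDER[0]).click()
        for expected_id in EXPECTED_ORDER[1:]:
            driver.switch_to.active_element.send_keys(Keys.TAB)
            actual_id = driver.switch_to.active_element.get_attribute("id")
            assert actual_id == expected_id, (
                "Tab moved focus to %r, expected %r" % (actual_id, expected_id)
            )
    finally:
        driver.quit()

The check itself is short; the cost shows up later, because every change to the form means revisiting the expected order - which is exactly the drag I describe below.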

At the same time, I assert that I could find enough of these "quick tests", that everyone agrees "should" be automated, that the cost of writing the test automation grows to 3-5x the cost of developing the code itself. I have empirical evidence from Google that can back this up, and proposed a session at Agile 2009 to discuss it.

This is Heusser's first law of software testing: The better a tester you are, the more test ideas you have - but the Good Lord doesn't give you any more time. I have found that people who want to automate every test they can think of simply cannot think of as many tests as I can.

So, I /might/ do it if the customer is willing to add it to the short list of 'automated specifications', or browser-driving tests. Of course, being Agile, we want this list to lead us to working software, /not/ to be a comprehensive list.

Many people believe that automated tests, once written, are "free". I've worked in shops - one, in particular, a best-in-class, high-functioning agile shop - that followed this advice and now have a very large number of automated tests - say, thousands of them. Unless the GUI is static (in which case you're done, and there is no new information), these tests tend to create /drag/ on the project. The tests tend to break under CI when the GUI changes, and you need to invest time in bringing them back up to date, etc.

With straight ATDD, this is really manageable. With "automate everything", it's extremely painful.

Of course, I am a member of the context-driven school of software testing, so I do not believe in - and am essentially not /allowed/ to make - the kind of claims of "best practice" that you may often hear about testing. What I wrote above is simply my experience and current opinion. In the face of different information, I might change that opinion.


regards,


--
Matthew Heusser

3 comments:

Raoul Duke said...

i wonder if everybody would agree that: were the cost (in all sorts of senses of the term) of tests lower, they would be desired? because if we can all agree on that, then i think that a really good team figures out how to get away with murder through some new magic, ya know? and while that might be dreaming, i think a group which doesn't try to think about that at least a little bit is not a group that is maximizing its potential.

Michael Bolton http://www.developsense.com said...

I think that these discussions go screwy when we fail to clarify the premises. So I'm going to ask now:

What do you mean by automating a test for tab order?

---Michael B.

Paul said...

"Automating a test for tab order" would be a way of checking that the order of tabs on a page/screen starts in the right place and flows in the expected way (probably something that doesn't veer too far from top-bottom, left-right).