
Monday, October 13, 2008

When should a test be automated - II

Before we can dive in, let's take a step back.

When people talk about automation, they typically mean replacing a job done by a human with a machine - think of an assembly line, where the entire job is to turn a wrench, and a robot that can do that work.

Oh, at the unit level, where you are passing variables around, this makes a lot of sense. You don't need a human to run the test to evaluate a Fahrenheit-to-Celsius conversion function - unless you are worried about performance, and even then you can just put in some timers and maybe a loop.
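
To make that concrete, here is a minimal sketch of such a unit-level check in Python; the function itself is included so the example stands alone:

def fahrenheit_to_celsius(degrees_f):
    # the function under test: straight arithmetic, no GUI involved
    return (degrees_f - 32) * 5.0 / 9.0

def test_fahrenheit_to_celsius():
    # a machine can evaluate these assertions; no human judgment needed
    assert fahrenheit_to_celsius(32) == 0.0
    assert fahrenheit_to_celsius(212) == 100.0

A test runner like pytest can run checks like these on every build, with no human in the loop.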

But at the visual level, you've got a problem. Automated test execution comes in two popular flavors - record/playback and keyword-driven.

Record/playback does exactly what you tell it to (even at the mouseover level) and does a screen or window capture at the end, comparing that to your pre-defined "correct" image. First off, that means the feature has to work in the first place in order to record that image, so the only things you can record/playback are the ones that aren't bugs to start with. More importantly, if you change the icon set, if your image contains the date of the transaction, or if you resize the window - or the screen - you could have a test that should pass but that the computer reports as a failure.
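
Under the hood, that comparison is just a bitmap diff. Here is a minimal sketch of the idea in Python using the Pillow imaging library; the file paths are mine, for illustration:

from PIL import Image, ImageChops

def screens_match(expected_path, actual_path):
    # pass only if the two captures are pixel-for-pixel identical
    expected = Image.open(expected_path).convert("RGB")
    actual = Image.open(actual_path).convert("RGB")
    if expected.size != actual.size:
        return False  # a resized window fails before we even compare pixels
    # difference() is black wherever the images agree; getbbox() returns
    # None when nothing differs at all
    return ImageChops.difference(expected, actual).getbbox() is None

print(screens_match("golden_master.png", "latest_run.png"))

One changed pixel - a new icon, today's date in the corner - and this returns False, whether or not the application actually broke.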

To fix that, we created keyword-driven frameworks, where you drive the GUI by the unique IDs of the components. A keyword-driven test might look like this:

click_ok, field_last_name
type_ok, field_last_name, heusser
click_ok, field_first_name
type_ok, field_first_name, matthew
click_ok, submit_button
wait_for_element_present_ok, //body, hello matthew heusser, 30000
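
Under the covers, a keyword-driven framework is mostly a dispatch table mapping keyword names to driver calls. Here is a minimal sketch in Python against Selenium WebDriver - the wiring and the simplified wait_for_text_ok keyword are my own illustration of the table above:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Firefox()

def click_ok(element_id):
    driver.find_element(By.ID, element_id).click()

def type_ok(element_id, text):
    driver.find_element(By.ID, element_id).send_keys(text)

def wait_for_text_ok(text, timeout_ms):
    # poll the page body until the expected text shows up, or time out
    WebDriverWait(driver, int(timeout_ms) / 1000).until(
        lambda d: text in d.find_element(By.TAG_NAME, "body").text)

KEYWORDS = {"click_ok": click_ok, "type_ok": type_ok,
            "wait_for_text_ok": wait_for_text_ok}

def run(table):
    # each row is "keyword, argument, argument..." just like the example above
    for row in table:
        keyword, *args = [cell.strip() for cell in row.split(",")]
        KEYWORDS[keyword](*args)

Notice what the framework checks: exactly the elements named in the table, and nothing else.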

Keyword-driven tests only look at the elements you tell them to. So, if the text appears on the screen, but the font is the wrong size -- you don't know. If the icons are wrong, you don't know. In fact, the code only checks the exact things you tell it to.

At the end of every manual test case is a hidden assertion - 'and nothing else odd happened.'

Keyword-driven tests fail to check that assertion. Record/playback tests try, but lack the discernment to know whether a change is good or bad.

But that might be just fine. Keyword-driven might be good enough for some applications; in others, where we expect the image to never change, record/playback can do the job. We can use automated tests as part of a balanced breakfast, eliminating some brain-dead work so we have more time for critical-thinking work.

The question is what, when, and how much.

Stay tuned.

4 comments:

Anonymous said...

This is, of course, why I say that UI automation is a waste of time in 95% of the cases. I may even have a blog post on this somewhere :}

I also want to comment on this statement:
"When people talk about automation, they typically mean replacing a job done by a human by a machine "

I disagree - I don't want to replace human work - I want to *enhance* it. Automation allows me to simulate thousands of users, try millions of variables, or test on multiple configurations. I suppose that humans could do that, but not easily.
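
(As one concrete example of the enhancement Alan describes, here is a minimal sketch of simulating many users with Python's standard thread pool - the URL and the single-request behavior are mine, invented for illustration:)

from concurrent.futures import ThreadPoolExecutor
import urllib.request

def hit_server(user_id):
    # one simulated user making one request; a real load test would do more
    with urllib.request.urlopen("http://localhost:8000/") as response:
        return response.status

# a thousand simulated users, served by a pool of fifty worker threads
with ThreadPoolExecutor(max_workers=50) as pool:
    statuses = list(pool.map(hit_server, range(1000)))
print(statuses.count(200), "requests succeeded")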

Gert said...

I also think novices tend to jump in and try to tackle the UI portions of automated testing. They should instead concentrate on doing the easy Celsius to Fahrenheit conversions first before embarking on these more difficult activities.

AgileTester said...

This is a nice start at revisiting Brian Marick's somewhat dated, but still largely applicable, paper. Thanks for taking this on in your blog!

Regarding keyword-driven testing, there are other kinds of keyword-driven testing than what you described in this "When should a test be automated - II" post. Keywords can implement verbs and nouns in a domain-specific language (often called a "DSL"). This can even be done in GUI automation tools like QuickTest Pro. You define a set of operations in an action, and that action can be called independently and multiple times (with arguments) as needed in real test cases. This helps isolate the GUI specifics to one area of code, so that only this one area needs to be updated as the screen changes - as the sketch below shows.
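
(A minimal sketch of the layering Bob describes, in Python - the verb create_customer is mine, and it reuses the click_ok/type_ok keywords sketched earlier. The GUI locators live in one module, so a screen change means one edit:)

# gui_keywords.py - the only module that knows element IDs
LOCATORS = {
    "first_name": "field_first_name",
    "last_name": "field_last_name",
    "submit": "submit_button",
}

def create_customer(first, last):
    # a domain-level verb built from low-level keywords; test cases
    # call this and never touch the locators themselves
    type_ok(LOCATORS["first_name"], first)
    type_ok(LOCATORS["last_name"], last)
    click_ok(LOCATORS["submit"])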

Another form of keyword-driven testing can be implemented in tables with a tool like FIT/FitNesse. I'm still teaching myself FitNesse. It addresses some of the reasons why Brian shied away from automation in earlier days. If you have testability hooks in your code, you can hook into them (they're called fixtures in FIT) and test domain objects directly, perhaps performing test operations with fewer "keystrokes". In these cases, this will allow you to "test faster", but will require separate or manual tests of your GUI itself (if you have one).
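
(For flavor, here is a minimal table-driven sketch in plain Python, in the spirit of a FIT column fixture - the Discount domain class and the table values are invented for illustration:)

class Discount:
    # a hypothetical domain object, exercised with no GUI involved
    def amount(self, order_total):
        return order_total * 0.05 if order_total > 100 else 0.0

# each row is (input, expected) - the same shape as a FIT table
TABLE = [(50.00, 0.00), (100.00, 0.00), (200.00, 10.00)]

def test_discount_table():
    fixture = Discount()
    for order_total, expected in TABLE:
        assert fixture.amount(order_total) == expected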

Some test work (such as operation/coordination of embedded controllers, robotics, etc.) does not even use a GUI. I believe automation becomes even more important in the testing of this sort of complex system!

I'm looking forward to reading, learning, and discussing this article series on your blog.

sincerely,
--
Bob Clancy

Ankur said...

Nice discussion going on here... I couldn't resist commenting, since there was a mention of QTPro.

@Matthew: Not sure if those are your only specific concerns, but most of the problems you pointed out with Record/Playback can be fixed with QTP.

@alan page:

>>I don't want to replace human work - I want to *enhance* it.

Agree 100%.

>>This is, of course, why I say that UI automation is a waste of time in 95% of the cases.

hmmm... I would say that 95% of the time, a proper feasibility analysis of the automation wasn't performed.