Before we can dive in, let's take a step back.
When people talk about automation, they typically mean replacing a job done by a human with a machine - think of an assembly line where a worker's entire job is to turn a wrench, and a robot that takes over that work.
Now, at the unit level, where you are passing variables around, this makes a lot of sense. You don't need a human to run the test that evaluates a Fahrenheit-to-Celsius conversion function, unless you are worried about performance, and even then you can just put in some timers and maybe a loop.
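To make that concrete, here's a minimal sketch of what unit-level automation looks like: a conversion function, a few assertions a machine can run on every build, and the "timers and a loop" trick for a crude performance check. The function and test values are my own illustration, not from any particular codebase.

```python
import time

def fahrenheit_to_celsius(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

# The "test" is just assertions -- no human required to run or judge it.
assert fahrenheit_to_celsius(32) == 0                 # freezing point
assert fahrenheit_to_celsius(212) == 100              # boiling point
assert abs(fahrenheit_to_celsius(98.6) - 37) < 0.01   # body temperature

# Worried about performance? Wrap it in a timer and a loop.
start = time.perf_counter()
for _ in range(100_000):
    fahrenheit_to_celsius(100)
elapsed = time.perf_counter() - start
assert elapsed < 5.0, "conversion is suspiciously slow"
```

That's the whole category of work a machine does strictly better than a person: same inputs, same checks, every time.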
But at the visual level, you've got a problem. Automated Test Execution comes in two popular flavors - record/playback and keyword driven.
Record/Playback does exactly what you tell it to (down to the mouseover level) and does a screen or window capture at the end, comparing that to your pre-defined "correct" image. First off, the feature has to work in the first place for you to record that image, so the only things you can record/playback are the ones that aren't bugs to start with. More importantly, if you change the icon set, if your image contains the date of the transaction, or if you resize the window - or the screen - you could have a test that passes but a computer that tells you it fails.
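Here's the brittleness in miniature. This is a fabricated sketch, not a real tool's comparison routine: the recorded "golden" image is matched byte-for-byte, so a screen that differs only in the transaction date fails, even though the behavior is correct.

```python
# Record/playback-style verification: exact match against the golden image.
# The "images" here are fake byte strings standing in for screen captures.

def screens_match(golden: bytes, captured: bytes) -> bool:
    """Exact comparison -- the only judgment record/playback can make."""
    return golden == captured

golden = b"LOGIN OK  2012-03-01"   # captured when the test was recorded
today  = b"LOGIN OK  2012-03-02"   # same correct behavior, new date

assert screens_match(golden, golden)       # replay on the same day: pass
assert not screens_match(golden, today)    # one changed "pixel": false failure
```

The tool has no way to know the date stamp is supposed to change; it only knows the bytes differ.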
To fix that, we created keyword-driven frameworks, where you drive the GUI by the unique IDs of the components. A keyword-driven test might look like this:
type_ok, field_last_name, heusser
type_ok, field_first_name, matthew
wait_for_element_present_ok, //body, hello matthew heusser, 30000
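Under the hood, a keyword-driven framework is a dispatcher: it reads each row, looks up the keyword, and calls the matching action against the element ID. Here's a toy sketch of that idea. Real frameworks drive an actual browser or GUI toolkit; the fake `screen` dictionary and the greeting logic below are my own illustration.

```python
# A toy keyword-driven dispatcher. Each script row is: keyword, target, argument.

screen = {}  # fake GUI state: element ID -> current text

def type_ok(field, text):
    """Type text into the element with the given ID."""
    screen[field] = text

def wait_for_element_present_ok(locator, expected, timeout_ms):
    """A real framework would poll the UI until timeout; here we render directly."""
    screen[locator] = "hello %s %s" % (screen["field_first_name"],
                                       screen["field_last_name"])
    assert expected in screen[locator], "element never appeared: %s" % expected

keywords = {
    "type_ok": type_ok,
    "wait_for_element_present_ok": wait_for_element_present_ok,
}

script = [
    ("type_ok", "field_last_name", "heusser"),
    ("type_ok", "field_first_name", "matthew"),
    ("wait_for_element_present_ok", "//body", "hello matthew heusser", 30000),
]

for keyword, *args in script:
    keywords[keyword](*args)
```

Notice what the dispatcher checks: exactly the elements named in the script, and nothing else on the screen.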
Keyword-driven tests only look at the elements you tell them to. So, if the text appears on the screen, but the font is the wrong size -- you don't know. If the icons are wrong, you don't know. In fact, the code only checks the exact things you tell it to.
At the end of every manual test case is a hidden assertion - 'and nothing else odd happened.'
Keyword-driven tests fail to check that assertion. Record/Playback tests try, but lack the discernment to know whether a change is good or bad.
But that might be just fine. Keyword-driven might be good enough for some applications; in others, where we expect the image never to change, record/playback can work. We can use automated tests as part of a balanced breakfast, eliminating some brain-dead work so we have more time for critical-thinking work.
The question is what, when, and how much.
Schedule and Events
March 26-29, 2012, Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference for the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At Liberty; available. Contact me by email: Matt.Heusser@gmail.com