Friday, August 03, 2007

Test Automation - IV

Right now one of the "louder voices in the room" for test automation is the "Agile" test automation voice. And by that I mean something very specific - that you create a large series of clerical tests that go from the acceptance level all the way down to the unit level; that you can run those tests at the click of a button and get a green or red bar (pass/fail) very, very quickly. If any component in the system is slow, you can mock or stub it out to get that fast feedback. A few people who talk about this idea admit that you may want to actually run the system end-to-end, for example by running the full, slow test suite overnight.

This kind of test automation can be very valuable, especially for software developers, but it isn't the only kind.

Earlier this week, Ben Simo and I were kicking ideas around and we came up with:

1 Any tool that eliminates manual, clerical work is test automation (diff, loadrunner, tasker)

2 Finite-State-Machine, Random-Number Generating Tests are test automation (In the Harry Robinson sense, or, for that matter, the Ben Simo sense)

3 "Big FOR loop with an external, trusted Oracle" test automation

4 "Develop it twice, run it on both systems, and compare the results" test automation (create two not-very-trustworthy oracles and check them against each other)

5 Monkey Test Automation. Press Random buttons, record the order, wait for a crash, and then dump the log.

6 Pairwise Test Generation tools are test automation.

7 Any test case generator (perlclip) is test automation

This is a quick, sloppy list. What are some other forms of test automation? What am I missing?
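To make item 3 concrete, here is a minimal sketch of "big FOR loop with an external, trusted oracle" automation. Everything here is invented for illustration: my_sort stands in for whatever code you actually want to exercise, and Python's built-in sorted() plays the role of the trusted oracle.

```python
import random

def my_sort(xs):
    # System under test: a hand-rolled insertion sort, standing in
    # for the real code you want to exercise.
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

def big_for_loop_test(trials=1000, seed=42):
    rng = random.Random(seed)
    for _ in range(trials):
        # Generate a random input...
        data = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        # ...ask the external, trusted oracle what the answer should be...
        expected = sorted(data)
        # ...and compare it to what the system under test actually did.
        actual = my_sort(data)
        assert actual == expected, f"mismatch on input {data!r}"
    return trials

print(big_for_loop_test(), "cases checked against the oracle")
```

The same loop also covers item 4: swap sorted() for a second, independently developed implementation and you are comparing two untrustworthy oracles against each other instead.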

It would be awful nice to come up with a consistent model for different types of testing tools. Failing that, I'd settle for a vision, or, to be honest, a "good start."
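For contrast, item 5 (monkey testing) from the list above can also be sketched in a few lines. TinyApp and its crash rule are made up purely for illustration; the point is the shape of the technique: press random buttons, record the order, wait for a crash, then dump the log.

```python
import random

class TinyApp:
    """Toy system under test; crashes if 'delete' is pressed before 'open'."""
    def __init__(self):
        self.opened = False

    def press(self, button):
        if button == "open":
            self.opened = True
        elif button == "delete" and not self.opened:
            raise RuntimeError("deleted with nothing open")

def monkey_test(buttons=("open", "close", "save", "delete"),
                presses=100, seed=7):
    rng = random.Random(seed)
    app, log = TinyApp(), []
    for _ in range(presses):
        button = rng.choice(buttons)   # press a random button...
        log.append(button)             # ...record the order...
        try:
            app.press(button)
        except Exception as crash:     # ...wait for a crash...
            return crash, log          # ...then dump the log.
    return None, log

crash, log = monkey_test()
```

The recorded log is what turns a lucky crash into a reproducible bug report.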

Elisabeth Hendrickson just announced the Agile Alliance Functional Testing Tools Visioning Workshop, in Portland, in October. I think all of these could fall into that vision. If I could only cover the travel expense and lost income, I'd be there. As it is, I look forward to seeing what falls out.

If you can make it, I'd encourage any regular Creative Chaos readers to attend. In other words, if you like this blog, I think you'd like the workshop, and, I would hope, vice versa ...

4 comments:

Mick Bright Kim said...

Hi

Anonymous said...

I think there are different categories within your list. For example, there is:

- Repeatable testing. That is, a single test case or test suite you need to run in the same form against different versions of the software, on different platforms, etc.

- Computer-assisted testing. This is an area of testing that I think is under-utilized. For example, some of my biggest successes in identifying memory leaks have come from the Windows Task Manager (and isolation via any debugger or memory viewer). I think every tester should have about half of the Sysinternals tools in heavy rotation within their testing process.

I would put monkey testing in this bucket (others may disagree). I'm not a fan of monkey testing unless it is viewed as a tool to "fast forward to a bug that will probably never be hit in the real world so I can investigate and determine whether this is a bug that can ever happen in the real world."

- Automated test design - pairwise tools and even code coverage tools to some extent fit in here. My work with model based testing lives primarily in this bucket, but mbt also has been successful in each of the other buckets.

Shrini Kulkarni said...

>>> And by that I mean something very specific - that you create a large series of clerical tests that go from acceptance level all the way to unit

When it comes to thinking about sapience and non-sapience in testing - I am still struggling to understand why any part of testing would be clerical.

If yes, what percentage of testing would be clerical?

I posted something similar on James Bach's post on sapient processes -- he is working on that comment, I believe ...

TexicanJoe said...

I was wondering if I could get your feedback on an open source acceptance testing framework related to BDD and .NET; it closely resembles rbehave. I would welcome the feedback of an experienced Software Quality Assurance guru such as yourself.

http://www.lostechies.com/blogs/joe_ocampo/archive/2007/08/07/attempting-to-demystify-behavior-driven-development.aspx


Thank you,
Joe Ocampo