Thursday, May 24, 2007

Conferences - IV

First - some errata - In the United States, next Monday is Memorial Day, and I am about to take off on an extended holiday. So this may be my last post for a week.

Second, I've been doing a lot of research lately into the state of the practice of test automation, the state of the art, the state of the hype, and the differences among the three. (I'll get to the conference stuff, really.)

For example, one view of test automation is that it is a single big button that you push, which runs all the tests and then reports status. Either you run all the features' inputs and check expected results, or else you run a randomly generated set of tests against some oracle - say, you are trying to test a statistics package, so you convert the inputs into formulas, plug those into MS Excel, and then check for correctness. (By oracle, I mean "a way of knowing what the right answer is," not the database or the Greek mystics.)

There is an interesting class of problems that this kind of testing is very helpful for - especially if, for example, you are F-Secure and you are using massive virtualization to test a thousand configurations on Linux at one time. Or if you have a simple Fahrenheit-to-Celsius conversion function, a good oracle, and a million inputs to test.
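
To make that last case concrete, here is a minimal sketch of the "million random inputs against an oracle" idea, in Perl. The conversion function, the input range, and the tolerance are all made-up placeholders - the point is only the shape of the loop: generate an input, ask the code under test, ask the oracle, compare.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical function under test -- stands in for whatever your
# application actually exposes.
sub f_to_c_under_test {
    my ($f) = @_;
    return ($f - 32) * 5 / 9;
}

# The oracle: an independent way of knowing the right answer.
sub f_to_c_oracle {
    my ($f) = @_;
    return ($f - 32) / 1.8;
}

my $failures = 0;
for (1 .. 1_000_000) {
    my $f        = rand(400) - 100;        # random input between -100F and 300F
    my $got      = f_to_c_under_test($f);
    my $expected = f_to_c_oracle($f);
    if (abs($got - $expected) > 0.0001) {  # tolerate floating-point noise
        warn "MISMATCH: $f F -> got $got, expected $expected\n";
        $failures++;
    }
}
print "$failures mismatches in 1,000,000 random inputs\n";
```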

Still, with that kind of testing, *all* the computer will check is the expected result. So if your web page has an expected result of a $100.00 total bill, you get the correct total ... but the computer will not check for anything else. If the wrong navigation buttons are greyed out, or there are typos, or the HTML tables are broken, or there are other errors, then unless those are defined in the expected result, the computer won't catch them. In fact, Erwin Van Trier has gone so far as to recommend manual testing with no pre-defined expected result, because it can stifle your thinking.

There are other kinds of test automation - or, at least, software that can help you test.

Here are a few:

- Digital voice recorders. I find that popping out of the testing world to document slows me down. With a voice recorder, you can talk about what you are testing and why, and what bugs you've found, so you stay "in the zone" without having to pop up a document. Then you can document afterward, because you've left a trail of breadcrumbs.

- Screen capture tools like Spector and Snagit. With Spector, you can turn on keyboard logging and video logging, so you know exactly how to reproduce the bug. With Snagit, you can make a movie of the bug occurring, creating a compelling story quickly and easily.

- Test Explorer. I first found out about this at the expo at STAREast in 2005 - see, told you this was about the benefits of conferences! Test Explorer is a tool that makes manual testing go faster by helping you record your charters, sessions, and bugs found, as well as manual test cases. If you use session-based exploratory testing, it can even track your metrics for you - so you are accountable for how much time you've spent on which features.

- Little tools like Tasker, a Windows-based keystroke and mouse recorder. You can use Tasker to record a stress test (File->New), then run that stress test a thousand times while you have lunch.

- Perl, Ruby, and other scripting languages that can process large amounts of data quickly. I find that I write a lot of one-off scripts in Perl to, for example, make sure that every line in the file is 655 characters long, that all the dates are between 1/1/2005 and 1/1/2008, that all member IDs are nine digits, that none are null, and so on. However, instead of having one big button, I usually have a half-dozen intermediate scripts and examine things along the way. (There's a sketch of one such script just below.)
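
As a picture of what one of those intermediate scripts might look like, here is a minimal sketch in Perl. The 655-character record length and the 2005-2008 date window come from the example above, but the column positions and the YYYYMMDD date format are assumptions invented for illustration - the post doesn't describe the real file layout.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# One-off sanity check over a fixed-width extract file.
# The column positions and date format below are made up for
# illustration -- adjust them to whatever layout you actually have.
my $file = shift @ARGV or die "usage: $0 extract.txt\n";
open my $fh, '<', $file or die "can't open $file: $!\n";

while (my $line = <$fh>) {
    chomp $line;

    # Every record should be exactly 655 characters long.
    warn "line $.: length is " . length($line) . ", not 655\n"
        if length($line) != 655;

    # Assume the service date is in columns 1-8 as YYYYMMDD.
    my $date = substr($line, 0, 8);
    warn "line $.: date '$date' not between 1/1/2005 and 1/1/2008\n"
        unless $date =~ /^\d{8}$/ && $date ge '20050101' && $date lt '20080101';

    # Assume the member ID is in columns 9-17: nine digits, never blank.
    my $member_id = substr($line, 8, 9);
    warn "line $.: bad member ID '$member_id'\n"
        unless $member_id =~ /^\d{9}$/;
}
close $fh;
```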

You can also use tools like Tasker to do setup for expensive-to-set-up manual tests. Actually, that's another point I picked up from Jon Bach at the conference:

As testers, the things we do can be classified into some broad categories:


1) Testing
2) Bug Investigation (Found something, now I'm going to look around)
3) Setup
4) Documenting Bugs
5) Documenting Testing
6) Going to meetings
7) Reading someone else's documentation

---> For test automation, I'm in favor of two approaches: yes, the obvious "big button o' tests," which can work in some situations - but also any tool that lets me spend less time on numbers 2 through 7 so I can spend more time on number 1.

Does that mean that a wiki is a test tool?

Absolutely.

But without conferences, I would not have heard of Test Explorer in 2005, or wikis in 2003, or SpectorSoft probably ... ever ...
