All right, folks, I'll admit it.
I'm not excited about test management tools.
Oh, you could argue that I should be. After all, test management tools are purchased by test managers and executives. Test managers and executives have money; they control the budget and decide who goes to what training when. Finding someone's pain point - and taking the pain away - is a perfectly legitimate business strategy. (If they have money to spend, why, that's even better, right?)
Yet I'm still not excited. Why?
Well, let's take a frank look at the thinking behind a test management tool, by which I mean something specific: a keeper of 'test cases' and a tracker of which test cases have been run against which codebase.
It starts with this thinking:
(A) We can define all our 'test cases' up front,
(B) When those test cases pass, our codebase is 'good' (or, alternatively, when some fail but some decision maker decides to ship anyway),
(C) /Recording/ which test cases have run, and which are yet to run, in precise detail, has some value in and of itself.
I reject all three of those premises.
Here's an alternative, one we use at Socialtext:
1) Create a single wiki page (a versioned, editable web page) for a release
2) Mark down each type of testing you want to do in every significant combination
3) For example, break the app by major piece of functionality, then further by browser type
5) Add results from any automated suites or unit tests, if those matter
5) Have the technical staff 'sign up' for which pieces they will test
6) When testing on a component is complete, the tester writes 'ok' plus the numbers of any bugs found, or perhaps 'skipped' and the reason why. (A rough sketch of such a page follows below.)
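To make that concrete, here is a rough sketch of what one of these release pages might look like. The feature areas, browsers, tester names, and bug numbers are all invented for illustration; the real page is just a wiki table that the team edits as testing proceeds:

    Feature area     Firefox                 Internet Explorer
    Login / signup   ok (Chris)              ok, bug #1234 (Chris)
    Page editing     ok (Pat)                skipped, automation covers it (Pat)
    Search           ok, bug #1240 (Matt)    ok (Matt)

    Automated suites: unit tests pass, browser regression run pass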
For what it's worth, we've been doing this at Socialtext for nearly two years, since before I was hired. We are constantly tweaking the process.
This one-page overview is a higher-level view than a test management tool might provide. It shows you what matters - the failures - not 5,000 "ok" results. It assumes that the test ideas are located somewhere else, where the tester can find them if needed. It assumes the tester actually did the testing and leaves open the possibility that the tester can explore the functionality. It leaves the tester responsible for what 'ok' means, instead of a spreadsheet or document.
This isn't a brand-new idea; James Bach recommended something similar in 1999, called a "low-tech testing dashboard", only he suggested it be done on a whiteboard. Other people have suggested using a spreadsheet, but a spreadsheet has versioning and shared read/write problems.
A wiki is just one more step forward: it provides version control and transparency, and it creates a permanent artifact that can be audited. To my mind, this provides some of the benefits of test management tools with much less time investment.
So no, I'm not excited about most test management tools on the market. In many cases, using one is swatting a fly with a sledgehammer. Yet I recognize that test managers and executives have legitimate problems. So let's not rush off to build something just to get the money; let's come up with real solutions and see if the money flows from there.
Who's with me?
Schedule and Events
March 26-29, 2012, Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference for the Association for Software Testing (CAST 2012), San Jose, California
August 2012 and beyond - at liberty; available. Contact me by email: Matt.Heusser@gmail.com