I've been putting off writing this. It is, to be honest, a little painful.
Some metrics, such as expenses, income, and cash flow, are really, really important. You need to track them.
And, in a vehicle, you certainly want to know how fast you are going and if your tank is empty or full.
All of those examples are fungible - a gallon of gas can be traded for any other gallon of gas. A penny saved is a penny earned. They are all the same.
Yet test cases, lines of code, these things are not the same. You can have a test case that takes two hours to set up, or a half-dozen similar ones you can run in thirty seconds. If you are measured by test cases executed per day, which do you think you are going to focus on?
I've mentioned that point before, but I thought it was worth mentioning twice.
But wait, there's MORE!
This is the part I didn't want to write. What's the purpose of your metrics program? Well, to be terribly honest, these are reasons I have seen for corporate metrics programs:
1) The people on the helpdesk, in operations and finance have them. Without them, we look kinda dumb.
2) Metrics seem to /prove/ things. Without metrics we are down to our stories; why should senior management believe those stories?
3) Some auditor told us we had to have them to be mature.
4) Because we /desire/ easy control over software projects.
Hopefully the previous post about metrics convinced you that, like the beer commercial where the Swedish bikini team suddenly appears, the promise and the result rarely align.
There are, however, other reasons to gather metrics than formal programs designed to create 'control.' Do-ers, and even managers, can gather metrics every day in order to understand what is going on in the system.
Example: Querying the bug tracker to figure out which browser is the most problematic - and should get the most testing time - is a reasonable thing to do. Do it once and you're likely to get unbiased results; no one was manipulating the system when you took the sample.
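A one-off query like that can be as simple as counting open bugs per browser. Here is a minimal sketch; the bug records and field names are invented for illustration - in practice the data would come from a Jira/Bugzilla query or a CSV export:

```python
from collections import Counter

# Hypothetical bug-tracker export. The records and field names here
# are made up; substitute your own tracker's query or dump.
bugs = [
    {"id": 101, "browser": "IE8"},
    {"id": 102, "browser": "Firefox"},
    {"id": 103, "browser": "IE8"},
    {"id": 104, "browser": "Chrome"},
    {"id": 105, "browser": "IE8"},
]

counts = Counter(bug["browser"] for bug in bugs)

# Most problematic browser first - a hint for where testing time should go.
for browser, count in counts.most_common():
    print(f"{browser}: {count}")
```

The point is the spirit of the thing: run it once, look at the answer, act on it - rather than turning the count into a weekly target.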
Now, on the other hand, if you set a corporate goal to decrease the percentage of released bugs in Internet Explorer and measure it every week, you are almost certain to introduce dysfunction.
So metrics as a tool used by individuals to improve performance - with no intent of evaluating people or "controlling" the process? Sure. I'm all for it.
So how can we respond when asked for metrics for some of those ... less noble reasons above?
The pyramid of information
As a do-er, Joe has to manage himself. His boss, the manager, has to manage ten people. His boss, the director of software engineering, has to coordinate ten projects - and his boss, the VP of Information Systems, has ten big projects, three corporate initiatives, and fifty small projects going at one time.
The information received from each person - then each project - has to get smaller as you go up the chain. Middle management metrics seem to focus on process, while senior management cares about outcome.
This causes a disconnect when middle management presents those metrics and senior management asks, awkwardly, "So ... we have 300 open bugs. What does that mean to me, exactly?"
If you're struggling with metrics, one solution is to give middle management better ones - metrics that will actually address the concerns of the big boss.
In my experience, what metrics does the big boss want?
For each project
- Is it on time?
- Is it on budget?
- Is it on features?
- Is it at risk for some other reason?
- How's the ROI looking?
- How do you feel about the quality?
What's the best way to do this, in my experience?
Make a spreadsheet. For each project, have the projected go-live as a column. The next column is either Green (good), Yellow (hmm), Orange (lookout) or Red (oh dear). In the next column, explain the color. If you've got one, the next column is a link to a wiki page with the detailed status.
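That spreadsheet can be mocked up in a few lines. The project names, dates, and wiki URLs below are made up; the columns follow the description above - projected go-live, a Green/Yellow/Orange/Red status, a one-line explanation, and a link to the detailed status page:

```python
import csv
import io

# Invented example projects - replace with your own portfolio.
projects = [
    ("Billing Rewrite", "2012-06-01", "Green", "On track", "http://wiki/billing"),
    ("Mobile App", "2012-05-15", "Orange", "Key tester out sick", "http://wiki/mobile"),
    ("Data Warehouse", "2012-09-01", "Red", "Vendor slipped two months", "http://wiki/dw"),
]

# Write to an in-memory buffer; in practice you'd write to a .csv file
# and open it in a spreadsheet program.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["Project", "Projected Go-Live", "Status", "Why", "Details"])
writer.writerows(projects)

print(buffer.getvalue())
```

Whether it lives in Excel, a wiki table, or a generated CSV matters much less than the content: one row per project, one color, one sentence of "why."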
The big boss can scan and drill into any project he has concerns about in a very traditional way - by talking to people. Your spreadsheet gives him the tools to know which projects need drilling.
A "metrics expert" would point out that the spreadsheet above is a qualitative metric, which does not enable a quantitatively managed process.
For a response, I'd send him a link to "Metrics, Schmetrics Part I".
We have reached the end of this brief article. And if each reader of Creative Chaos has one year of experience, combined that's a few thousand years. What metrics have you had success with? I'd like to know - and, likely, so would a thousand other people reading this.
Schedule and Events
March 26-29, 2012, Software Test Professionals Conference, New Orleans
July, 14-15, 2012 - Test Coach Camp, San Jose, California
July, 16-18, 2012 - Conference for the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At Liberty; available. Contact me by email: Matt.Heusser@gmail.com