Schedule and Events



March 26-29, 2012 - Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference of the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At Liberty; available. Contact me by email: Matt.Heusser@gmail.com

Tuesday, April 08, 2008

Goal Question Metric - The Yellow Brick Road?

I posted this on the software-testing email group yesterday.

The replies have been fascinating, but I'm curious what you think:

Hello Folks.

Many people here have heard my own life stories about programming and testing; how, essentially, I kept getting "patted on the head" and told that I "Didn't Get It" because I opposed big extensible designs, rituals, signoffs and handoffs in the development process, and expensive, heavyweight test case programs.

I stopped worrying about it when I realized that my projects were far more successful than my peers'. Eventually, I started talking about it openly.

Metrics are currently on that list. I have in my study the Handbook of Software Quality Assurance, 3rd edition, which contains a list of about 150 qualities (like scalability, security, etc.) that can be measured. Then it tells you that one or two metrics will cause dysfunction, so you need a balanced scorecard. And that the easy-to-gather metrics are also easy to game and bad, but that the good metrics are expensive to measure. Oh, and be careful, because your engineering staff will rebel if they have to spend too much time gathering metrics instead of doing work.

To summarize, this is what the book has to say about metrics:

"Good Luck."


Which brings me to my next sacred cow: Goal Question Metric.

GQM is a framework written by Victor Basili; you can google it. The basic idea is that instead of gathering a bunch of metrics, you actually figure out your goal (like "faster production"), ask a question that will help measure that goal, and turn that into a metric.
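
To make the mechanics concrete, here's a rough sketch of what a GQM breakdown might look like if you wrote it down as a little data structure; the goal, questions, and metrics are hypothetical examples for illustration, not anything Basili prescribes:

# Hypothetical GQM breakdown: one goal, the questions that would tell you
# whether you're meeting it, and the metrics that would answer each question.
# The specifics below are invented for illustration only.
gqm = {
    "goal": "Get bug-fix releases to customers faster",
    "questions": [
        {
            "question": "How long does a fix take from bug report to shipped release?",
            "metrics": ["median days from bug report to shipped fix"],
        },
        {
            "question": "Where does that time actually go?",
            "metrics": [
                "days a fix waits in the test queue",
                "days of rework after a failed regression run",
            ],
        },
    ],
}

def print_gqm(tree):
    """Walk the goal -> question -> metric tree and print it as an outline."""
    print("Goal:", tree["goal"])
    for q in tree["questions"]:
        print("  Question:", q["question"])
        for metric in q["metrics"]:
            print("    Metric:", metric)

print_gqm(gqm)

The point is simply that the metrics come last, derived from a goal, rather than being whatever numbers happen to be lying around.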

I have to grant that this is an intellectually valid framework, and it beats the pants off mindless gathering of numbers. For software testing, the idea has been endorsed by people I respect like Cem Kaner and Lee Copeland.

Here's my problem: This idea has been around for a long time. When it comes to software testing, I've read a great deal of the literature, been to the conferences, and read a lot of blogs.

Except for a few examples from people like Lee Copeland and James Bach, here's what I always see: "If you want metrics, use GQM. Since all contexts are different, I can't give you an example."

Pshaw. Is it too much for me to ask for a case study before I invest time, energy, and effort into a metrics program? One with positive ROI? Enough positive ROI that I wouldn't be better off working on other projects, or sticking the money I would have spent in a CD?


It's been 13 years since the first GQM paper was published. I haven't seen GQM prove its value in a software testing context. (*)

Have you? I would be really interested in success stories, please.

Regards,


--heusser
(*) - Please don't say NASA. They work under an entirely different set of constraints than commercial software development does. And even then, the business case is shaky.


UPDATE: Dr. Kaner replied that he doesn't really 'endorse' GQM so much as simply mention it during talks. His overall comments are along the lines of "GQM looks interesting, it's more grounded than nothing - if it works for you, good for you."

1 comment:

Anonymous said...

matt - saw your post on the list, but I'm glad you also posted in a "safer" place. As with most processes, I like GQM when it's done right - otherwise it's just a three-letter acronym covering up bad work.

The key is getting the goal right. It sounds so easy, but if the goal is bad, the metrics will be bad. For example, if my goal is to "make Johnny a better tester", the questions and metrics will all be about measuring a person - something that rarely ends well. You almost need to play the "5 whys" with goals and ask "why does Johnny need to be better" and so on to come up with the real goal (possibly something silly about efficiency or productivity - I dunno).
The problem I find, even with teams that somewhat get the concept, is matching the engineering goals with customer quality. We should be able to have a goal that states: "Create software that the customers love and rave about", but we have no way to measure that, so we settle for goals that we can answer through the metrics we have - i.e., I see teams establish goals, then throw them away because they can't measure them pre-ship.

Thanks for letting me rant and babble - hope this stirs up some thoughts or discussion.
