Thursday, December 07, 2006

Test Estimation

I thought my recent post to the Agile-Testing Discussion list was worth repeating. Here goes:

Earlier, Lisa Crispin said Test Estimation was hard, and asked if anyone had a perfect method, to which I replied:

> Ask the customer when they want it done, get a prioritized list of
> features, and deliver on the day they asked for it?

And she asked:

> ...and how will we know how many of these features we will be
> able to deliver in a given period of time?

We don't. Why pretend we do?

There's a slippery slope between asking for good faith estimates ("Knowing what you know now, when do you think you can deliver?") and predicting the future.

Assuming the customer will change their mind about what they need, if I deliver running tested features periodically (say, every 30 days), then, ultimately, it's the customer's decision whether what we have now is good enough. Let them pick the date.

That I can do. The crystal ball thing? Not so much. The best I've seen is to use velocity with yesterday's weather, or traditional functional decomposition methods combined with Critical Chain Project Management. (I cover this a little bit in a talk I gave in Indianapolis last year - here.)
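For readers unfamiliar with "yesterday's weather": the heuristic is simply to assume the next iteration will complete about as much work as the last one actually did, then count how many of the customer's prioritized features fit in that budget. Here is a minimal sketch of the idea; the function names, story-point numbers, and feature list are all hypothetical, invented for illustration.

```python
# "Yesterday's weather" forecasting, sketched as two small functions.
# All names and numbers below are illustrative, not from any real project.

def yesterdays_weather(completed_points_per_iteration):
    """Forecast next iteration's velocity as the most recent actual."""
    return completed_points_per_iteration[-1]

def features_that_fit(prioritized_features, velocity):
    """Walk the prioritized list in order, taking each feature that still
    fits in the remaining budget and skipping any that are too big."""
    budget, chosen = velocity, []
    for name, points in prioritized_features:
        if points <= budget:
            chosen.append(name)
            budget -= points
    return chosen

# Example run with made-up data:
history = [21, 18, 20]  # story points completed in past iterations
backlog = [("login", 8), ("search", 8), ("reports", 13), ("export", 3)]
velocity = yesterdays_weather(history)            # 20
print(features_that_fit(backlog, velocity))       # ['login', 'search', 'export']
```

Note this is a forecast, not a commitment: "reports" gets skipped because it doesn't fit this iteration's budget, which is exactly the conversation you want to have with the customer.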