
Tuesday, September 14, 2010

Test Estimation - V

So one way to estimate the testing phase (if you have such a thing), or at least testing activities, is to compare the test effort to the development effort or overall effort on other projects.

Examples:

"We spent about ten solid months on MaxStudio, and only spent two months testing. So I think testing should be about 20% of the overall dev budget."

"We spent a year on StudioViz from kick-off meeting to release, and about a month testing. So I think testing should be less than 10% of overall budget."




Both of these examples are real.

The thing is, after release, we spent the next six months fixing MaxStudio, and took a serious hit in the marketplace and to our reputation.

Likewise, we developed StudioViz incrementally, with many stops along the way to bring the work-in-progress up to production quality. StudioViz was also a browser-based application - well, sort of. It ran in a browser control inside a Windows application. So we were able to 'lock down' the browser to at least modern versions of Internet Explorer.

What all this means is that if you use history to do a percentage-of-effort projection, make sure the development model - the "way you are working" - is relatively similar. Big changes in teams, technology, technique, or method can render these sorts of projections obsolete pretty easily.

But now we have two methods: comparing test effort to test effort on similar-sized projects, and using test effort as a percentage of dev effort. (That is: find what percentage testing was of dev effort on previous projects, look at the dev effort for this project, multiply by that percentage, and you have your test effort.)
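To make that arithmetic concrete, here's a minimal sketch of the percentage-of-dev-effort projection; the numbers are invented for illustration, so substitute your own history.

# A minimal sketch of the percentage-of-dev-effort projection described above.
# All numbers are invented for illustration; substitute your own history.

# Historical project: development and testing effort, in person-months.
previous_dev_effort = 10.0   # e.g., ten solid months of development
previous_test_effort = 2.0   # and two months of testing

# What percentage testing was of dev effort on the previous project.
test_ratio = previous_test_effort / previous_dev_effort   # 0.20, or 20%

# Estimated dev effort for the new, similar project.
new_dev_effort = 14.0

# Multiply this project's dev effort by the historical percentage.
projected_test_effort = new_dev_effort * test_ratio
print(f"Projected test effort: {projected_test_effort:.1f} person-months")  # 2.8

Nothing fancy - the value of the answer depends entirely on how comparable the previous project is to this one, which is the point of the caution above.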

Of course, both of those measurements assume that you have roughly the same ratio of developers to testers - but like I said, changing things makes projections based on past experience less and less accurate.

Oh, and successful organizations tend to change. A lot.

Another method, one I've hinted at, is taking testing as a percentage of the overall project. Now for this to work you have to be careful, because it's very hard to measure the effort if you go back to when the idea was a gleam in someone's eye. When I've done this, I've tried to go back to when the initial kick-off happened - at which point the project probably had a full-time project manager, 'assigned' technical staff, and maybe a business analyst.
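As a rough sketch of that variant, with made-up dates, you can measure from the kick-off meeting to release and see what fraction of that calendar time went to testing:

# A rough sketch of the percentage-of-overall-project approach, measured from
# the kick-off meeting (not from when the idea was a gleam in someone's eye).
# Dates are invented for illustration.
from datetime import date

kickoff = date(2009, 1, 12)      # initial kick-off meeting
release = date(2010, 1, 15)      # ship date
test_start = date(2009, 12, 14)  # when the dedicated testing push began
test_end = release

project_days = (release - kickoff).days
testing_days = (test_end - test_start).days

print(f"Testing was {testing_days / project_days:.0%} of the overall project")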




Here's another quick exercise for the folks who complain "that sounds great, but we don't have the data":

Start tracking it.

Seriously. Get a pen, a pencil, maybe your email box out, and start tracking just the projects you are on or the 'huge ones' swirling around you. This is easy enough to do in Excel. If you want to get really fancy, start recording when each project was created and its predicted due date, along with when due-date slips occur and how much they slip by.
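For the sake of illustration, here's one possible shape for that log as a plain CSV appended from a small script; an Excel sheet with the same columns works just as well, and the column names and sample project below are invented.

# One possible shape for the tracking log described above: a CSV you append a
# row to whenever a project starts, slips its due date, or ships. Column names
# and the sample project below are invented for illustration.
import csv
from pathlib import Path

LOG = Path("project_history.csv")
COLUMNS = ["project", "created", "predicted_due", "event_date", "event", "slip_days"]

def record(project, created, predicted_due, event_date, event, slip_days=0):
    """Append one row; event is something like 'started', 'slipped', or 'released'."""
    write_header = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(COLUMNS)
        writer.writerow([project, created, predicted_due, event_date, event, slip_days])

# Sample entries: a project kicks off, slips by a month, then ships late.
record("ProjectX", "2010-09-01", "2011-03-01", "2010-09-01", "started")
record("ProjectX", "2010-09-01", "2011-03-01", "2010-12-15", "slipped", 30)
record("ProjectX", "2010-09-01", "2011-03-01", "2011-04-02", "released", 32)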

It turns out that this trivial-to-gather data can be extremely valuable when used to predict the performance of future projects.

More on that next time.

3 comments:

Anonymous said...

This is hard to follow for me, because I don't believe in "the cost of testing" - I believe in the cost of quality. There are so many factors that go into the role (and effort) of testing in that equation (even on the same team working on a similar project) that I think estimations are pointless.

Now, I realize that I, for better or for worse, live in a world where we have smart managers who know this - but rather than figure out how to work with dumb managers, shouldn't we focus on making them smarter so we don't have to explain stuff like this?

Chris said...

I always found that managers were unwilling to wait the amount of time it would take for a thorough estimate of testing effort. My back-of-the-envelope method was: find out how much time the development director estimated for coding, then take half of that and submit that as my estimate. Sometimes my manager would not be pleased with the number that produced, and ask me to cut it down.

However, once the project was finished (over time and over budget), I recalculated using the actual results, and found that, as near as dam*it, my testing effort was within 10 days of 1/2 the development time as it had finally ended up.

I don't believe this is infallible. No estimation method is infallible. However, it has several advantages:

First, it's easily explainable, even to Dilbert's pointy-haired boss. One sentence will do it.

Second, it's easily adjustable during the actual testing effort. If the programmers say that they will be four weeks late, you then add two weeks to your effort. When the pointy-haired boss comes back and says, "We need it earlier," you can then negotiate which features will be cut in order to make that deadline.

Third, once the manager accepts that this is your rule-of-thumb, s/he can do the sums himself/herself and cut the production cloth to fit the testing.

This probably won't work for everyone, I'm afraid. It works for me so that's why I use it.

Matthew said...

1) hwtsam.com - keep reading. You're getting ahead of things. :-)

2) Chris - yes, that's pretty much what I'm advising here, and I've had close to the same results that you did with that method.