So one way to estimate the testing phase (if you have such a thing), or at least testing activities, is to compare the test effort to the development effort or overall effort on other projects.
"We spent about ten solid months on MaxStudio, and only spent two months testing. So I think testing should be about 20% of the overall dev budget."
"We spent a year on StudioViz from kick-off meeting to release, and about a month testing. So I think testing should be less than 10% of overall budget."
Both of these examples are real.
The thing is, after release, we spent the next six months fixing MaxStudio, and took a serious hit in the marketplace and to our reputation.
Likewise, we developed StudioViz incrementally, with many stops along the way to bring the work-in-progress up to production quality. StudioViz was also a browser-based application - well, sort of. It ran in a browser control inside a Windows application, so we were able to 'lock down' the browser to modern versions of Internet Explorer.
What all this means is that if you use history to do a percentage-of-effort measurement, make sure the development model - the "way you are working" - is relatively similar. Big changes in team, technology, technique, or method can render these sorts of projections obsolete pretty easily.
But now we have two methods: comparing test effort to test effort on similar-sized projects, and using test effort as a percentage of dev effort. (That is: find what percentage testing was of dev effort on previous projects, look at the dev effort for this project, multiply by that percentage, and you get the test effort.)
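The arithmetic behind the second method is simple enough to sketch in a few lines. This is a minimal illustration, not a tool; the MaxStudio-style numbers for the previous project are taken from the examples above, and the new project's eight-month dev estimate is purely hypothetical.

```python
# Percentage-of-dev-effort projection (illustrative numbers only).
previous_dev_effort_months = 10   # e.g. ten solid months of development
previous_test_effort_months = 2   # ...and two months of testing

# Historical test percentage: 2 / 10 = 20% of dev effort.
test_percentage = previous_test_effort_months / previous_dev_effort_months

# Apply that percentage to the new project's dev estimate (hypothetical).
new_dev_estimate_months = 8
new_test_estimate_months = new_dev_estimate_months * test_percentage

print(new_test_estimate_months)   # prints 1.6
```

Of course, the whole calculation inherits whatever problems the historical numbers had - as the MaxStudio story shows, a 20% test budget that shipped six months of post-release bug fixing is not a number you want to project forward.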
Of course, both of those measurements assume that you have roughly the same ratio of developers to testers - but like I said, changing things makes projections based on past experience less and less accurate.
Oh, and successful organizations tend to change. A lot.
Another method, one I've hinted at, is percentage of the overall project. For this to work you have to be careful, because it's very hard to measure the effort if you go back to when the idea was a gleam in someone's eye. When I've done this, I've tried to go back to the initial kick-off - at which point the project probably had a full-time project manager, 'assigned' technical staff, and maybe a business analyst.
Here's another quick exercise for the folks who complain, "That sounds great, but we don't have the data":
Start tracking it.
Seriously. Get out a pen, a pencil, maybe your email inbox, and start tracking just the projects you are on, or the 'huge ones' swirling around you. This is easy enough to do in Excel. If you want to get really fancy, start recording when the project was created and its predicted due date, along with when the due-date slips occur and how much they slip by.
It turns out that this trivial-to-gather data can be extremely valuable when used to predict the performance of future projects.
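To show how little data you need before it becomes useful, here is one hedged sketch of the idea: a hand-kept log of estimate-versus-actual durations, used to temper a new project's raw estimate. The project names, numbers, and the median-slip approach are all my own illustration - the post itself only says to record the dates and slips.

```python
import statistics

# Hypothetical tracking log: one row per project, as you might jot down
# from old emails. Fields: name, original estimate, actual duration (months).
projects = [
    ("MaxStudio", 8, 10),
    ("StudioViz", 10, 12),
    ("Reporting rewrite", 3, 5),
]

# Slip ratio: how much longer each project ran than first predicted.
slip_ratios = [actual / estimate for _, estimate, actual in projects]
typical_slip = statistics.median(slip_ratios)   # median resists one outlier

# Temper a new project's raw estimate with the historical slip.
raw_estimate_months = 6
adjusted_estimate = raw_estimate_months * typical_slip

print(round(adjusted_estimate, 1))   # prints 7.5
```

Even three rows in a spreadsheet support this kind of sanity check; the point is simply that the data has to exist before any of these projection methods can work.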
More on that next time.
Schedule and Events
March 26-29, 2012, Software Test Professionals Conference, New Orleans
July, 14-15, 2012 - Test Coach Camp, San Jose, California
July, 16-18, 2012 - Conference for the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At Liberty; available. Contact me by email: Matt.Heusser@gmail.com