Wednesday, July 25, 2007

Estimates - II

As usual, some of my commenters have written my article for me ...

Seriously, Ben Simo points out that it is always possible to give an estimate, but that all estimates are wrong (if they were right, they would be commitments). Shrini points out that we get lousy requirements for our estimates. For example, when asked:

"When will it be done?"

You probably have a vague and fluffy definition of "IT" and a vague or unrealistic definition of "done."

For example, let me re-translate:

"When will it be done?"

Could be:

"How long will it take you to do everything I've written down here, with scope creep that _I_ deem reasonable, without a single defect or flaw - given that anything over a month is too long?"

Yikes!

Of course, I promised some actual answers. So here goes ...

The "GAP" between what the software actually IS and how it was analyzed is a serious pain point in software development. That gap widens with time - so my first suggestion is to never run a project more than about a month. (Yes, requirements/dev/test/prod in thirty days.) Schedule any "big" project as a series of small ones.

At worst, you might be two weeks late. That's not the end of the world.

If you have to run longer than a month, then periodically bring your code up to production quality. Develop features end-to-end, in thin slices, and add them to the system.

That's not really an estimating hint - it's a project organization hint. Estimates of less than a month are drastically easier than longer ones. Moreover, a month is small enough that an employee can feel a real personal commitment to the date, and work a little overtime to make it. (On a ten-month project, once you realize you're late, overtime will just kill you, not bring the project back on track.) So you organize the project so you never have to estimate more than a month at a time.

Plus, you can prioritize the customer's feature requests, so you build the most important thing first.

So what if that is not an option?

Ok, next idea. Do two types of estimates - realistic and pessimistic. Make the realistic estimate your goal date, and make the pessimistic one a commitment back to the business. Within the team, talk about the goal; with management, talk about the commitment.
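To make that concrete, here's a minimal sketch in Python - the start date and durations are invented for illustration:

```python
from datetime import date, timedelta

start = date(2007, 8, 1)
realistic_days = 20    # the goal the team talks about internally
pessimistic_days = 30  # the commitment given back to the business

goal = start + timedelta(days=realistic_days)
commitment = start + timedelta(days=pessimistic_days)

print(f"Team goal:  {goal}")        # aim here
print(f"Commitment: {commitment}")  # promise this
```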

A third idea: estimates can be done in two different ways. The first is an analysis task, where you break down the features into chunks and add up the chunks, or take the spec and do some math to come up with an approximate number of test cases. The second way is to actually start doing the work - at a very high level. Go ahead and _DO_ the design, or start to decompose the test cases, or write "tracer bullet" objects to do the highest levels of disk, screen, and database interaction. Leave comments or "TODO" markers where the next step will be. When you've finished that, come back to the TODO markers, estimate them, and add them up. Finally, add a little extra for risk - the bigger the risk, the bigger the buffer.
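Here's a minimal sketch of that TODO idea in Python. The "TODO(4h)" tagging convention is made up for illustration - use whatever marker your team already leaves behind:

```python
import re
from pathlib import Path

# Matches markers tagged with an hour estimate,
# e.g. "TODO(4h): wire the report screen to the database"
TODO_PATTERN = re.compile(r"TODO\((\d+(?:\.\d+)?)h\)")

def estimate_remaining(root: str, risk_buffer: float = 0.25) -> float:
    """Sum the hour estimates on every TODO marker under `root`,
    then pad the total for risk - bigger risk, bigger buffer."""
    hours = 0.0
    for path in Path(root).rglob("*.py"):
        for match in TODO_PATTERN.finditer(path.read_text(errors="ignore")):
            hours += float(match.group(1))
    return hours * (1 + risk_buffer)

print(f"Estimated work left: {estimate_remaining('src'):.1f} hours")
```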

The thing is, the further you go into the project, the more you'll know. I have one colleague who simply avoids making any commitments or estimates at all. He'll say things like "if you say so" or "that's certainly our goal" but never make a promise. I'm not advocating that - but certainly the further you get into the project, the more accurate your dates will be.

(In Rapid Development, Steve McConnell actually suggests that you communicate estimates in a range that gets smaller over time, with the larger number first. "Six to four months" sounds strange - but if you say the smaller number first, people tend to forget the larger one.)
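A small sketch of that shrinking-range idea, with rough multipliers in the spirit of McConnell's cone of uncertainty (the figures here are approximations for illustration, not quotes from the book):

```python
# (stage, low multiplier, high multiplier) - approximate values
STAGES = [
    ("Initial concept",       0.25, 4.0),
    ("Requirements complete", 0.67, 1.5),
    ("Design complete",       0.80, 1.25),
]

nominal_months = 4.0
for stage, low, high in STAGES:
    # Say the larger number first, so people don't forget it.
    print(f"{stage}: {nominal_months * high:.1f} to {nominal_months * low:.1f} months")
```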

Looking back, most of this is not about how you should do the estimates - but how to communicate them so they are not abused or misunderstood. This can be very challenging; as my old friend Eric likes to say, "Sooner or later your three paragraphs of very specific if-thens and assumptions are going to turn into a PowerPoint bullet with a date on it. Tread lightly."

If that's the case, then you have a few options. One is to hit any arbitrary date by structuring the project so that you can go to production every month. Another is to make no commitments and blame someone else. A third is to continually and consistently communicate risk-adjusted dates back to your customers.

My final thought is to express the cost of every change. It's usually not the project that kills us - it's the feature changes at the end. And that's fine - it's good - it makes us provide software that has better fitness for use. When that happens, though, either make the cost explicit (by adding the feature and insisting on moving the date) or pay for it with your nights and weekends.
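Here's a tiny sketch of making that cost explicit - every accepted change visibly pushes the ship date (the feature and numbers are invented):

```python
from datetime import date, timedelta

class Project:
    def __init__(self, ship_date: date):
        self.ship_date = ship_date

    def add_feature(self, name: str, days: int) -> None:
        # The feature goes in, and the date moves - on the record.
        self.ship_date += timedelta(days=days)
        print(f"Added '{name}' (+{days} days); new ship date: {self.ship_date}")

project = Project(date(2007, 10, 1))
project.add_feature("export to PDF", 5)
```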

The choice is yours ....

2 comments:

Shrini Kulkarni said...

>>so my first suggestion is to never run a project more than about a month.

Interesting suggestion. This is what I believe the "agile" model of software development has adopted, though for different reasons, I believe.

But, this is a change in the model of development/testing/everything else. How will one be ready to change their way of doing things only to suit the estimation model?

Though a vague analogy - it is like a QA/testing team dictating the product ship date ...
Providing estimation is a service that we (testers/test managers) offer to stakeholders ... Do we have the luxury or locus standi to dictate the model for software engineering in general?

I see that most of this post addresses development work, without anything specific to testing ... Request to add some notes/posts on test estimation ....

One suggestion that I can offer you with respect to test estimation is - estimate in two or three stages.

One of the typical problems associated with test estimation is that we do estimation very early in the process and rarely update/revise it. That turns the initial estimates into "commitments".

I have also seen that, typically, estimates are expected to be completed in a matter of a few days, sometimes even hours.

What I would do is a quick and dirty estimation to start with. Then submit that to the stakeholder with a "disclaimer" that actuals could be 30-40% off. Then say estimates will be revised after a phase in testing - let us say when test case design is complete, etc.
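A quick sketch of that two-stage idea (the numbers and the 35% spread are invented for illustration):

```python
initial_hours = 100  # quick and dirty first pass
spread = 0.35        # "actuals could be 30-40% off"
print(f"Stage 1 (early): {initial_hours} hours, "
      f"likely {initial_hours * (1 - spread):.0f}-{initial_hours * (1 + spread):.0f}")

revised_hours = 120  # revised after test case design is complete
print(f"Stage 2 (after test design): {revised_hours} hours, "
      f"likely {revised_hours * 0.9:.0f}-{revised_hours * 1.1:.0f}")
```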

Shrini

Ben Simo said...

I think a major contributing factor to the gap between estimated and actual effort in software projects is that we keep trying new things. It appears that doing something new is more common in software development than in the development of physical goods.

Estimates are likely to be better if we have had past experience doing something. We can use that past experience as a basis for estimating the next project IF the projects are comparable. The problem in software projects is that they are often not comparable.

Technologies change. Team members change. The products change. The customers change.

In software, we don't have to rebuild the same bridge or house in another location: we just copy it to another location. Sometimes we enhance existing software to use in a new context.

When we build new software, it is likely to be unlike anything we have built before. If it weren't different, we'd likely reuse something old.

The more a project differs from past projects, the less accurate our estimates.