Schedule and Events

March 26-29, 2012 - Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference of the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At Liberty; available. Contact me by email:

Friday, October 29, 2010

On Skill

I just got back from the Software Test & Performance Conference in beautiful Las Vegas, Nevada - conference write-up here.

Just after I got back from the conference, we started talking about skill on the software-testing discussion list, and I posted this:

--- In, "Elena" wrote:
> The rest I cannot quite explain to myself -- is it intuition,
>having a bigger picture/snapshots of what it's supposed to be
>like in my head, etc?

There's an old saw about an apprentice butcher sitting with an old butcher.
You know, the grey-haired type who's worked in the shop his entire life.

A customer comes in: "I would like two pounds of pork."

Chop Chop Chop. "Here you go."

Suddenly the kid pipes up: "No way that is /exactly/ two pounds! You didn't even
weigh it!!"

"Ok, kid, you weigh it then."

Sure enough, 2.01 pounds on the first slice.

"How did you /*do*/ that?"

"I'm not exactly sure, kid, but, now that you mention it ... I suspect cutting
meat five days a week for thirty years had something to do with it."

Moral: Don't let other people dismiss your hard-won experience as mere "intuition."

No, I suspect you /earned/ and /developed/ that skill over time.

Skill takes work.

To be precise, the human brain has a lot of functions, one of which is a really
big neural network. So we recognize patterns.

You recognized a pattern that no one else could. That's part of what good
testers do.

I'm happy for you, man.

Don't let the anti-skill ... goofballs get you down.

all my best,


Then this afternoon Justin Dessonville posted this video to his Twitter account.

It is a five-minute video of people doing the seemingly impossible: multiple far-court basketball shots in a row, a guy who throws a playing card and uses it to blow out a candle, multiple-story jumps, Tony Hawk's impressive skateboarding feats.

I'm not sure how some of those were done; the guy might have spent six hours in front of the video camera to record one take. But some of them, like the basketball player and Tony Hawk, were clearly the result of a lifetime of practice.

Now compare that to your typical software ideology of having a defined, standard, predictable, repeatable process.

How do you write down a process called "be an awesome skate-boarder?"

You don't.

You can, however, define moves and create a place to practice consciously. You can memorize, and repeat, and build from low-skill to high-skill exercises.

I'm pleased to say that, at this point, I see a substantive part of the test-o-sphere moving from these shallow notions of repeatability into something more meaningful. Whether it's testing dojos or weekend testers or something else, we're finally getting there.

I'm pleased.

Now, if you'll excuse me, I gotta go practice my JavaScript backflips on IE6 ...

Tuesday, October 12, 2010

Software Test Estimation - VII

So we've had a bit of an adventure; I've covered three different estimation models, each with a basis that is actually sound.

And there's a problem.

The problem is simple: we are presupposing that "tested" and "not tested" are binary statements - that there is some amount of testing that is "right," and that it's universal. Doing less is irresponsible; doing more is waste.

I suspect there is more to the story.

First, consider this situation. A leader, executive, or customer hires you as a boutique tester. He tells you: "I'd like you to evaluate this product for one hour and then share with me what you know. I'll pay you a big pile of money."

In that case, I might want to be very clear about what responsibility I am taking on. If I were testing missile-launch software, I might have quite a few concerns.

For the most part, though, I don't have a problem taking on a project like this, especially if the pile of money is large enough.

In that case, an estimate is pretty easy, right? An hour, maybe two if you want a debrief and you'd like me to write up some documentation.

Most software testing projects are somewhere between "Take an hour and tell me what you find" and "how much time do you need?"

In many cases, it might be wise for us to consider test negotiation, not test estimation.

Test Negotiation

Management knows how long they want testing to take - they want it to be free. Or at least ridiculously cheap; they'd certainly prefer the one hour, the one day, the one week test 'phase' to whatever reality offers.

And that's okay.

So one thing we can do when it comes to testing is offer the cafeteria.

Hopefully you're familiar with the idea of cafeteria-style dining - each piece is sold separately, and you only pay for as much as you want.

The way the cafeteria works is that we come up with a range of plans, at a range of price points, that address a range of risks. We use the tools in the previous six posts to estimate them. Then we ask management: "Which (testing) services would you like to pay for?"
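The menu can be as simple as a priced list of services. Here's a minimal sketch in Python of what the cafeteria might look like, assuming a flat hourly rate; every service name, hour figure, and risk label below is an invented illustration, not a recommendation:

```python
# A hypothetical "cafeteria" of test services: each item is priced
# separately, and management picks which ones to pay for.
# (service, estimated hours, risk the service addresses)
MENU = [
    ("smoke test of critical paths",      8,  "total failure in production"),
    ("regression of last release's bugs", 16, "old defects resurfacing"),
    ("cross-browser UI pass",             24, "layout breakage on old browsers"),
    ("performance/load test",             40, "slowdown under peak traffic"),
]

def quote(selected, hourly_rate=100):
    """Total the hours and cost for the services management chose."""
    hours = sum(h for name, h, _risk in MENU if name in selected)
    return hours, hours * hourly_rate

hours, cost = quote({"smoke test of critical paths",
                     "regression of last release's bugs"})
print(f"{hours} hours, ${cost}")  # 24 hours, $2400
```

The point isn't the arithmetic; it's that each line item has a visible price and a visible risk, so skipping one becomes an explicit, signed-off business decision rather than a silent gap.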

Based on that discussion, we draw up a test strategy. Along the way, we make recommendations to management, but let them sign off.

This does two things for us. First of all, it transforms our roles from roadblocks and nay-sayers to consultants and enablers.

Second of all, it moves risk management decisions to the senior management layer.

I have been in meetings where someone asks "Why didn't QA find that?" and the VP of Engineering replies "I signed off on QA skipping that test; we decided to take a calculated risk to hit the deadline."

Suddenly, once the senior executive takes credit for the defect, it's not such a terrible thing that "QA missed that bug." Instead, it was a business decision.

I could get sarcastic about that, but the thing is: It's actually true. It was a calculated, explicit business decision.

Bringing our customers and stakeholders into the process will help them understand what we do, and understand the great task of "complete testing." (Whatever that means.)

It's a better place to be.

As to how to get there, and how to test under iterative and incremental models, well ... more next time.