
Sunday, November 21, 2010

The Drake Equation of Software Testing

In the 1960s an astronomer named Frank Drake came up with a formula to estimate the number of civilizations on other planets -- specifically, the ones that had evolved to the point that we could contact them. That equation came to be known as the Drake Equation.

The Drake equation is roughly this:

N = R * f(p) * n(e) * f(l) * f(i) * f(c) * L


N = the number of civilizations in our galaxy with which communication might be possible;
R = the average rate of star formation per year in our galaxy
f(p) = the fraction of those stars that have planets
n(e) = the average number of planets that can potentially support life per star that has planets
f(l) = the fraction of the above that actually go on to develop life at some point
f(i) = the fraction of the above that actually go on to develop intelligent life
f(c) = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space
L = the length of time such civilizations release detectable signals into space.

This sounds impressive. I mean, if we could just determine those seven variables, we could calculate the chance of life on other planets, right?
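The arithmetic itself is trivial once you pick values; the hard part is the values. Here is a minimal Python sketch of the calculation, using purely illustrative guesses -- none of these numbers are claimed to be right:

```python
# The Drake equation: N = R * f(p) * n(e) * f(l) * f(i) * f(c) * L
# Every parameter value below is an illustrative guess, not a measurement.

def drake(R, fp, ne, fl, fi, fc, L):
    """Multiply the seven factors to get N, the number of
    civilizations in the galaxy we might be able to contact."""
    return R * fp * ne * fl * fi * fc * L

# One set of guesses: 1 star formed per year, half with planets,
# 2 habitable planets each, life always arises, 10% intelligent,
# 10% detectable, signals last 10,000 years.
N = drake(R=1, fp=0.5, ne=2, fl=1, fi=0.1, fc=0.1, L=10_000)
print(N)  # roughly 100 under these guesses; change any guess and N moves with it
```

Change any single input by a factor of ten and N changes by a factor of ten -- the output is never better than the worst guess that went into it.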

But ... Wait

It turns out that Drake's equation really says that one unknown number, which can only be guessed, can be calculated as a function of seven other numbers ... which we don't really know either and can only guess. (And if you try, you run into Kwilinski's law: "Numbers that are 'proven' by multiplying and dividing a bunch of guesses together are worthless.")

Now think about this: as an actual meaningful number, Drake's formula is pretty useless. If any of the guesstimates is off by a wide margin (or can't really be predicted at all), then your answer is a non-answer. Worse than no answer, in fact: a wrong answer wastes your time and pushes you toward bad decisions.
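To see how badly the guesses compound, multiply plausible low-end and high-end guesses for each factor together. The ranges below are my own illustrative picks, not established values:

```python
from math import prod

# Illustrative (low, high) guess ranges for each Drake factor.
ranges = {
    "R":  (1, 3),              # star formation rate per year
    "fp": (0.2, 0.5),          # fraction of stars with planets
    "ne": (1, 5),              # habitable planets per such star
    "fl": (0.1, 1),            # fraction that develop life
    "fi": (0.01, 0.5),         # fraction that develop intelligence
    "fc": (0.01, 0.2),         # fraction that become detectable
    "L":  (1_000, 1_000_000),  # years the signals persist
}

low  = prod(lo for lo, hi in ranges.values())   # every guess pessimistic
high = prod(hi for lo, hi in ranges.values())   # every guess optimistic
print(low, high)  # roughly 0.002 vs. 750,000
```

The low and high answers differ by more than eight orders of magnitude -- the "result" is whatever you wanted it to be, which is exactly Kwilinski's point.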

Yet what if we didn't try to come up with a 'solid' number, but instead used Drake's equation as a modeling tool -- to help us better understand the problem? To help us figure out what questions to ask? To guide our research?

Suddenly, Drake's equation has some merit.

Back to software testing

Over the next few months I plan on doing some work in the area of the economics of software development -- testing specifically, but also other aspects. To do that work, I intend to throw out some illustrative numbers to help model the problem.

I don't claim that those numbers are "right", nor that any numbers are "right"; the economic value of a software project will depend on what the project is, who the customer is, what the technology stack is, the skill of the staff, the point in history ... illustrative numbers are overly simplistic.

So I'm going to abstain from "proving" final answers with those numbers; instead, I'll use them to tell a story -- that it could be conceptually possible for a certain technique to work if things turned out like the example.

With that foundation model in place, we can make different decisions about how we do our work, and see how it impacts the model.
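As a purely hypothetical illustration of what "run a decision through the model" might look like -- every name and number here is invented for this example, not a claim about real projects -- a decision like "how much testing?" can be compared inside a Drake-style toy cost model:

```python
# Hypothetical toy model: total cost = cost of testing plus cost of
# escaped defects. All inputs are illustrative guesses.

def total_cost(defects_present, catch_rate, testing_cost, cost_per_escape):
    """Testing cost up front, plus the cost of defects that escape to production."""
    escaped = defects_present * (1 - catch_rate)
    return testing_cost + escaped * cost_per_escape

# Decision A: light testing -- cheaper up front, more escapes.
a = total_cost(defects_present=100, catch_rate=0.5,
               testing_cost=10_000, cost_per_escape=1_000)

# Decision B: heavier testing -- dearer up front, fewer escapes.
b = total_cost(defects_present=100, catch_rate=0.9,
               testing_cost=40_000, cost_per_escape=1_000)

print(a, b)  # under these particular guesses, B comes out cheaper
```

Move any guess by a factor of a few -- cheaper escapes, fewer latent defects -- and the winner can flip, which is exactly why the model is a tool for reasoning, not for proof.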

I want to be very clear here: I'm going to throw out ideas early in order to get feedback early. The ideas I throw out may be cutting edge -- and they will certainly be wrong, because all models are wrong. But they might just have a chance to positively impact our field.

I figured it's worth a try.

Plenty more to come, both here and on the STP Test Community Blog.
