AKA – Tripping through physical models of the universe(*)
Aristotle proposed a model of the universe with the earth at the center and the heavens revolving around it. The model could not explain the seasons, or why days are longer in the summer, but it could predict the behavior of the moon – and the heavenly bodies seemed to support it.
Copernicus and Galileo changed the model, insisting that the earth revolves around the sun, as do Mars, Venus, and the other planets.
The only problem is, well, it didn’t work. Copernicus thought that the earth traveled in a circle, when it actually travels in an ellipse. Galileo’s observations could confirm the theory in some respects, but in others the math wasn’t quite right. It wasn’t until Newton invented calculus that we got planets traveling in ellipses and equations good enough to predict their behavior.
That is, er … good enough to predict behavior most of the time. Objects that were very small or very fast tended to be "off" from what Newton’s equations would suggest. Still, most things here on planet earth fit well into Newton’s methods; it wouldn’t be until Einstein that we figured out the equations to calculate space-time distortion for objects as they approach the speed of light.
That is more than a brief outline of scientific history – it is the story of the evolution of a model. At each level, the model becomes more precise, more detailed, more formal, more "correct", and a better way to predict (or analyze) the behavior of objects(**). Notice, too, that more "correct" models tend to take more time and effort to learn, understand, and master, and that a model can be “good enough” for the person using it. Third Century British farmers didn’t need Newton to raise crops, and high school students in Physics I don’t need Einstein to predict the distance a cannonball will travel.
Now, I said that lower-level models take more work and more variables. In a hierarchical organization, the higher levels do not have the time or the attention for those details; they are managing many, many more projects in a stack. The test project that is your life's work is only a "phase" to the project manager, and one part of one project to the PMO. To the CIO the whole project is a "tactical objective." To do the detailed model well, you need a lot of variables; the CIO doesn’t know them at that level, and the detail-minded middle manager probably doesn’t have the authority to answer them definitively.
So we end up having these conversations where someone "just" wants a single number to put on a bullet point on a PowerPoint slide, and we say "it depends, what about ..." and try to get the variables for the model. It can be painful.
Here’s one technique I have used with some success: ask for the margin of error. When you’re told that it must be very precise (plus or minus five days, for example), physically roll up your sleeves and say something like "ok, let’s get to work." Then plan a half-day session to hammer the thing out with the key players involved.
Building an accurate predictive model is, well, relatively impossible(**); ask anyone who plays the stock market. Far too often, we expect precise, Einstein-like answers when we only have the data of a 3rd Century Farmer.
Perhaps, then, we should estimate like a 3rd Century Farmer: "We expect to harvest at the end of the third quarter; it depends on the weather ..."
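One way to put that "it depends" into numbers is to report a range instead of a single date. Here’s a minimal sketch of the idea; the task names and figures are invented for illustration:

```python
# Each task gets a best-case and worst-case guess (numbers are made up).
tasks = {
    "test planning": (2, 4),
    "test execution": (5, 12),
    "bug-fix retest": (3, 8),
}

# Sum the low ends and the high ends separately to get a range.
best = sum(low for low, high in tasks.values())
worst = sum(high for low, high in tasks.values())
print(f"Estimate: {best} to {worst} days, depending on what we find.")
```

The range itself is the answer; collapsing it to one number throws away exactly the information the model was built to expose.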
Of course, there are other tricks - both for estimates and for models. More later.
(*) - We use models and prediction every day. When we say that "Bob is a jerk, he won't help", we are making a stereotype, or model, of Bob. It may predict his behavior, and may even do so accurately, but it could be that Bob isn't a jerk at all. He might be over-worked and going through multiple deaths in his family. As we improve our understanding of the world, we can 'model' it better - these are informal models, and they are far more common than formal ones with mathematical rigor.
(**) – "predict" is a funny word. Newton’s equations allow me to predict the speed at which an object will hit the ground if I know the height it fell from, but I am not suggesting that I have a crystal ball and can predict the future. Any project will have risks, and if they all come true at the same time, well – forget about it. The point is that more analysis allows us to make better educated guesses.
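To make that kind of prediction concrete: for an object dropped from rest, Newton’s laws give an impact speed of v = sqrt(2gh). A quick sketch, assuming no air resistance (a simplification):

```python
import math

def impact_speed(height_m: float, g: float = 9.81) -> float:
    """Speed (m/s) of an object dropped from rest at height_m,
    ignoring air resistance (a simplifying assumption)."""
    return math.sqrt(2 * g * height_m)

# A drop from 20 meters:
print(round(impact_speed(20.0), 1))  # prints 19.8 (m/s)
```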
Schedule and Events
March 26-29, 2012, Software Test Professionals Conference, New Orleans
July, 14-15, 2012 - Test Coach Camp, San Jose, California
July, 16-18, 2012 - Conference for the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At Liberty; available. Contact me by email: Matt.Heusser@gmail.com