George Dinwiddie has a great response to my previous post on standards. In his post, George says:
Why do people want to choose a standard before they’ve tried something to see how well it works?
... (snip) ...
Processes, like software, don’t work well unless they, well, work. So get it working first, and then worry about whether you should standardize on it.
A short while ago I worked tangentially with a team that was implementing a business intelligence tool. The architect of that team was quick to point out that they were really implementing a business intelligence service - a process. In that case, it was important to get the process right. So, once the servers were installed, but before they rolled the software out to a customer, the team had to develop the methodology. They had to define the process for requesting a BI universe, for defining it, for scheduling it, and for prioritizing it. For each of these sub-processes, the team had to use a standard template. The architect was very clear that the template had to be filled out correctly, and there was some amount of back-and-forth over how to correctly describe a methodology component.
They had a template standard. They had a standard standard.
And, eventually, the group threw it all out, choosing to instead actually run a project, then document what they did.
The odd thing that I have seen is that groups that do this often fail in a way that is predictable and repeatable - yet don't see the problem.
For lack of a better term, I will call this "Paradigm Blindness" - when you see the way you work as "right" and build defenses against it, instead of admitting that experiential evidence should guide the way we work.
With paradigm blindness, when you have documented test scripts and quality stinks, you say "Next time, we need to do a better job documenting our test scripts."
When the requirements process is a big, painful waste of time and the customer is unhappy with the results, you say "Next time we need to get the requirements right up front, before we ever write a line of code."
When the company keeps trying to increase the accuracy of its estimates and failing, you focus on getting "more accurate estimates" instead of trying to deal with uncertainty as a real thing.
Meanwhile, there's a quality guru named Deming who came up with an idea called the Plan-Do-Check-Act cycle: do a little planning, then conduct an experiment to see if the standard will work, then check the results, then act on what you learned - often in repeated cycles.
There's another name for this: the scientific method. And when methodology proponents argue for processes that defy the scientific method as "right," you've probably got a case of paradigm blindness going on.
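If you squint, the PDCA loop above reads like pseudocode. Here's a toy sketch of it in Python - the function names, the trial data, and the "escaped defects" check are all invented for illustration, not anything Deming (or I) prescribed:

```python
# A toy illustration of Deming's Plan-Do-Check-Act cycle.
# All names and numbers here are made up for the example.

def pdca(plan, do, check, act, max_cycles=3):
    """Run Plan-Do-Check-Act until the check passes or we give up."""
    for cycle in range(1, max_cycles + 1):
        hypothesis = plan()         # Plan: a small, cheap guess at a standard
        result = do(hypothesis)     # Do: try it on one real project
        if check(result):           # Check: did the evidence support it?
            return act(hypothesis)  # Act: adopt it (and keep adjusting)
        # Otherwise loop and try again; a fuller version would feed
        # the results back into the next plan.
    return None  # No standard earned adoption - so don't mandate one.

# Toy usage: only "standardize" a practice if a trial run actually
# improved the escaped-defect count.
trial = {"defects_before": 12, "defects_after": 7}
adopted = pdca(
    plan=lambda: "lightweight test charters",
    do=lambda hypothesis: trial,
    check=lambda result: result["defects_after"] < result["defects_before"],
    act=lambda hypothesis: hypothesis,
)
print(adopted)  # -> lightweight test charters
```

The point of the sketch is the shape of the loop: the standard comes out of `act`, after the experiment, not before `plan`.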
I've been seeing this more often (see the last few posts.) Does anyone have any ideas on how to deal with it? :-)
POSTSCRIPT: I was a bit worked up when I typed this. I hope it makes some sense.
A Google search points out that "Paradigm Blindness" is, apparently, a real phrase - not quite real enough to make Wikipedia, though. The best definition I found was "Paradigm blindness is a term used to describe the phenomena that occurs when the dominant paradigm prevents one from seeing viable alternatives." From this web site in Google Cache.