
Wednesday, December 27, 2006

Against Systems

As a military cadet, I had a few occasions to design systems - generally point systems. For example, the number of points required to graduate from a summer encampment, or a merit/demerit system.

I typically would write a page that gave guidelines and concluded with "Plus or minus (some big number) for items of exceeding excellence or discredit." That failed to meet my superiors' expectations. What they wanted was a complex, detailed, organized, predictable system. They wanted something comprehensive.

That always amazed me. First of all, if such a system could be built, someone would find a way to game it. Make Public Displays of Affection (PDA) a one-point demerit, and some cadet would end up embracing his girlfriend during pass-in-review, collect the demerit, and reply "it was worth it." Or, for encampment graduation, a cadet could do the absolute bare minimum to graduate, reflecting an attitude that deserved no reward, while a handicapped cadet (we had a few) might do his best and not quite make it.

It also happened on promotion forms. You would have a cadet who just didn't get it, but you'd be forced to use the CAPF 50 (leadership eval form) to do an objective evaluation. Sadly, "gets it" was not a category on the sheet, so the overall score would come out too high. What do you do? Pass the cadet, systematically mark him down in everything to fail him, or spend an hour trying to figure out how to "fairly" complete the feedback form, with accurate feedback, in a way that produced the outcome you wanted?

This problem isn't just limited to me. One of the themes of the movie Thirteen Days, which is about the Cuban Missile Crisis, is the desire of the military establishment to escalate the crisis to war. To do this, they get the president to agree to a set of "rules of engagement," then use brinksmanship to escalate the level of conflict until US troops are in harm's way. At that point the rules of engagement would require a counter-attack to "defend" our troops.

I'm all for decision support systems (DSS). There's a difference, however, between a DSS that provides information and one that makes the decision for you. As a decision maker, why would you force yourself into a system that limits you? Why develop a system that takes over, limiting your ability to use common sense and good judgment? Why would you want that?

Often, there are perfectly good reasons for this. Some ERP systems, for example, can predict trends better than a human can, limiting the excess inventory that needs to be carried while ensuring the shelves stay stocked. In other cases, like hiring, there is a real risk of being sued for picking one person over another. An objective system that decides for you, a point system for example, can limit legal risk.

Or, I dunno, say you are running a conference and you want to provide feedback to the people you rejected. Having a templated form that every member of the committee fills out that you can average looks a lot more impressive than saying "Well, we talked about it, and you didn't make the list. Try again next year."

But, there's a problem, and that is this:

Most first-pass objective systems suck. They really, really, suck.

I'll say it again: Most first-pass objective systems suck.

I've known this since I was fifteen, when I tried to build my first merit/demerit system, but I did not find out why until much later. Wording my explanation is hard, but I'm going to try, so please bear with me:

1) Modeling system effects is hard. When you reward something, you get more of it, but you get more of exactly what is measured. You want people to get leadership training, so you give them points for taking the class and they are going to take the class. But you really want them to learn about leadership. Did that happen? Maybe. Maybe not.

2) The more complex the system, the more variables.
Modeling a two-hour-a-week-plus-some-weekends-and-encampments military environment, with the same rough goals and training schedule, is hard. Imagine the workplace, where each project is different!

3) The more variables, the more interactions, reactions, and unintended consequences.

In the 1970s and 1980s, the great solution for this was going to be Artificial Intelligence (AI) and neural networks: computers that could learn. It's actually pretty easy to build a system in LISP that can look at what books you like, find other books that people with similar tastes like, and make recommendations. When you are dealing with a closed system of a few million books, the problem is tractable.
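To make the "closed system" point concrete, here is a minimal sketch of that kind of book recommender in Python rather than LISP. The titles and reader data are invented for illustration; the technique is simple co-occurrence counting, the crudest form of collaborative filtering.

```python
from collections import Counter

# Hypothetical data: each reader's set of liked books (invented for illustration).
readers = [
    {"SICP", "On Lisp", "Paradigms of AI Programming"},
    {"SICP", "On Lisp", "The Little Schemer"},
    {"SICP", "The Little Schemer"},
]

def recommend(liked, all_readers):
    """Rank books liked by readers who share at least one book with you."""
    scores = Counter()
    for other in all_readers:
        if liked & other:                # overlapping taste with this reader
            for book in other - liked:   # count only books you haven't read
                scores[book] += 1
    return [book for book, _ in scores.most_common()]

print(recommend({"SICP"}, readers))
```

The whole trick works because every variable is inside the system: a fixed catalog of books, a fixed log of who liked what. The moment the inputs come from the open world, the counting stops being the hard part.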

Real life is not a closed system.

It turns out that it's easy to make a CASE system that requires a requirement for every check-in, or a requirements template to be complete before coding begins.

But the judgment that the template is filled out well? That is best done by a human. It is very hard to make an AI program capable of assessment, or any of the higher levels of Bloom's taxonomy, except in very specific applications, and in those cases the computer is really just parroting back what some human told it.

Why am I talking about AI?

Because most processes and systems are really, really bad AI: AI programming in a vague and ambiguous language that is much closer to BASIC than LISP.

This is a huge part of the problem that CMM(I) and ISO 9000 have. They want to be one-page descriptions that say "Do the Right Thing" or "Do Good Work," but you need to define "Good" and "Right," and trying to do that in a crappy language like English (which is worse than BASIC), while dealing with all of the variables in software development, is, well ... hard.

To my knowledge, there is only one computer that can synthesize all this information, and it is the human brain. The role of the human brain is to make sense of and integrate the world around it. If you've ever had intuition, or a gut feeling, you know that it can be a lot more than emotion. It can be the left and right sides of your brain working in concert to solve a problem at the subconscious level. And where process descriptions fail, the human brain can be surprisingly good at solving problems. (For example, to quote Michael Bolton: "If your project has dug itself a hole, your process ain't gonna pick up the shovel.")

The job of collecting, synthesizing, and making a judgment is a craft. Like art, writing, development, and testing, judgment can be improved with practice. In future entries I would like to explore a few of these exercises.

For the time being, here's my $0.02: Be skeptical of systems that spit out answers about behavior and judgments. Ask questions about the weighting.
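To show why the weighting matters, here is a toy example with invented numbers: two candidates, the same scores, the same formula, and the ranking flips entirely depending on which weights whoever built the system happened to choose.

```python
# Hypothetical scores and weights, invented purely for illustration.
def weighted_score(scores, weights):
    """Classic weighted-sum scoring: sum of weight * score per criterion."""
    return sum(weights[k] * scores[k] for k in weights)

alice = {"experience": 9, "communication": 4}
bob   = {"experience": 4, "communication": 9}

favors_experience    = {"experience": 0.7, "communication": 0.3}
favors_communication = {"experience": 0.3, "communication": 0.7}

# Same candidates, same scores; only the weights differ.
print(weighted_score(alice, favors_experience),
      weighted_score(bob, favors_experience))      # Alice comes out ahead
print(weighted_score(alice, favors_communication),
      weighted_score(bob, favors_communication))   # Bob comes out ahead
```

The "objective" answer is entirely a function of weights someone picked subjectively, which is exactly why those weights are the first thing to ask about.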

Remember this: If you are a decision maker overseeing such a system, it can exist to make the decision for you. To quote Richard Bach:

"If it's never your fault, you can't take responsibility for it. If you can't take responsibility for it, you'll always be its victim."

1 comment:

Unknown said...

I quite agree on almost everything. To me, systems modeling is just a sort of backup. A good human brain usually does a better job than a set of rules. But a set of rules is your best life vest if you have no good brains available (or if the good one leaves).