Tuesday, April 07, 2009

Quality, Agility, Maturity, and Discipline

A few weeks back, James Bach made a post on the death of quality. In the comments, Michael Butler wrote:

I think we should toss the word 'quality', period. Instead, ask people to describe what qualities they believe make good software.

To expand on that in my own words: we use 'Quality' as a sort of shorthand. Instead of saying exactly what we mean - which might be "fast", or "pretty", or "free from defects", or even "good value for the money" - we sum up that idea in 'Quality' - a single, short, easy-to-transmit symbol.

The person who hears us then interprets that in his or her own way - and it's very easy to do. So you get into these arguments about whether or not, say, Microsoft Vista is a good quality product, and you realize people aren't arguing about how Vista performs, or its features, but are instead arguing about the definition of Quality.

Do you think that's an argument that's ever going to actually "wind down" and come to consensus? Somehow, I doubt it.

It turns out we do this all the time in software development.

In fact, it's a very subtle subconscious ploy. Let's say you don't like something new - say, the Agile Manifesto in 2002. How do you make it clear that whatever you are doing is better than 'Agile'?

Well, you could come up with specific, concrete examples and metrics - or you could try logic, or anecdote. But those take a lot of work, and a lot of investment of your readers' time.

You could just say that Agile Development is "Immature." Or perhaps "Undisciplined." Think that accusation never happens? Did you know that Barry Boehm published a book in 2003 called Balancing Agility and Discipline?

Yet I would hold that, by any conventional definition of discipline, writing tests /before/ you write code, refactoring mercilessly, and writing acceptance tests before development begins - even in the face of apparent schedule pressure - actually requires a great deal of discipline.

Likewise, Maturity is another word you can throw around. It's easy enough to define a five-level (or three-level, as the case may be) maturity scale and place yourself at the top.

In these cases, "Quality", "Agility", "Maturity", and "Discipline" are all, well, code words: They mean "Good."

Using these terms allows the speaker to assert "my process/product/company is gooder than yours" and appear somewhat objective.

The problem is, if you're going to use one of these terms, you have to define what the word means for you.

Take, for example, Maturity. The simple definition of this word is "Grown Up." The more complex, common-usage definition is that the person has his or her own studied and well-considered value system - that they know what they stand for, why they stand for it, and they are willing to live with the consequences.

Thus, we call the 18-year-old who chains himself to a tree to prevent deforestation, then whines and complains and cries when he is thrown in jail, immature - and the 40-year-old who expected jail and takes it stoically, mature. And though age usually brings maturity, we would make the same call even if the ages were reversed.

So Maturity means knowing what /you/ stand for. Therefore, a maturity model needs to stand for something, to value certain things over others, because methodology design involves tradeoffs.

So companies that climb a maturity model are, in fact, saying "Look at us; we're mature, we are doing what someone else told us was good!"

I don't buy it.

But there's something else going on with maturity models - what is that value system? Is it explicit? What ideal are they striving toward? When I look at the 200-odd page CMMI document, I don't see the word Maturity defined. That concerns me.

To some extent, I argue that software development has physics envy - we want so badly to have formulas, models, and numbers that we embrace the first naive attempts at metrics and maturity models without examining the system effects.

Some models are valid; the periodic table of the elements, for example, doesn't just list the elements that were known at the time - its format and shape conform to the nature of the universe. We could use the periodic table to know which elements we had not found yet (they were on the chart but missing) and even to predict the atomic weights of those elements.

Are our definitions of Quality, Agility, Maturity, and Discipline good enough that we can predict real improvement? Or are they just shorthand for "good according to my value system"?

I submit that when it's the latter, we are obligated to list our value system, explain why we hold it, and perhaps drop loaded words like "Maturity" and "Discipline" from our vocabulary. I've certainly tried that, and I continue to try to learn by way of this blog.

I've been studying software development with a passion for my entire adult life, often publicly. You may not agree with me - but I hope you agree my approach is rigorous, studied, and reasoned - that I know what I stand for and am willing to live with the consequences.

That's the best I've got. I welcome your comments.


PlugNPlay said...

Hi Matthew - at an old job, as we were trying to get certified at CMM Level 1, the guy who was showing us the "CMM Way" defined maturity, in response to my request for a definition, as the ratio of successes to attempts. More mature processes have a higher ratio. I decided that asking for a definition of "attempt", much less "success", would just be beating a horse clearly already dead. But I thought you'd like to know about it.

Belteshazzar Mouse said...

Quality is subjective. We try, as engineers, to make at least some points objective so we can measure them and improve. This is possible sometimes, and sometimes not possible. And that's OK.

Quality is also a large topic. We should not reject using the term or reject trying to reduce it to objective results only because it is too hard or because we want to avoid testing. It is a fuzzy term and a large term by design. And that's OK.

When we make an argument that some practice, method or model is better than another, it is always subjective as well. We can only state our case in a persuasive way and move on. Sometimes it is best to agree to disagree and just move forward.

We test what is important to us. We identify quality by what is important to us. Hopefully this is what is important to the users as well.