Schedule and Events



March 26-29, 2012, Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference for the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At Liberty; available. Contact me by email: Matt.Heusser@gmail.com

Saturday, August 27, 2011

Tomorrow Through the Past

(Continued Apologies to Robert A. Heinlein)

This year, at the Conference for the Association for Software Testing (CAST 2011), I was a little sad, but not surprised, to hear so many people say things like "Gosh, I had never heard of this Context-Driven thing. I've got a lot of reading and catching up to do when I get home."

Actually, that last part -- the lot of reading and catching up to do -- that is kind of encouraging. To be involved in building that next generation of super testers ... that's pretty cool.

It turns out this problem is not unique to software testing.

In the 1980s, when Tom DeMarco and Tim Lister wrote Peopleware, they pointed out that while the typical programmer might have books on programming syntax on their desk, most of the programmers they interviewed had never actually read a book on programming style, method, or methodology.

This month, on the ASQ blog, Paul Borawski notes that perhaps thirty percent, or less, of the attendees at a typical American Society for Quality (ASQ) event have heard of W. Edwards Deming, the champion of quality. He asks, in essence, "Are we forgetting our history?"

Now ASQ is not talking about software test history, but instead the greater history of the Quality Movement -- specifically, quality in manufacturing.

It's kind of a big deal.

Consider, for example, Deming's Seven Deadly Diseases of management. I'll quote the first five, using John Hunter's wording:
  1. Lack of constancy of purpose

  2. Emphasis on short term profits (Overreaction to short term variation is harmful to long term success. With such focus on relatively unimportant short term results focus on constancy of purpose is next to impossible.)

  3. Evaluation of performance, merit rating or annual review (see: Performance Without Appraisal: What to do Instead of Performance Appraisals by Peter Scholtes).

  4. Mobility of top management (too much turnover causes numerous problems)

  5. Running a company on visible figures alone (many important factors are "unknown and unknowable." This is an obvious statement that runs counter to what some incorrectly claim Deming taught - that you can only manage what you measure. Deming did not believe this and in fact saw it as a deadly disease of management)

When I worked at a Fortune 500 company, and even mid-sized companies with as few as 400 employees, every single one of these deadly diseases was considered a "best practice."

These practices were institutionalized. For example, one company had a web-based system for performing annual reviews. If you did not fill out your forms by a certain date, including your goals, targets, and management-by-objectives, you were guaranteed no raise.

This was company-wide.

Bear with me here; I'm seeing a trend.

Something Rotten in Quality-Land

You see, something happened after the quality revolution hit Japan, and Detroit had to struggle to keep up. Top management saw value in the Quality movement, but they were too busy with the busy business of production to do anything about it. So, at best, "quality" was delegated to some fresh, young engineer in the rear rank. "You go for it, Joe," they said, "you're our quality guy."

At worst, it was shunted to a Vice President, who shunted it to a director, who created a manager of quality, who continued to delegate it until "quality" became the guy who forgot to step backwards.

Or, perhaps, something entirely different happened; quality was re-interpreted as "compliance", and the company created a "quality system" to comply with an ISO standard.

We could debate the value of ISO-9001 for manufacturing; it certainly has its proponents. My point is that Deming laid down seven deadly diseases of management, and, twenty-odd years later, the top five, if not all seven, are common, institutionalized practices in corporate America.

PLUS we have these people who are considered quality professionals, doing ... something ... working within a system whose thinking is in diametrical opposition to the principles that Deming was proposing.

I know how that goes.

The Bad News

Being the delegated quality guy when management follows a different ideology and wants "that quality stuffs" but isn't willing to change or pay for it -- yeah, that's hard. I've been that guy, and it's not much fun.

It's worse, however, to be titled the "quality guy," working in that same system, but ignorant of the craft. Again, we're stuck: at best you might make some small, occasional, modest improvements; at worst, quality continues to get a bad name. The most likely case is that your organization will become infected with questionable metrics, unhelpful process documents, and the occasional template.

Yuck.

I see similar issues in my own tiny sub-field of software testing, mostly in the area of test automation.

I have seen organizations implement test automation, expecting test cost to go down ... only to see it actually increase.

This leads me to the maxim "When you automate tests, testing costs go up."

Before making a wild, unsubstantiated claim like that in public, I wanted to check in with a few people. A discussion with my friend David Christiansen helped me clarify the statement a bit.

I don't mean to criticize all test automation; I'm very excited about test automation at the developer level. Even customer-facing tests can be done well -- if you have a tight feedback loop, or have the developers doing the automation, so that if they change a GUI field, they also know to change the GUI test.
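To make that coupling concrete, here is a minimal sketch (Python with Selenium WebDriver; the local URL, field names, and "Welcome" text are hypothetical, not from any real project) of the kind of customer-facing GUI check I mean:

    # Minimal GUI-level check, sketched with Python + Selenium WebDriver.
    # Everything specific here (URL, element names, expected text) is hypothetical.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get("http://localhost:8000/login")   # hypothetical app under test
        driver.find_element(By.NAME, "username").send_keys("demo")
        driver.find_element(By.NAME, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()
        # If a developer renames the "username" field or the login button,
        # these locators break and the test fails even though the feature works.
        assert "Welcome" in driver.page_source
    finally:
        driver.quit()

The script is coupled to the names of elements on the page. When the developer who renames a field also owns the script, the fix ships in the same change; when a separate specialist owns it on a long feedback loop, the rename surfaces weeks later as a "failure" that is really just a change.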

No, instead I'm talking about a very specific set of circumstances. The term David used was "when the stars align."

When

* Management sees value in test automation, but the developers are too busy to do the work
* So they delegate the development of the automated GUI tests to a specialist, who does the work after development.
* These tests are designed to run at the press of a button, both exercising the GUI and providing analysis and results.
* The feedback loop between development and test is long; perhaps the tests only run as a suite before release. After weeks of growing differences between the systems, the automated tests will report failures that are actually 'just' changes. These differences need to be investigated and reconciled.
* And the company already has strong, disciplined testing in place, using an approach that is at least partially exploratory
* And Management expects costs to go down

Under these conditions, I see the cost of testing going up, generally without much increase in velocity (features delivered over time) or improvement in defect rates. In most cases, velocity goes down.

That is to say, I see companies paying more for software testing and getting worse results.

But ... why?

Now I don't want to paint with too broad a brush here. People have different reasons for the decisions they make; the best I could do, maybe, would be to provide some generalities: rules of thumb that are often wrong.

Perhaps 'suspicions' would be a better term.

Did you notice the pattern between the ASQ and Software Testing? We have people who were successful doing things a specific way, and have built a worldview around that ideology. They hear about an opportunity to improve, and grab at it ... then re-interpret that opportunity around the existing worldview.

Then they outsource the work, but put constraints upon the work.

"We want that 'quality' stuffs, but don't you dare touch my annual review!"

(Come to think of it, we have a very similar problem in the United States Government. But I digress.)

But how do we fix it?

I'm afraid I don't have a "Mary Poppins" answer to this one. It's a hard problem, and it's far too easy to say something like "the solution is dialogue and education."

By dialogue, I mean conversation.

Companies that adopt these strategies do so because they believe they will work -- and that may be possible. So when these conversations start, we can get trapped in all kinds of different mistakes. You might say "test automation" and the other person thinks developer testing, which I'd agree can have a lot of value. Or you might say "costs go up" and deeply offend the other person. Or they might feel insulted or condescended to.

Most importantly, these conversations are best done by invitation.

By reading my blog, you asked for my opinion -- even if you disagree. Yet many times our opinion is not asked; instead, someone in authority is handing out directions.

Responding to that direction with integrity can be a challenge for all of us; I'm working on it too.

I have a number of personal initiatives for 2012, but one of them that is just starting to heat up is "Test Coach Camp", which I hope will produce more concrete guidance on Crucial Test Conversations.

Our industries are young; the pull of popular culture and cliches is strong.

We've got work to do.

Let's get to it.
