Schedule and Events



March 26-29, 2012 - Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference for the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At Liberty; available. Contact me by email: Matt.Heusser@gmail.com

Wednesday, January 31, 2007

HP Culture, Circa 1976


Wozniak:"No, I'm never going to leave Hewlett-Packard. It's my job for life. It's the best company because it's so good to engineers." It really treated us like we were a community and family, and everyone cared about everyone else. Engineers—bottom of the org chart people—could come up with the ideas that would be the next hot products for the company. Everything was open to thought, discussion and innovation. So I would never leave Hewlett-Packard. I was going to be an engineer for life there.

...

Livingston: You were designing all of these different types of computers during high school at home, for fun?

Wozniak: Yes, because I could never build one. Not only that, but I would design one and design it over and over and over—each one of the computers—because new chips would come out. I would take the new chips and redesign some computer I'd done before because I'd come up with a clever idea about how I could save 2 more chips. "I'll do it in 42 chips instead of 44 chips."

The reason I did that was because I had no money. I could never build one. Chips back then were... like I said, to buy a computer built, it was like a down payment on a good house. So, because I could never build one, all I could do was design them on paper and try to get better and better and better. I was competing with myself. But that's just the story of how my skill got so good. It's because I could never build anything, I just competed with myself to come up with ideas that nobody else would come up with.

- Steve Wozniak

(Link leads to excerpt from Founders At Work)

In the article, we see that Steve is obsessed with the idea of creating designs that are simple and elegant. That just sort of, well, work. That can flow intuitively.

It's the same with software. If your software sounds complex and hard to understand, it's probably buggy. Keeping the software simple - searching for the elegant solution instead of the easy one - leads to fewer lines of code, faster development, and fewer defects. A few years ago I wrote Beautiful Code for Dr. Dobb's, but I suspect it needs a follow-up. Beautiful Systems? Beautiful Solutions? hmm ... needs work.

Tomorrow: Blue Man Group. Serious this time!

Tuesday, January 30, 2007

Errata

I'm pretty buried in side projects right now, but I did want to get a few things out.

First, I found this bit from Joel Spolsky, excerpted from the book Founders At Work, and wanted to share:

I think probably there are a lot of workaday programmers working on upgrades to Enterprise Java (now I've insulted all the Java programmers) who never achieve flow. To them, it's just kind of engineering step by step; it's never the magic of creation.

- http://www.foundersatwork.com/joelspolsky.html

If you have ever considered going independent, or just find the subject interesting, I think you'll find it a good read.

Second, my speaking schedule for 2007 is out. I will be moderating lightning talks at STAREast in May, and the "call for lightning talkers" is on ...

Wednesday, January 24, 2007

Against Systems - IV

I've been looking for a way to wrap this up, when I stumbled across an article titled "Process as a Substitute for Competence" that seems to do it for me ...

On one recent project, the software development managers insisted on a 5-page requirements definition document, two management reviews of that document, a functional specification with four contributors, a peer review and a management review of that document, a detailed design document, a management review and sign-off of that document, and a test plan before development could begin on the code.

The code in question? Five lines of SQL.

...

The problem with this model, though, is that anyone who is halfway competent finds himself spending more time on process than on actual, productive development activity. Everything is drawn out over a huge period of time. I think this is the main thing that's been driving alternative methodologies like Agile and SCRUM -- the desire by skilled developers to sweep away all that process that's "in their way," and get rid of documentation and meetings.

- http://killthemeeting.com/mt/2006/10/process_as_a_substitute_for_co.html

... I was wrong

A few weeks ago I put out a post on Elisabeth Hendrickson's 2002 presentation, "Why Are My Pants On Fire?".

My conclusion was that the sentiment was obvious, but people don't act that way. While we might "know" the material intellectually, our behavior indicates otherwise. So, while the material wasn't new, we still need more people saying it.

After three weeks of talking to folks, it's becoming clear to me that this is, in fact, not obvious to a great many people.

That scares me. It tells me that we don't just need more people presenting ideas like this --- we need a lot more.

Elisabeth, please accept my apology.

Monday, January 22, 2007

Against Systems: Interlude ...

More Against Systems to come, but first, a couple of things ...

1) Jon Kohl points out that we only exist because of little systems like circulation, the muscular system, the solar system, and so on. Obviously, the blog entries "Against Systems", a title I chose for a bit of shock value, are talking about a different kind of system: systems developed by humans that are, well, naive at best. If I had to do it again, I'd probably choose a different title, but right now, too many people have linked to the series for me to change the name (and have Blogger change the name). Perhaps I can find a better name for some follow-up posts.

2) In November I saw Chad Fowler speak at XPWestMichigan, and blogged a bit about it.

Well, the talk is finally up on Google Video and you can see it for yourself.

When you hear the nerd who likes Perl and raises his hand ... that's me. :-)

Friday, January 19, 2007

Against Systems - III

Did you know that honor codes that consist of a specific list can be gamed? For example, the US Air Force Academy Honor Code, at one time, was:

"We shall not lie, cheat, nor steal, nor tolerate those among us who do."

So, let's say you are nineteen, go to a tavern and order a beer. The bartender doesn't ask your age. You don't tell him. Did you violate the honor code?

Think about it. You didn't lie. No one asked you a question! A lot of Air Force cadets in the 1980s said "No" - it wasn't an honor code violation - and therefore found bars where they would not be carded.

The problem is that the more precise you make it, the more holes enter into the system. Trying to nail down all of these little exceptions with more clarifications doesn't really help; you either get a legal brief that no one reads, or you leave room for someone crafty to think of exceptions.

There is another way. I found this on Wikipedia today:


Washington and Lee maintains a rigorous Honor System that traces directly to Robert E. Lee, who said, "We have but one rule here, and it is that every student must be a gentleman." Students, upon entering the university, vow to act honorably in academic and nonacademic endeavors. While "honor" is often interpreted as meaning that they will never lie, cheat or steal, the Honor System actually proscribes whatever behavior the current generation of students decides is dishonorable.

The Honor System has been run by the student body since 1906. Any student found guilty of an honor violation by his or her peers is subject to a single penalty: expulsion. Faculty, administration and even trustees are powerless; the Honor System is defined and administered solely by students, and there is no higher review. Referenda are held every three academic years to gauge each generation's appetite to maintain the Honor System and its single penalty, and the students always re-ratify the Honor System by a wide margin.

Washington and Lee's Honor System is distinct from others such as those found at the neighboring Virginia Military Institute and the University of Virginia because it is not codified. That is to say, unlike those others, Washington and Lee's does not have a list of rules that define punishable behavior.

The Honor System encompasses fundamental honesty and integrity. Other disciplinary frameworks exist to address lapses of social and behavioral standards that do not fall into the category of a student's basic honor. (If you cheat on an exam or take a book from the library without checking it out, it's an honor violation. If you go 55 in a 50-mph-zone, it isn't.)

As a result, a sense of trust and safety pervades the community. The faculty and staff always take students at their word (and indeed, local merchants accept their checks without question; many also extend credit). Exams at W&L are ordinarily unproctored and self-scheduled. It is not unusual for professors to assign take-home, closed-book finals with an explicit trust in their students not to cheat.

The Honor System clearly works. In most years, a few students are expelled after trials conducted by the elected student government (with the accused usually counseled by law students). Recently, expulsions have ranged from 8 in the 2003-04 school year to a more modest 2 in the 2004-05 year. Students found guilty can appeal the verdict to the entire student body, although this daunting option is not often exercised.


Something tells me that the two-to-eight people expelled each year were expelled for a fine reason, and that an appeal would just make them look bad.

Is that a "defined process"?

If it produces reliable output, why do we care?

Update: Just FYI, it's the 200th birthday of Marse Robert.

Three days in Indy?

I will be in Indianapolis on Thursday, March 22nd, presenting Rethinking Process Improvement at a meeting of the Indianapolis Quality Assurance Association (IQAA), with a one-day workshop on software requirements the following day. We're value-pricing the workshop to make it affordable - $250 for a one-day class - about the price that a corporate IT manager can expense without having to get sign-off from a vice president.

I haven't worked out the schedule 100%, but I hope to stay in town for the March Meeting of the Indianapolis Workshop On Software Testing, which is March 25th.

See you there?

Thursday, January 18, 2007

Intuition

James Bach is running a rapid software testing class over the web, and I am enjoying it. (You can take the course for $250 per session; read about it - here).

He also has forums set up for people who are taking the class.

Here's my latest post to those forums:

Michael Bolton Wrote:
Intuition, to me, means "some cognitive process that I can't articulate (mostly because I haven't tried, or I haven't really thought too hard about it)."

I dunno about that. I might agree with "often" instead of "mostly".

A few years back, I heard a speech by Dr. James Dobson, in which he claimed that in females, the connections between the left and right halves of the brain are closer and more active in communicating. That means, for example, that women might connect a nod of the head or a look in the eyes with an intent that men don't pick up on.

Several times since then, when my wife says "I don't trust that guy ..." about someone I might work with, I listen to her. And she is always right.

We call it women's intuition, and I think that, physiologically, there is something to it.

I think it's similar with men. With testers, it's often more like déjà vu - we notice that the software is doing something weird, but can't articulate why, and it's actually our subconscious that still remembers the SUPERMAN game we used to play on the Atari 2600, where you could win the game just by walking to the left at the beginning instead of going right - which was the action sequence that 'started' the game.

But we don't REMEMBER Superman.

So, I've learned to trust my gut. I would add that I think there is benefit to be gained from asking your gut "ok, why do you think that's true?" and active introspection. We might not always get an answer, but it's usually helpful ...

So, I guess I'm saying I agree with you, if to a milder extent.

Wednesday, January 17, 2007

Great Writing

George Dinwiddie wrote this and it struck me as worth quoting...

5. In spite of a technical background that goes back to childhood, my sailboat has virtually no electronics.
The depth sounder died years ago. I use a leadline. The knotmeter also gave up the ghost. I drop a potato chip off the bow and count the seconds until it reaches the stern. I made up a chart to convert that to knots. I’ve got a VHF radio, but it’s an old one that predates using frequency synthesizers, so it doesn’t have all the frequencies now in use. I’ve got a magnetic compass. And paper charts.

That’s it. No GPS. No radar. Nothing fancy, at all. Of course, the Chesapeake Bay is generally a pretty forgiving place to sail. I’ve only been in pea-soup fog a few times. So partly this lack of fancy toys is an expression of YAGNI rather than me being a luddite.

The main reason to sail without those aids is to experience more deeply using my own senses. I’m more aware of my surroundings because I’m looking around, rather than looking at a dial or screen. I feel how the boat moves through the water, how it responds to the wind and to the helm. And that’s really the reason for going sailing, in my opinion. If I just wanted to get to the other end of the trip, there’s faster and more efficient ways.

- http://blog.gdinwiddie.com

I haven't seen the Chesapeake Bay for nearly ten years. Oh, and in my mind, about five minutes ago.

What would it be like if more requirements - or any software engineering documents, for that matter - were written like that?
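Aside: George's potato-chip knotmeter is also a tidy little formula - boat length over seconds gives speed, which converts to knots. Here is a minimal sketch of the kind of chart he describes, in Perl, assuming a 30-foot boat (he doesn't say how long his is):

#!/usr/bin/perl
use strict;
use warnings;

my $boat_length_ft = 30;    # assumed length, bow to stern; George doesn't give his
my $ft_per_sec_per_knot = 6076.12 / 3600;    # one knot, expressed in feet per second

printf "%4s  %s\n", 'sec', 'knots';
for my $seconds (3 .. 15) {
    # speed in feet per second, divided by one knot's worth of feet per second
    my $knots = ($boat_length_ft / $seconds) / $ft_per_sec_per_knot;
    printf "%4d  %5.1f\n", $seconds, $knots;
}

At five seconds a chip, that works out to about three and a half knots - a pleasant afternoon on the Bay.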

Standards - II

George Dinwiddie has a great response to my previous post on standards. In his post, George says:

Why do people want to choose a standard before they’ve tried something to see how well it works?
... (snip) ...
Processes, like software, don’t work well unless they, well, work. So get it working first, and then worry about whether you should standardize on it.

A short while ago I worked tangentially with a team that was implementing a business intelligence tool. The architect of that team was quick to point out that they were really implementing a business intelligence service - a process. In that case, it was important to get the process right. So, once the servers were installed, but before they rolled the software out to a customer, the team had to develop the methodology. They had to define the process for requesting a BI universe, for defining it, for scheduling it, for prioritizing it. For each of these sub-processes, the team had to use a standard template. The architect was very clear that the template had to be filled out correctly, and there was some amount of back-and-forth over how to correctly describe a methodology component.

They had a template standard. They had a standard standard.

And, eventually, the group threw it all out, choosing to instead actually run a project, then document what they did.

The odd thing that I have seen is that groups that insist on the standard first often fail in a way that is predictable and repeatable - yet don't see the problem.

For lack of a better term, I will call this "Paradigm Blindness" - when you see the way you work as "right" and build defenses against it, instead of admitting that experiential evidence should guide the way we work.

With paradigm blindness, when you have documented test scripts and quality stinks, you say "Next time, we need to do a better job documenting our test scripts."

When the requirements process is a big, painful waste of time and the customer is unhappy with the results, you say "Next time we need to get the requirements right up front, before we ever write a line of code."

When the company keeps trying to increase the accuracy of its estimates and failing, you focus on getting "more accurate estimates" instead of treating uncertainty as a real thing.

In the meantime, there's a quality guru named Deming, who came up with an idea called the Plan-Do-Check-Act cycle - where you do a little planning, then conduct an experiment to see if the standard will work, then check the results, then move forward - and often, you do these in cycles.

There's another name for this: the scientific method. And when methodology proponents argue for processes that defy the scientific method as "right", you've probably got a case of paradigm blindness going on.

I've been seeing this more often (see the last few posts). Does anyone have any ideas on how to deal with it? :-)

POSTSCRIPT: I was a bit worked up when I typed this. I hope it makes some sense.

A Google search points out that "Paradigm Blindness" is, apparently, a real phrase - not quite real enough to make Wikipedia, though. The best definition I found was "Paradigm blindness is a term used to describe the phenomena that occurs when the dominant paradigm prevents one from seeing viable alternatives." From this web site in Google Cache.

Tuesday, January 16, 2007

Standards?

To paraphrase Tom DeMarco, I am all in favor of product standards. After all, product standards are the reason that I can buy a double-A battery from any manufacturer, plug it into my CD player, and not worry about compatibility. For that matter, it's the same thing with CDs, DVDs, VHS tapes, and the players for each type. Sony, Samsung, they are all the same, and they interchange well.

Process standards - how to build the double-A's - well, of those I am more dubious. The truth is, I really don't care how the two terminals of a battery are developed, as long as they are the right size and send the right amount of electricity at the right time. That's a big part of why I signed the Agile Manifesto - it explicitly says that we value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

So, when people rush to standards, I hold back a bit. Having a standard means saying "We believe that doing things the same way, every time, is worth more than the lessons we would learn by experimenting with new things."

My father, Roger Heusser, spent thirty years as a nuclear chemist and executive in the US Department of Energy. In other words, my dad is a scientist - he taught me to experiment, learn, and grow. My mother, a music and grammar-school teacher, taught the same.

... and I've got a problem.

I have been teaching software testing to two corporate classes, about an hour a week, for a few weeks now. Before I came in, the group knew 'how' to test, or, at least, they knew how to use a handful of templates to produce some impressive documentation, and, along the way, to make sure the happy path of the software mostly worked, some of the time ... more or less.

I challenged that. In class, we did some empirical experiments demonstrating that exploratory testing, documenting only the defects found, resulted in more bugs found in less time. I also had students rate how much fun they had on a scale of 1 to 10, and the students who did exploratory testing scored much higher than the people who were assigned to create documented test scripts and then execute those scripts.

Now I am finishing up equivalence/bounds, moving into scenario testing and other methods (there's a quick sketch of equivalence/bounds in the postscript below). But there's a problem.

Both classes want a 'cookie cutter' way to do development. They want a standard. They had a standard, and I've taken it away by proving empirically that they can do better work themselves.

But they still want a standard! They want something to cling to!

Yet there are no easy answers. (Or, I suppose, yes, there is a quick and easy way, but it's not good.)

Eventually, I hope to give the class a very wide and broad tool box, from which they will select how much of which tool to use in the limited time they have to test products before release.

I have no standard, and I am okay with that. The question is ... how can I help make the class okay with that?
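PostScript: If equivalence/bounds is new to you, the idea is to split the inputs into classes that should behave the same, then test hard at the edges of those classes. A minimal sketch in Perl - the "valid ages are 18 to 65" rule and the code are mine, invented for illustration, not something from the class:

#!/usr/bin/perl
use strict;
use warnings;

# Made-up rule under test: valid ages are 18 through 65, inclusive.
sub is_valid_age {
    my ($age) = @_;
    return ($age >= 18 && $age <= 65) ? 1 : 0;
}

# One representative per equivalence class, plus values at each boundary.
my @cases = (
    [17, 0], [18, 1], [19, 1],    # just below, on, and just above the low edge
    [40, 1],                      # a representative of the valid class
    [64, 1], [65, 1], [66, 0],    # just below, on, and just above the high edge
);

for my $case (@cases) {
    my ($age, $expected) = @$case;
    my $got = is_valid_age($age);
    printf "age %2d: expected %d, got %d -> %s\n",
        $age, $expected, $got, ($got == $expected ? 'PASS' : 'FAIL');
}

Seven test values instead of dozens - and most of the off-by-one bugs live right at those edges.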

Monday, January 15, 2007

An Actual Job Description I saw today -

For a senior programmer/analyst type ...

Must possess a strong understanding of software engineering principles including data normalization, structured programming, and software development life cycles.

How about this: Must understand, evaluate, and demonstrate the inherent problems with the term "Software Development Life Cycle", and its strengths and weaknesses in use as a model ...

What do you think?

Certifiable - III

I am drafting a reply to the agile-leadership group, but posting it here first.

Several people (including me) asked what problem certification solves, or who the "customer" is for the certification. To which Alistair Cockburn replied:

Back in message 416 or thereabouts I wrote: "In both cases, the market is so hungry for an agile PM certificate, that the market will snap it up (witness the response to the 2-day "Certified ScrumMaster" course)."

and in message 458 I wrote: "Looking at project managers in general, a company hiring PMs will follow two paths at the same time: they will send some of their existing PMs, or people about to become PMs, to PMI school to get their PMP certificates as part of general training. ..."


To me, this argument seems to boil down to: "The market wants it. If we don't do it, someone else will, and they will do a much worse job."

Please keep in mind, I respect Alistair and what he has done, and I mean this in the best possible sense, but there is just no easy way to say this.

"If we don't do it, someone else will" is not a reason - it is a rationalization. It is the same rationalization that people use to sell Crack, Heroin, and other drugs. And rationalization no reason, it is a logical fallacy - the fallacy of deceptive alternatives. (Logic and Rational Thought, Frank Harrison III)

Letting another vendor grab title-share is one alternative. Having the ALPN create the certifications is another. Education and dialogue is a third.

My proposed solution is dialogue. It won't make me rich, or even popular, but that way, at least I have a chance to be Good.

And that's a trade that I would make any day.

Regards,

--heusser

Housekeeping

I've had a couple (two) of spambot-like comments posted to the blog in the past week. If it continues, I may have to turn off anonymous posting, or turn on comment moderation.

I am loath to do this, but I don't want to waste my readers' time either. For the time being, I am aware and monitoring it.

Friday, January 12, 2007

Agile Metrics - II

I've been having an off-line discussion with Jared Quinert that follows up my post on Agile Metrics. Specifically, he noticed that I refer to the state a project is in, to indicate that a project progresses in a way that is non-linear. For example: if the customers reject a build badly enough, it could bounce back to development, or even "needs requirements", which indicates that we built the wrong thing, completely missing the mark without realizing it.

Jared saw my throughput metrics and had some comments; I thought they were worth sharing. (These are his words, used with permission)

You might have a bunch of different states your story could be in:

Needs Req's      In Dev      Cust. Test
    [ ]            [ ]           [ ]

We can think of these as indicating a story's current state, and that maybe all three of these things are true of a story simultaneously. I noticed a tendency for people to look at the boxes arranged left-to-right and read:

Needs Req's      In Dev      Cust. Test
    [ ]----------->[ ]----------->[ ]

So instead of collaborating to work toward a completed story, it became waterfallish, with small analysis-code-test phases. Your spreadsheet reminded me that the wording can be important. Ours was worded more like -

Story Complete    Design Reviewed    Dev Complete    Tester Verified
      [ ]---------------->[ ]--------------->[ ]-------------->[ ]

...which prompted even more waterfallish behaviour, because the wording describes activity completion (or trigger events for a state transition). Even on a later project, the story cards were arranged in columns and pushed across the wall from left to right. I don't think this is a great model for what we're usually trying to achieve.

You've pointed me at some ideas for different visual representations that might get people away from their old mental models. I know that ultimately most of it comes down to values, but it's nice to know if there are 'tricks' which can help guide the team's thinking.
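Jared's distinction - a story's status as several independent flags, versus one position in a pipeline - is easy to show in code. Here is a minimal sketch in Perl (the field names are my invention, not Jared's):

#!/usr/bin/perl
use strict;
use warnings;

# Pipeline view: the story is in exactly one place at a time.
my %pipeline_story = ( state => 'In Dev' );

# Flag view: several things can be true of the same story at once --
# exactly what happens when a build bounces back to "needs requirements".
my %flag_story = (
    needs_reqs   => 1,
    in_dev       => 1,
    in_cust_test => 0,
);

print "pipeline view: $pipeline_story{state}\n";
print "flag view: ",
    join( ', ', grep { $flag_story{$_} } qw(needs_reqs in_dev in_cust_test) ),
    "\n";

The pipeline view can only answer "where is the story?"; the flag view can say "it needs requirements work and is in development at the same time", which is the bounce-back case above.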

Choices ...

Imagine, for a moment, a retirement home crammed full of people. They are told where to live, what to eat, when to eat ... they are not even allowed to leave the building. If they tried, a monotonous-looking nurse in white would jump up and say "Oh, no, I'm sorry, you can't leave --- we do have a picnic scheduled for a week from Thursday, from 11:00-1:00, if it doesn't rain."

Even the terminology is clinical - doctors, nurses, and patients.

Under these conditions, people simply lose the will to live. Organs simply stop working, and healthy people become unhealthy. (Living in a cramped environment that bacteria love doesn't help much, either.)

There are a few doctors working on alternative systems of care, but one common theme is choice. Build communities of houses, with six people or so to a house, and let our senior citizens walk from place to place as they please. Let them cook their own meals and decide for themselves what and when to eat, and include vegetables from their own garden if they want to. The theme is to restore dignity and choice, thus restoring meaning in life - thus restoring life.

Now think for a moment about the typical Dilbert-like office environment. HR tells you where to sit and when to sit there. Appeals to management to change the environment are typically met with sighs of hopelessness - even though the furniture is a one-time four-figure expense, and staff costs are in the upper five figures annually.

Methodology standards tell us what documents to produce, and when to produce them. Requirements and specifications tell us exactly what to build, eliminating the chance that a do-er would implement anything else - even if he had a technical innovation that would take 30 minutes to code and could save millions. Architecture standards tell us what technologies to use; coding standards tell us how to code with those technologies. Thanks to SAS 70 and Sarbanes-Oxley, auditing standards tell us all kinds of things about how we should do our job.

At one point, I gave this rant to a peer in one organization, and said "It's getting to the point where all you get to do is decide whether to use a FOR loop or a WHILE loop."

He replied, "Yeah, we really should document the pros and cons of a FOR loop versus a WHILE loop, so we can create a repeatable way to decide which to use."

I'm not kidding. He wanted the process to determine which kind of loop to use.

Now, you have to have an HR department, and standards can be very helpful, but when they rob us of choice, something dies inside. If you've ever seen someone burned out and sleeping through a career, you know what I mean - it's not just a waste of a salary, it's a waste of a life.

I am not advocating that we go charge the HR department, or that we throw our carefully considered policies out the window. Instead, I am suggesting reframing those "rules" as guides, and living with a little bit of uncertainty.

Your mileage may vary, but I've been doing that for my entire career, and I seem to be doing all right ...

Thursday, January 11, 2007

Almost posted this ...

I almost posted this to my SW-IMPROVE discussion list, but it's a little bit too sarcastic. I figured this audience could appreciate it a bit more for what it is, as a joke ...

Some_One Wrote:
Engineering takes place when you leave behind artifacts that communicate your design decisions and intent at a higher level than source code. Engineering includes scaffolding and tooling and jigs: items that you use during construction, i.e. unit tests.


And I almost replied:
Joe, let's go back to physics for a minute. "Artifacts" add to mass. If it takes energy to move, more mass means that it's more expensive to move. When you have big changes after the initial design (and you always do), you either have to go update the artifacts to make them current (expensive), or leave them be (potentially dangerous when the new hire trusts the "design documentation.")

No, I'm not saying no documentation. I'm just advising documentation that is small, quick, and light. If you look at what I've been checking in for the past six months, for example, you see a lot more recorded .wav files and quicktime movies describing what I am about to build than word docs. I also write automated tests, which are an executable spec, and I agree with you on scaffolding.

To paraphrase Kent Beck, the few artifacts that I do keep are small, light, and valuable.

Now, please, keep in mind, I'm not against artifacts. I'm not against a single thing that you said, actually - I'm just concerned that an inexperienced developer (or manager) will read a paragraph like that and miss all the little semantic and contextual clues that come from being a skilled craftsman, and say something like:

"For software engineering to be a true engineering discipline, we need to generate artifacts ... "

blah blah blah. Gag me with a spoon.

And now, If you'll excuse me ... I gotta go code ...

Against Systems - II

I've been complaining that people like defined systems - even when they stink - because they are easy. In fact, many people don't even see the idea of craft in software testing, are completely oblivious to it, and view software testing as some clerical role, like counting inventory in a grocery store.

So, what I'm really saying is that many people are oblivious to The Good and I don't understand why. Example: You can require a template to be filled out as part of the process, but can you require it to be filled out well? How would you do that? A great deal of the process improvement literature is oblivious to the difference between performing a task and performing it well - oblivious to the good - to craftsmanship.

I can't take credit for the idea; Plato wrote about it in book VII of The Republic. The story is called "Plato's Allegory of the Cave", and you can read it here.

I made two attempts to combine this with "Against Systems", but can't come up with anything quickly. I'll let you think on it, (hopefully) comment on it, and perhaps I'll have something next week ...

Or maybe Blue Man Group. I dunno ...

Wednesday, January 10, 2007

Measuring Project Health

I've been known to say that explaining a single idea so well that it can't be misunderstood takes me about 1,000 words.

Well, yesterday, on the context-driven testing Yahoo group, someone asked for sample software metrics, and, short on time, I whipped off a quick three-or-four-sentence reply.

And, of course, I wrote something that may apply some of the time, under some circumstances, that was easily misunderstood - and the conversation quickly degenerated.

My Bad. Really, I should have known better. The original poster replied again, asking for ways to measure project health.

Ok, I'll take the bait again, but it's going to be on my blog, it's going to be an essay, and this time, I'm going to write the entire thousand words.


Measuring Project Health

There are all kinds of measures - both formal and informal. I am leery of formal, single-dimensional measures, as they are too easy to game, and last month I wrote "Against Systems" to try to explain why.

So, while there may be metrics like "estimate-to-completion" ratio for project health, you can also gather information about a project informally.

For example, a lot of my work has been with commercial teams in matrixed organizations. So a developer is a member of two "teams" - both the project team and the line team of software developers. When the project is under stress, certain things start to happen:

1) The developer's desk starts to get messy

2) The developer stops making commitments, and changes his language to passive-aggressive "I guess ... if you say so"

3) Personal responsibility disappears from common use. People start talking about "roles" and "The Process."

4) The developer stops looking you in the eye when giving status

5) When chosen by the team, tasks stop being measurable. "Put Foo into production" becomes "Finish researching BAR"

6) Status becomes ... hard to figure out. Sample status reports say things like "I got stuck and had to spend an extra seven hours on foo today, but I guess we are ok"

7) Teams start fighting each other by discipline. Us-vs.-them starts to happen between developers and testers

8) "Hot Potato" starts to happen on project tasks

9) Builds start to go "over the wall" that are completely hopeless, where the install doesn't even work

10) People start talking in an odd tone of voice. When you listen real carefully, you start to pick up on conversations you aren't intended to hear. And they are not good.

11) People stop talking in meetings

12) Fear - be it of layoffs or whatever - enters the workplace


... I could go on and on. The point is that communication stops happening and people get defensive under stress. So, when those things start to happen consistently to the entire team, there is a correlation. That does not imply causation. It's possible, for example, that the stock market just crashed or the most popular person on the team is going through a divorce or was just diagnosed with throat cancer.

Then again, in all three of those cases, the team is going to suffer a decrease in productivity, and, when you think about it, the team is un-healthy. If the team is un-healthy, the project is probably in even worse shape.

In my experience, this informal method, often called "Management by Walking Around and Listening"(*), can be much more effective than any formal, Gantt-chart approach to measuring project health. (Most of my experience is in small project teams of under forty people, often within a larger parent organization.)

Conclusions
Overall, yes, you can have formal approaches to measure project health. Jerry Weinberg, for example, has an excellent book on measurement titled Quality Software Management Volume II: First-Order Measurement. (QSM II) I even recommend the book - but wait ...

Before getting to QSM II, I first recommend three other books that describe informal systems for measurement and management - Peopleware, Becoming a Technical Leader, and QSM I: Systems Thinking.

Once you've read those books, and given it some good thought, how do you measure your projects? And how should you respond once you have the measurements? Ultimately, like everything else, that's still up to you.

Good Luck.


FootNotes
(*) - For more information about gathering information in a formal vs informal way, Rands in Repose goes so far as to describe the two techniques as two different personality types, and suggests ways of dealing with each one here.

Tuesday, January 09, 2007

Craft Appreciation

An article on Art Appreciation that is really about Craft Appreciation and skill building - from Paul Graham:


What counts as a trick? Roughly, it's something done with contempt for the audience. For example, the guys designing Ferraris in the 1950s were probably designing cars that they themselves admired. Whereas I suspect over at General Motors the marketing people are telling the designers, "Most people who buy SUVs do it to seem manly, not to drive off-road. So don't worry about the suspension; just make that sucker as big and tough-looking as you can."

...

It's not for the people who talk about art that I want to free the idea of good art, but for those who make it. Right now, ambitious kids going to art school run smack into a brick wall. They arrive hoping one day to be as good as the famous artists they've seen in books, and the first thing they learn is that the concept of good has been retired. Instead everyone is just supposed to explore their own personal vision.

...

In fact, one of the reasons artists in fifteenth century Florence made such great things was that they believed you could make great things. They were intensely competitive and were always trying to outdo one another, like mathematicians or physicists today—maybe like anyone who has ever done anything really well.

- http://www.paulgraham.com/goodart.html

It's the same thing in some areas of software testing - good has been retired. Instead, everyone is supposed to become stable, predictable, and repeatable ...

Monday, January 08, 2007

The Age of Reason

I just finished reading Measuring The World, a wonderful historical novel about the world in the 18th and early 19th centuries. In many ways, the book explores the death of Greek mythology and folklore and the beginning of science. Aristotle's earth/air/water/fire theory is replaced by the periodic table of the elements, Carl Friedrich Gauss works on astronomy, and Alexander von Humboldt destroys the idea that the earth is cold at its core. The age of reason begins.

We are at a similar age in the world of software development. After being told for years by the gurus how we "should" do things, people are finally willing to challenge tradition and experiment - especially in the world of software testing.

To be charitable, let us say that I have a healthy skepticism of some of those gurus, and the lingo, maturity models, and certifications they often support. Instead of joining other causes, I am just about as likely to found my own - such as the Grand Rapids Perl Mongers, the Great Lakes Software Excellence Conference (GLSEC), or SW-IMPROVE.

When I do participate in groups, I tend to come off as an outsider. Perhaps it is personality; I'm not sure. I do note when a group has more than its fair share of bright minds, and tend to pay attention.

You'll note that my blogroll, at right, isn't about companies, it is about people - people I have personally met who impressed me. A disproportionate number of them are active, vocal members of the Association for Software Testing.

When I found out that the board of the AST was, more or less, the same people, I took a little more notice.

... and, last night, I sent in my $50.00 and joined. Not on the accomplishments of the group (it doesn't have any yet), not on the prestige of joining (uh, I paid $50, so what?), not on the benefits (no magazine, no discounts, no impressive certificates), but solely on my esteem for its leaders.

As I said before, we are just on the cusp of an age of exploration in software testing. I really don't care about the name of the union I join.

I care about the quality of the people who are in my boat.

Thursday, January 04, 2007

Certifiable - II

Another post, this one to the agile-management list ...

Ron Jeffries Wrote:
A leadership certification, one would imagine, would perforce be far less specific and far more general, but just as likely to be interpreted as indicating that the certificate holder is qualified, surely more qualified than her colleagues around her who do not hold the certificate, to be chosen to "be a leader".

Yes. It's very hard for a certificate program to avoid becoming a Confidence Game in Software Engineering.

Not that it's impossible, but it is hard...

The guy at the front of the room, the guy who is the leader in the field - he is out *INVENTING* a new BOK (body of knowledge), not studying someone else's. I want to be that guy.

If you would like to assess me, read what I write, or talk to me about it:

http://www.xndev.com/articles/articles.htm


I realize that some people do not want to be that guy; some people don't like to write or speak - they want to do the same thing, over and over again, and do it well.

But if that's the case, is a certificate in "Agile Leadership" really the right fit?

Certifiable?

The Agile Project Leadership Network is working on a certificate program in Agile Leadership.

This has been discussed a bit on the Agile Leadership Yahoo group. I just emailed a reply and thought I would share it here ..

Ron "The Don" Jeffries Wrote:
>But rightly so. My lack of comments on the topic should be read as
>coming from a similar frame of mind, oddly conflated with "if you
>can't say something nice, don't say anything at all".
>

What I don't get is what pushes _everybody_ toward certification.

It's ... odd. It's like ...

1) Form a club (Association, Company, Institute, Take Your Pick)
2) Create a Code of Ethics
3) Run a Conference
4) Create a certification program
5) ?
6) Profit!

It doesn't matter if you are scrum or agile or a software tester, everyone seems to push in this direction. Aside from step 6, which, I must admit, sounds nice, I don't get it.

I went to a seminar on Perl certification at the O'Reilly Open Source Convention in 2003. The moderator was Tim Maher; you can read his opinion here: http://minimalperl.com/consultix/publications/perlcert_article.html

His main talking point seemed to be that certification allows the employer to reduce the risk of a bad hire -- if you hire a certified Java guy, you know you are going to get a certain level of competence.

This seems, to me, to be like outsourcing the function of discernment.

If your managers of software development don't have discernment, well ... you've got a problem. Maybe instead of a crutch, you should be working on that?

Now, a certificate in Agile Leadership, used for the same purposes ... on its face, that concerns me, but I just don't have enough information to tell at this point.

The ALPN feedback cycle is now apparently private; does anyone have any contact information, so I can find out more without muddying the waters?

--heusser

Interesting Presentation

Elisabeth Hendrickson just put up a reminder of her 2002 talk "Why Are My Pants On Fire?" - it is a quick but interesting read - here.

Please, go read the talk, then come back. I've got some spoilers, and I want you to develop your own opinion of the talk before I share them.

Again, the presentation is here.

Done? Ok.


Now Scroll Down.



Further ...




Further ...



Matt's Commentary Follows
I am of a divided mind about this talk. First of all, all we have is the PDF, so we don't know what she actually said. Elisabeth doesn't just read PowerPoints; she tells compelling stories. So, reading the PDF is only getting part of the story. That said ...

On the one hand, I think it's a list of blindingly obvious things that you learn in an intro to quality 101 class. "Don't go into panic mode", "Don't reward people who do a crappy job and work late to make up for it", and so on. We all know this is true. Du-uhhh. Thanks, Lizzy, really.

THEN AGAIN ...

Very few companies actually _act_ like this is true. That is to say, we know it, but we don't do it.

In the past four years, I've known multiple companies that said "We gotta get us some of this quality thing like them Toyota Guys", so they hire a VP of Corporate Quality. Then he makes them mad by telling the company to change, it doesn't change, and ... six months later, the VP is gone. Maybe he left, was fired, was asked to leave - it doesn't really matter much; he isn't there any more, and the company is still in crisis mode.

I have heard it said that the best moral teachers don't tell us anything really new; instead they remind us of the truth that is written on our heart.

So obvious or not, more people need to be giving talks like Elisabeth's; it is making us a healthier industry. What's more, her talk was risky, gutsy, and hard to give - it would have been much easier to give some talk on "Five Core Metrics" or Business Intelligence or pick-your-buzzword.

I think she deserves a good bit of credit.

Finally, there is a risk in the PDF that she probably addresses in the talk, but we don't have the audio. When people say "Don't reward heroics, reward stability", they often lose track of the reality that many of the hardest, most interesting, and most profitable problems are unstable.

There are a number of math problems, for example, that are hundreds of years old and will generate millions of dollars when they are solved. Around the world, math profs are slaving away at them. Ask a prof when he is going to solve an interesting problem like that, and he'll look at you funny.

I think having heroes is fine; you just want them to be reliable instead of martyrs. Don't ask your heroes to work 80-hour weeks to take insane projects that are not well thought out and hit an unreasonable date. Don't ask them to solve organizational problems that are your responsibility. Instead, ask your heroes to reliably solve problems that normal people cannot solve, in a way that doesn't beat them down or burn them out. (I suppose this needs more thought; more to come.)

Decline and Fall - III

I like the way my colleague, Sean, summed it up yesterday. To paraphrase what he said:

Asking "Ok, but how will the decline and fall help me test this web-based app?" is probably the wrong question. It might not help you develop test cases or scenarios. What it will do is help you figure out that the app is completely unnecessary, redundant, the wrong solution ... or maybe that no process is in place to update the database after it's done.

It's more holistic in nature than testing as "breaking an application"; it helps sharpen you to do critical inquiry and investigation into anything.


... And I think that's a wrap. Back to requirements ...

Wednesday, January 03, 2007

Requirements Methodology

If you listen to James Bach enough, you will eventually hear the idea that every practitioner should be able to articulate his strategy and methodology - preferably quickly - and defend its reasoning. Bach is generally talking about testing, but I'd like to talk about my philosophy on requirements for a bit.

Now, I view software development as a craft, so I don't think it is possible to create a "three steps to success" or "requirements in 15 minutes" book or blog post. If you want that, Google for it, you can find it, but I doubt it will be very good. Instead, I will try to create an overall theory on requirements - a work in progress - that should provide the principles and tools to help someone improve his or her skills.

I don't want to tell you what to do; I want to provide tools to help people figure it out for themselves.

Without further ado, here's my elevator speech on requirements:

1) Requirements are hard
2) Requirements are like Othello
3) The goal of requirements (the why) is to create a shared understanding of what will be done by decreasing ambiguity
4) The requirements work isn't done until the project is done, but the requirements phase can end when both:
A) The team agrees that what they have now is good enough to start real work
B) The ambiguity ratio is a price the customer is willing to pay. (In other words, the development staff thinks they can deliver the project in somewhere between three and six months, and the customer is willing to deal with that uncertainty. If not, then there is more work to do. See the sketch after this list.)
5) Requirements are not gathered; they evolve
6) To improve requirements, you need feedback. Faster feedback removes ambiguity more quickly, making requirements get "done" faster.
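As an aside on 4(B): I read "ambiguity ratio" as simply the high estimate divided by the low estimate - my shorthand, not a formal definition. A minimal sketch in Perl:

#!/usr/bin/perl
use strict;
use warnings;

# The three-to-six-month spread from 4(B) above.
my $low_estimate_months  = 3;
my $high_estimate_months = 6;

# Ambiguity ratio, taken here as high estimate over low estimate.
my $ambiguity_ratio = $high_estimate_months / $low_estimate_months;
printf "ambiguity ratio: %.1f-to-1\n", $ambiguity_ratio;

A 2.0-to-1 spread may be fine for one customer and unacceptable to another; the point is that the customer, not the team, decides when that price is low enough.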


A few consequences of these ideas ...

1&2 - These say that requirements are a craft that can be honed through repitition - just like skiing or golf. Sure, you can watch a golf instructional video or read a book, but to get good, you have to play golf - a lot. One thing that I think can help is simulations and learning exercises.

3 - If you write a complete, comprehensive, correct requirements doc and throw it in a drawer without team buy-in, you have just wasted an incredible amount of time, and probably killed a tree or two.

4 - Some people expect requirements to be perfect. Others expect them to be vague and fluffy. Some use them to show the customer "look, we did what you said" afterward. Some use them to know what to do during the project - what to develop, what to test. Different people have different expectations, desires, and uses for requirements. Ultimately, the customer has to decide if the requirements are 'good enough'. (Still working on this one; more to come.)

5 - Fred Brooks wrote thirty years ago that we should "Plan to throw one away; you will anyway." What he was trying to do was recommend that we use prototypes to gather feedback into a developing system; otherwise you risk throwing the entire system away. If the requirements are developed in a big-bang, no-feedback, up-front way, it is just as likely that the eventual system is acceptable, but the thing you have to throw away is the requirements document - and that is a formula for all kinds of problems.

5 Again, and 6 - This makes me predisposed toward getting feedback quickly, using fast requirements techniques like the planning game in extreme programming. Also, the people who will be doing the work need to be involved in the requirements process. (More about that later, perhaps.)

PostScript I: Jerry Weinberg has an interesting book on requirements. It doesn't give pat answers, but instead explores the general concept, helping to develop a craft approach. Mostly, he provides suggestions to experiment with. It's thick and a bit tougher than his usual stuff, but I found it well worth the investment of my time.

PostScript II: I will be giving a short talk on this on March 13th, from 11:00 AM to 12:00 PM, at Ferris State University in Big Rapids, Michigan, to the local student chapter of the Association of IT Professionals. If you would like to attend, drop me a line.

Connecting Gibbons and Systems Thinking

I was rather hoping that someone would respond to yesterday's post with:

Yes, Matt, that's all well and good, but you still didn't answer Harry's Question - what does the Decline and Fall of the Roman Empire have to do with software testing?

No such luck, but I'm going to forge ahead anyway.

But first, I'd like to discuss general systems thinking. At the Agile Conference this year, one of the speakers, Peter Coffee, discussed the book Systemantics. It was an excellent keynote; you can download the audio here.

The author of Systemantics (yes, it's a play on words) proposes a few simple rules:

1) Complex Systems that are designed (by humans) in a big-bang manner don't work.
2) Any sufficiently complex system that does work evolved.
3) Complex systems that grow by evolution are incredibly inefficient. When you find a working system, it is only working because some sub-section is compensating for another system that is, in fact, failing.


When you stop and think about it, this makes incredible sense. Why do humans have two eyes, two ears, two lungs, and two kidneys? In case any one system should fail, it won't kill you. So you need redundancy.

The space shuttle, when it was originally proposed, was supposed to be this incredibly re-usable launch vehicle that could do a mission a week. Yet we have five of them and do a couple of missions a year - because the thing was developed big-bang, following the capsule system, and it's error-prone.

Why is it that in any software development shop, some department (dev, test, analysis, PM) is always failing and some other department has to take up the slack? Because that's what happens in large systems that evolve - like a business.

The IRS tax code is entirely messed up, but it seems to mostly work well enough, enough of the time. It evolved organically. Marx's communism, as espoused by Lenin and Stalin, sought to do a total redistribution of wealth in a big-bang fashion, and it failed. (Then again, Marx, Lenin, and Stalin had other problems.)


What does this have to do with Gibbon's work, The Decline and Fall of the Roman Empire?

Gibbon gives insight and examples about one particular system - an entire culture - and how it fell apart.

In the past half year, when I find a system that is overly complex on its face - a system that simply cannot work because it is too confusing - I refer to the system as "Byzantine." "Byzantine" is actually a reference to the Byzantine Empire, what was left of the Roman Empire after Rome fell. The Byzantine Empire was complex and had an incredible number of government roles that provided a salary with no actual work. Just figuring out who had the authority to grant you what you wanted probably involved a number of meetings, bribes, and letters.

When I call something "Byzantine", it usually means that it should be scrapped entirely and started over. I got that from Gibbon, and I would like to close by quoting Weinberg's book on requirements:

Many psychological experiments have demonstrated that meaningful information is easier to recall than meaningless information. For instance, computer programmers can find errors in code by trying to memorize code sections and then recall them from memory. The parts that are hardest to remember are those that don't make sense, and are in fact more likely to contain errors. The same is true of parts of requirements documents. - Pg. 92

... And the same is true of human systems. If you want a tool to help you find "bad smells" in code, try Martin Fowler's "Refactoring." If you want tools to help find "bad smells" in systems, you can check out The Decline and Fall or the latest version of John Gall's Systemantics.

If a big part of testing is applied critical thinking, then I think finding "bad smells" counts.

Tuesday, January 02, 2007

A few of my favorite things - IV

As a thought experiment, I would like to describe a nation at a point in time. I'll let you figure out the nation and the time ...

-> The country was once, uncontested, the most powerful in the world
-> ... But it became embroiled in foreign wars
-> Became dependent on foreign military assistance
-> Created an oppressive tax structure
-> Increased the rift between the haves and the have-nots
-> Moved from a manufacturing economy to an entertainment economy
-> Saw a decline in family values and traditional morality
-> Saw a decline in math and science education


No, I am not talking about the United States of America in the 20th century - I am talking about the Roman Empire in the 1st, 2nd, 3rd, and 4th centuries.

Actually, if you enjoy little compare/contrast exercises from history, you might enjoy Gibbon's "Decline and Fall of the Roman Empire." Although the wording is long-winded and archaic (it was printed in 1788), the information is very insightful.

If you want the dessert without slogging through thousands of pages, there is an abridged version of Gibbon on Amazon that you can read on a plane ride.

Why is Decline and Fall one of my favorite tools? That's a tough question. A few months back, when I was creating this list and sharing it with a few close associates, Harry Robinson went so far as to challenge me, writing:

A question: are those really your favorite testing tools, or do they mainly make for quirky talk material (especially the Gibbons)?

I suppose that is a reasonable concern. There are plenty of testing "experts" who are all flash and little substance. Then again, what do tools do, if not enable you to do better work in less time? So I suppose a valid question would be: having read the Decline and Fall, what would you do differently?

Start with me worried about the state of my nation, and its future. Decline and Fall paints one way that the fall could happen. The decline in math and science education is especially worrisome to me. Yet when I look at the formula for our educational system - a lot of low-to-middling-paying jobs that provide considerable job security, high student-to-teacher ratios, heavyweight teaching methods established by a federal bureaucracy, "objective" standardized tests that teach facts and not evaluation - I suppose I shouldn't be surprised.

I don't see an easy answer. Next year, my elder daughter will be kindergarten age, and she will probably do that at home, which will probably continue throughout elementary school.

Home schooling? Really? Five years ago, that was not even on our radar. Thank you, Edward Gibbon.

Monday, January 01, 2007

Run Corporate IT like a software company

I'm still on vacation, but this caught my eye as worth sharing ...

... corporate IT has to find ways to deliver the most important business capabilities as quickly as possible and as cheaply as possible. When the rubber hits the road, this capability is the only one that matters. Your business partners won’t ask if the back-office design is scalable. They won’t care if the requirements are thoroughly documented or the test traceability matrix is complete. When they compare your IT department to a pack of starving programmers in China, their decision is going to boil down to a simple question: who can deliver the things we need the most the fastest?

--- From the Tech Dark Side: http://www.techdarkside.com/?p=61