Schedule and Events

March 26-29, 2012, Software Test Professionals Conference, New Orleans
July, 14-15, 2012 - Test Coach Camp, San Jose, California
July, 16-18, 2012 - Conference for the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At Liberty; available. Contact me by email:

Wednesday, December 31, 2008

"Correct" Requirements

Warning: This is one of my, how shall I say it - more "opinionated" posts. It is geared toward a specific audience; I posted it as a reply on the Agile-Testing Discussion list.

Concepts like "right" and "wrong" requirements "before design can begin" assume that:

(1) The customer can speak with one voice,
(2) The customer can know what he wants without first seeing an example,
(3) The customer never changes his mind,
(4) The market never changes: The "right" requirements last week, last month, or next year are all the same.
(5) Design and Requirements are two separate and distinct processes that do not feed each other. That is to say, it is not possible, or at least not desirable, to innovate on the specification with interesting design ideas. (For example: "The spec says do A,B,C,D which might take a year. But just A,B,C we could do in two months. Can we do just A,B,C, deliver it, and see if we need D at all?")

Now let me ask: for any given project you are working on, are statements one through five actually true?

Monday, December 29, 2008

My Agile 2009 Proposals

Bit by bit, the Agile movement grew until it was massive. So now you have people - thousands of them - eager to learn how to develop software the 'agile' way.

And testing is part of that.

I can either choose to be a part of that discussion, or, well ... not. So I am wading in, proposing several sessions at the Agile 2009 conference in Chicago. That process is interesting because it is iterative - you propose sessions, get feedback, and have an opportunity to revise your proposals before they are accepted or rejected.

So I'd like to tell you a bit about my proposals. The abstracts below include a link to more material. To follow a link, you'll need to create a free account at the website. Once you have an account and have logged in, you can follow the links to read detailed outlines for each talk, and comment on or review talks if you desire.

I would even go so far as to say that if you want a quick primer on the state of agile, you might just want to peruse the proposals.

Next Generation Test Workshop
Level: Expert
Stage: The Agile Frontier

Pre-defined acceptance tests, TDD, Mocks, BDD, ATDD … which is right? Matt Heusser argues that testing should be a strategy used to meet the needs of your business at this point in time, and that that strategy needs to evolve over time. This workshop will combine experiences to discuss which testing approaches are actually working in the field, and why, and, we hope, create advice for teams adapting to agile testing … or trying to take things to the next level.

Adding Exploratory Testing to your agile process

Level: Intermediate
Stage: Testing

What is exploratory testing, and how do I fit it into my Agile Process? Matt Heusser introduces exploratory testing as a discipline that can complement and extend other forms of testing. He’ll discuss what exploratory testing is, the problems it solves, the kind of bugs it is good at finding, and how it might fit into a portfolio of test strategies. Students will leave with a variety of exercises and concepts they can use to explain exploratory testing to others, and sharpen each other's skills.

How do I do this 'Agile Testing' Thing?
Level: Introductory
Stage: New to Agile

Agile methods view testing as something that can happen continuously, throughout a project, often before coding even begins. In addition, the Agile concept of testing is a much larger tent, as the benefits spill out to include better requirements, better communication on the team, and design benefits for the technologists. This means that more groups want to get different things from software testing. Matt Heusser presents one way to do it in practice, based on his experiences at Socialtext, Priority Health, Open Source, and community involvement.

Technical Debt: Beyond Cliches
Level: Practicing
Stage: Coaching

Like any addiction cycle, Technical Debt is hard to break because it provides absolute and certain benefits today in exchange for deferred and uncertain future pain. So how can we prevent, or stop, the merry-go-round of technical debt?

To answer this question, Matt Heusser organized the Workshop on Technical Debt, which was funded by the Agile Alliance and ran in the summer of 2008. This presentation will cover some lessons learned from the workshop, combined with Matt’s own personal perspectives and experiences on the issue.

So there you have it. I am vaguely considering a proposal on massively distributed agile teams, perhaps for the 'frontier' stage.

I am interested in your feedback, both there and here. I believe the information I have to present could complement the current agile literature well - and that we could possibly move things forward a bit with the workshop - and, yes, that I could learn a thing or two.

Hope, after all, springs eternal.

More to come.

Monday, December 22, 2008

New Testing Challenge - V

(The next installment in the New Test Challenge Series)

My next steps on the testing challenge depend on the reaction I get to the first attempt.

Let's say I get failures across the board. Massive failures, and the manager is overwhelmed. He meets with me and feels genuinely chagrined that this could happen.

Well, I'd start working on a recommendation for a training plan for management, which would cascade down to the cashiers. I'd also suggest a program of check-behinds or spot inspections, so we could find defects earlier and create a positive feedback loop. I'd recommend a re-inspection after the team had time to actually correct its mistakes.

If I had lots of defects and an obstinate, blocking store manager, well, I wouldn't try to inflict help. I would prepare a report to management, which I would deliver personally with a story, and /try/ to give them feedback that was as positive as possible. I would also try to work with the manager instead of against him.

What if there are only a few mistakes - or none? Then we could use the time allotted to do some of the complex tests that people like Jay Phillips have recommended - buying non-alcoholic beer while under age, or checking for items that have no tag - maybe putting a clearly wrong tag on an item to see if I'm caught, buying four times at the "five for a dollar price", and so on.

That is to say, once I'm sure that the basic business rules are in place, I would check the implicit ones that are rarely defined.

But I'm never really sure those rules are in place - my first scan was only a few random times. If I had the money, we could do more sweeps - for example, off-hours (when the part-timers are on shift), or when the lead cashiers go on break and the baggers run the register.

Besides testing, I would be interested in the training and reinforcement mechanisms that were in place, and I would also be interested in headquarters' margin for error - to help determine if I believe the /process/ was capable.

Remember, this is the 1980's - back when it was the cashier that needed to know the rules, not the scanning software. It may be, for example, that the process is capable, but Weiss's simply does not pay enough for the kind of quality talent it needs to remember and enforce those rules. Oh, we can create cheat sheets and reinforcements, but a pay scale that encourages retention might go pretty far.

I think that about wraps up the challenge for me - but if you have questions, leave comments, we can keep iterating on this forever.

What's Next?
Besides the occasional request for comments, I try not to ask for too much too often from my blog readers. If you enjoy Creative Chaos and think it has value for a larger audience, please consider registering as a reviewer for the Agile2009 Conference and reviewing my proposals. (To review, you'll have to create a free account and search for Heusser.)

If you're looking for some cutting edge ideas on the cutting room floor, reviewing Agile 2009 proposals is one way to see them. You see, the Agile 2009 submission process is entirely open. In theory, submitters get feedback they can use to improve the submission for the month of January - then, based on voting, the submissions are accepted or not.

Creative Chaos readers are a part of the "in theory." At least, I hope so.



Thursday, December 18, 2008

What has Matt been up to?

Well, a lot of things, but blogging hasn't been among them. Here's what's up:

The December issue of Software Test&Performance Magazine is out. You can download the issue here; our column, which is on test automation, begins on page 7.

I've joined the software-craftsmen Google group. I believe the Craftsperson movement has two great potentials: One, to improve the training of new developers, and two, to create social systems that could lead to better software and less technical debt over time.

I'm starting to work on proposals for Agile 2009 and have them in rough draft form. If you'd like to review them before I submit, please drop me an email: I'd love to hear from you.

This week, O'Reilly approved the Beautiful Testing book project, which Adam Goucher asked me to contribute a chapter to. No, I will get no royalties; all royalties will be donated to purchase mosquito nets for rural Africa. And, no, I don't have a contract yet. Keep your fingers crossed.

Finally, and most importantly, I am working my tail off for Socialtext, who just released a new, free project to create social networks for people recently laid off.

So there you have it. A free magazine as a Christmas present, a request for peer review, a potential book project for a good cause, and lots and lots of software testing.

Want to get involved in any of it? Drop me a line. We can always use reviewers for the March Issue. And April, and May ...

Thursday, December 11, 2008

SideBar - II

The test challenge is my attempt to explain that skill, practice, and critical thinking - the things no defined process can create - matter in software development - in the same way they matter in Golf, Music, or Art.

That good testing, and good thinking, can be learned. I've only been doing this kind of "personal process improvement" for the past ten or eleven years - and talking about it publicly for even less.

When I want to talk about the joy I get from doing a project well, words fail me. Oh, I can try. This blog is filled with the attempt. And when people come along and say it better, I feel obligated to point it out.

This morning, my colleague Chris McMahon put out a twitter link to a video about skills improvement - and attitude - taken from Music, that I thought was absolutely wonderful. Here's the link.

When we talk about focusing, practicing, reflecting, and experimenting in something like Golf or Music, the ideas have common acceptance. Let's do the same with software.

More test challenge to come.

Tuesday, December 09, 2008

New Testing Challenge - IV

So let's see here. We have a simple test project on its face. Management hired us to figure out if the store is following a set of rules.

I hope you agree this is a real, critical thinking test problem. If we substitute "rules" for "requirements", we find this is very similar to a real test problem.

First of all, just because one cashier follows the rule is no /proof/ that the cashier will do so on the next transaction. Nor that the cashier next to her will do so.

For that matter, a typical grocery store has thousands of items. So finding out that ten or fifteen ring up at the correct price doesn't give us proof that the rest will be correct.

The best we can do is to say that the process is capable of success - a term I borrowed from auditors a few years ago.

So what would I do? First of all, I'd walk into the store and look around. Yes, look around. Are the floors clean? Are the items stocked correctly? Are the lowest-paid workers, the baggers, wearing clean clothes with shirts that are tucked in? When the lines are down, do they go bring in carts from the parking lot, or do they sit around?

Then I'd go get my three cousins, Joe, who's 20 and looks 23, Billy, who's 21 and looks 25, and Sally, who's 22 and looks 18. Together, we would:

(A) Have Joe ring up a modest set of snack food and "forget" his ID; see if he can buy beer. Have him use a different cashier, give his ID, and see if the cashier notices he just turned 20 last week and is 20, not 21. Try to argue with her about the math if she says 'no.' (Also, check the snack food for pricing.)

(B) Have Sally try the same trick with cigarettes.

(C) Buy a set of regular groceries, including items on sale, one from each major department. Carry a calculator with me, and write down what I bought and the correct amount. Also buy two each of the items that should not be taxed (staples). Go to the checkout counter, and see if the correct amount is subtotaled, correct*0.05 is charged for tax, and sub+tax = total.

(D) Have the cousins try (C) on different shifts with different cashiers.

(E) The cousins and I would each try to buy beer at 6:10 AM, noon, 9:50 PM, and 10:10 PM, and several times on Sunday. We'd all try to buy non-food items on Sunday.

(F) Clarify what tax should be applied on things that cost $0.19 (I think nothing) and $0.89 (I think four cents). Buy 100 items at $0.19 each, and see whether tax is computed per item (likely nothing) or on the $19.00 subtotal (about 95 cents).

(G) I'd try to buy cigarettes and beer; back then I appeared to be 45 years old. (Yes, I got younger. Hey, it's my story.)
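Item (F) is really a question about rounding policy: does the register compute tax per item, or once on the subtotal? Here's a quick sketch of the difference, using integer cents and assuming fractions of a cent are simply dropped - one plausible policy; finding out what Weiss's register actually does is exactly the point of the clarifying question:

```python
TAX_RATE_PCT = 5  # Maryland sales tax, per the challenge rules

def per_item_tax(price_cents: int, quantity: int) -> int:
    """Tax each item separately, dropping fractions of a cent each time."""
    return quantity * (price_cents * TAX_RATE_PCT // 100)

def subtotal_tax(price_cents: int, quantity: int) -> int:
    """Tax the whole subtotal, dropping fractions of a cent once."""
    return price_cents * quantity * TAX_RATE_PCT // 100

# 100 items at $0.19: per-item tax is zero (0.95 of a cent drops to nothing),
# but taxed on the $19.00 subtotal, the register should collect 95 cents.
```

Under one policy the basket is tax-free; under the other it owes 95 cents. Either answer could be "correct" - which is why you clarify before you test.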

I'd also try to get an appointment with the manager and see if we could have a friendly chat about the inspection, instead of doing it in a clandestine, us-vs-them way. I'd ask what systems he's put in place to educate and train the workers.

At this point, I'd have three types of feedback:

#1: I'd have looked and paid attention to the store (touring). There may be problems that senior management needs to be aware of outside the initial test. Also, a sloppy environment would tell me to spend more time looking carefully at the business rules themselves, because /some/ people at the store aren't doing their jobs.

#2: I'd have performed a first, sloppy pass at the initial requirements handed to me. If the store passed all those, I'd breathe a sigh of relief and start to wonder about the cashiers we didn't test. If some failed, I'd start looking at what failed to consider what to test on my next run.

#3: My walkthrough or inspection with the manager would give me some idea of what the risks and weaknesses of the store might be - and its strengths.

I would take the results of my initial round of testing and plan the next round. We can talk about that next time.

In the meantime
That was my sloppy first pass, and it's an honest first pass. Oh, perhaps I've given it more thought because I had to in order to /write/ the test, but it's not any kind of perfect, down-from-the-heavens answer.

Now's the part where you come in: You get to tell me what a terrible test it is, and what I've missed. Comments, anyone?

More to come.

Process over People - I

More testing challenge /today/. This sidebar will not continue to interrupt our regular broadcast. That said ...

BY THE WAY: If you've been reading Creative Chaos for long, you know that I struggle with the over-value-ification of prescriptive process. I've published articles on it and blogged on it extensively. Then, every now and again, someone will just completely nail it. My friend David Christiansen just posted "The FSOP Cycle". It's brilliant. Go read it. Now.

About the same time, Paul Graham published "Artists Ship", which covers some of the same ground in an amazing way. These are send-to-your-boss level articles, and impressive ones, at that.

This morning I was twittering with Ben Simo and he pointed out, again, that over-respect for process disrespects people. I do think I have the quick sound bite of why that is:

"The process" says the next step is X. You have a better idea. What does "the process" tell you to do? Do X anyway.

In other words, the process doesn't trust you.

Now, a process expert will tell you that the process is insurance. You pay a cost for a check now and decrease the odds that you'll get burned later. And things like testing and inspections can indeed provide some value. Mandated steps, however, insist that whatever someone who wasn't on this project thought would be a good idea last year, and wrote down, is the best choice right now.

If process is insurance, check the cost of your premiums. It might be better to downgrade from comprehensive coverage to just liability.

Monday, December 08, 2008

Barriers to Agile Adoption

I just posted this to the agile-testing list; I thought it was worth sharing. More test challenge to come!

//Begin Post
I have seen many organizations try to embrace agile, compromise, and fail. (See "Big Agile Up Front".) That's not a failure in Agile; it's something like what /I suspect/ Ron would call "We tried baseball, and it didn't work."

As I see it, several problems mash together here:

(1) It turns out the problem of customizing a methodology introduces a lot of unintended consequences. And no, doing it "by the book" won't eliminate the problem; instead you'll get someone /else's/ intended consequences. Will those match your values? Who knows?

So tailoring knowledge work requires a great deal of abstract thinking, modeling skills, experience, and team buy-in - which is what I try to focus on in my work. Yet few authors/speakers address the space of customization and dealing with change in a meaningful way. Oh, you've got Diana Larsen and Dale Emery and a few others, but I think there is opportunity here for us (as the agile community) to do more.

That sounds like a great proposal for a talk or six at Agile 2009.

I'm just saying ...

(2) Many organizations live in areas that are not next to a world-class CS school, peg salaries at 50% of market average /for the area/, have work that isn't all that interesting, and won't pay for relocation. I'm not sure I have a fix for this; they've designed their own kind of special cocktail. A superb kind of technical leader can /help/ pull an organization like this out of the mire, as can superb individual contributors.

My fix for this is, invariably, to offer telecommuting: then you can hire the world for a fraction of the cost.

(3) North America is locked in the same command-and-control mindset that won us WWII and made Ford a billionaire. It's kind of hard to fault that. The only problem is that way of thinking is about fifty years out of date and can no longer compete globally.

(4) Most American employees work inside a system that rewards certain types of behaviors. When people behave in the way they are rewarded (or have been in the past for years), can we really blame them?

These are just a few of many barriers to agile adoption - much as GM, Ford, and Chrysler have struggled to adopt continuous improvement and anything like the Toyota Production System on the assembly line.

Or, to quote Lee Copeland: "If you measure the wrong thing, and you reward the wrong thing, don't be surprised if you get the wrong thing."

How we deal with that is up to us.

UPDATE 1: Hey, I'll be in Detroit, Jan 14th, speaking at the Great Lakes Software Process Improvement Network on just this subject! You can't beat it with a stick!

UPDATE 2: Personally, I find it easy to trade off my personal process, and I find it pleasurable. Above, when I say "freakishly hard", I mean the cognitive process of taking something like Scrum or XP, adapting it to an organization's context, making compromises to suit the VP of Sales, the VP of operations, and the director of engineering - then trying to roll it out to a large organization. The unintended consequences of those tradeoffs tend to get you. My ideas on alternatives are what I hope to contribute to the panel discussion at GL-SPIN in January.

Friday, December 05, 2008

Sidebar -

Before finishing off the test challenge -

The December issue of Software Test & Performance Magazine is out. The theme is test automation, and our monthly column is on fundamental issues in test automation. It's on page seven; you can download a free PDF here or subscribe here to the print edition.

If you want to see more announcements of this type on this blog, (word on the street is that James Christie has an article in this quarter's testing experience magazine) - let me know through comments. If you don't, let me know.

Next time: Comments on your comments, and how I would approach the testing challenge.

UPDATE: I read James Christie's article. It's good - especially so if you work in a large, bureaucratic organization trying to follow the (*cough*) v-model. (at least, trying to follow it on paper) :-)

Wednesday, December 03, 2008

New Testing Challenge - III

I addressed some clarifying questions yesterday, but I thought of another one: What kind of answer are you looking for?

Most of the answers we've had so far have been comprehensive - "here are all the test cases I would run." Those tend to take a very long time to do, and several people have replied to me with something like "oh, I don't have time."

Well, that's fine. Two other kinds of answers could be short "here's a summary of the areas I'd look into" or immediate "here's what I would do /first/, and I would use feedback to guide my direction."

If you give a short answer, I'm not going to say "aha! You missed this!" - that would be counter to my intentions and not to your benefit.

I'm looking for a wide variety of answers. Summary, Immediate, or Comprehensive are all fine. One more chance ...

UPDATE: Alan Page's "How We Test Software At Microsoft" is supposed to be available by pre-order from Amazon today. If you want something to buy your boss for Christmas, you could get either that or Weinberg's "Perfect Software And Other Illusions of Software Testing", which is in stock right now.

Tuesday, December 02, 2008

New Testing Challenge - II

Have you ever played pac-man? In pac-man, there is a simple, ostensible game: Eat the dots before any of the ghosts get you. However, there is a hidden game - a game-within-the-game - that is - figure out what makes the ghosts change direction. If you can do that, you can 'trick' the ghosts into going the wrong way, and, suddenly, the regular game gets much easier.

Software Development has its own meta-games: Good project and product management can help us define success in more concrete terms, which makes it a lot easier to hit between the goal posts.

Does testing have a meta-game? I think it does, and I think it can be explained, described, and trained on.

Last week I posted a New Testing Challenge. On first blush, it's a simple problem: Make sure that the following list of business rules are enforced at a local grocery store.

It is super-tempting for the reader to dive into problem-solving the surface problem. Why, first off, you test each of the business rules individually. Then you look for interesting boundary conditions, then you look for combinations of rules ... and so on.

And there's nothing really wrong with that - it's the eat-the-dots problem. Sometimes, you'll be presented with a "just test it!" sort of statement in the field, and knowing how to generalize to explicit rules and test those rules (and combinations) is a genuine skill. I would argue that when pushed to "just test it!", a good exploratory tester can usually come up with a dozen quick, easy-to-run test case ideas, to make it look like they are testing, to buy time to figure out the meta-game.

Yes, the Meta-Game. The Meta-Game is figuring out who the customer is, what matters to him, and how much time, effort, and energy he is willing to invest in testing.

Knowing the customer helps you figure out what to test.
It helps you triage test ideas.
It helps you come up with test ideas.
It helps you know when to stop.

So, when presented with something like my testing challenge, a meta-game player will try to engage someone - preferably the customer or product owner - in a conversation, along these lines:

"Who is the customer?"
"What problems are they worried about?"
"What is your expectation of 'good enough'?"
"How much time/money do you want me to invest?"
"Tell me about the project?"

Notice I say a conversation - because the customer may have uninformed expectations about estimates and quality expectations. It'll be a good bit of give and take - probably more than one round of it. I was very pleased to see Michael Bolton as the first responder, but even more pleased that he tried to actively engage me in the meta-game. I'll try to generalize the answers for everyone's benefit below.

The Answers
In the story below, you've been brought in as a consultant by Weiss's management. They have recently had a number of complaints about just one store - the one in Frederick - and gave management a month to clean up the ship before calling you. The complaints are around the laws. One person noticed the store selling beer on a Sunday and cigarettes to minors, another complained because he could not buy beer at 9:30 (something about daylight savings time and a clock), a third complained because he was charged tax on bread. A fourth complained that the store would not honor a sale price listed in the weekly circular.

Management wants to get these complaints resolved before they escalate to the newspaper or worse, the state attorney general's office. You live about six miles from the store, and have about a month to provide a written summary of known, systematic training issues along with recommendations for corrective action.

They expect you to work on this part-time, about 10 hours a week; your total budget is $1,000 US Dollars. (This is about $25/hour, which, in 1982 dollars, is a reasonable consulting rate for a grocery store.) You can also expense mileage.

What to do tomorrow
Ok, I've given what I believe is a good description of the problem. We could have some more give-and-take conversation, but I haven't intentionally held anything back. I've let the avant-garde Michael Bolton types take a swing at this. Go back to Maryland in 1982. You just got out of your car and are headed into Weiss's Market. (Or, you can be at home planning; it's up to you.) What would you do?

If you look at the initial blog post as "requirements" to be "tested", I would say my initial description of the problem was probably at or above industry average. But it did not tell you any of the why that would guide your testing. Many such documents lack this context - and notice that you cannot have a conversation with a document.

The way many organizations do development, it's hard to impossible to play the testing meta-game I've described above. That is just sad. The ghosts'll getcha every time. Or, to put it more plainly: without the meta-game, we are much more likely to find a bunch of bugs that don't matter and get called into the big meeting where the boss asks "Why didn't QA find (the one that did matter)?"

Friday, November 28, 2008

A New Testing Challenge

I've been spending a good deal of time lately thinking about the cognitive process of testing: What makes a good test case? How do we "know" that the software is fit for use? And why is that an area of such little interest in many test communities?

One thing that can help is examples that we can draw rules of thumb from. The typical example is the triangle problem, where we have certain rules for types of triangles, an input for sides, and try to figure out if the software is calculating the correct type of triangle.

The triangle problem is a great example -- as long as you did well in Geometry in high school. I've been in search of other examples that don't require telling a scalene from an isosceles.

That said, you can test anything that consists of rules and examples. So here goes ...

Picture It: Maryland, 1982. You've been hired as an auditor to check to see if Weiss's Grocery Store in Frederick is enforcing the sale laws correctly. You start your Plymouth Horizon and leave Middletown, taking Route 40 east, over Braddock Heights - in the distance you can just barely make out the famous clustered spires of Frederick.

... and then you're turning into the Weiss's parking lot. Before entering the store, you grab your clip-board and review the rules:

Weiss's sells five basic types of products:

A) Staples - Bread, Milk, Cheese, Water, Flour, Sugar, Juice

B) Non-Staple Food such as Soda Pop, Ice Cream, Frozen Food, Candy Bars, Everything else

C) Alcoholic Beverages like Wine and Beer

D) Cigarettes

E) Other items like toys, magazines, pencils, paper, cooking supplies, and so on

The following laws apply:

1) Maryland State Sales Tax of 5% applies to all items except staples
2) Only food items (staples and non-staples) can be sold on Sundays
3) All items must have a white tag that lists the price of the item; that price must be correctly charged by the cashier
4) Listed prices in the weekly 'sale' circular must be displayed with a yellow tag; that price must be correctly charged by the cashier
5) You must be twenty-one years of age to purchase alcoholic beverages; the cashier should card you if you appear to be under thirty-five
6) Alcoholic Beverages cannot be sold between 10PM and 6AM
7) You must be eighteen years of age to purchase cigarettes; the cashier should request identification if you appear to be under thirty

Your job is to ensure that the correct laws and taxes are applied, and items are sold for the correct amount. (If you really want to be testing software, imagine that the rules must be applied through self-service checkout stations.)
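If you do want to picture the rules as self-checkout software, here is a minimal sketch of how the laws above might be encoded. The `Item` structure, category names, and function names are my own invention for illustration - not anything from 1982 Weiss's - and rules 3 and 4 (tag colors and price matching) are left out because they live on the shelf, not in the register:

```python
from dataclasses import dataclass
from datetime import time

TAX_RATE = 0.05  # rule 1: Maryland sales tax

@dataclass
class Item:
    name: str
    price_cents: int
    category: str  # "staple", "food", "alcohol", "cigarettes", or "other"

def tax_cents(item: Item) -> int:
    """Rule 1: 5% tax on everything except staples."""
    if item.category == "staple":
        return 0
    return round(item.price_cents * TAX_RATE)

def may_sell(item: Item, buyer_age: int, now: time, is_sunday: bool) -> bool:
    """Rules 2, 5, 6, and 7: Sunday, age, and time-of-day restrictions."""
    if is_sunday and item.category not in ("staple", "food"):
        return False  # rule 2: only food items on Sundays
    if item.category == "alcohol":
        if buyer_age < 21:
            return False  # rule 5
        if now >= time(22, 0) or now < time(6, 0):
            return False  # rule 6: no alcohol between 10 PM and 6 AM
    if item.category == "cigarettes" and buyer_age < 18:
        return False  # rule 7
    return True
```

Even this toy version suggests test ideas: the 6 AM and 10 PM boundaries, an item that is food /and/ on sale, a buyer who turns 21 today. The carding thresholds (appear under thirty-five, under thirty) are about cashier judgment and don't reduce to code at all - which is rather the point of the challenge.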

Challenge: How would you test this? Defend your answer.

UPDATE: Yes, I welcome clarifying questions, and NO, you don't need to test the website. I've linked to that solely as background information. The test is theoretical and set in 1982.

Wednesday, November 26, 2008

Life is production issues

I do want to explore quality, and uncertainty, and ROI.

For that matter, here's a story I'd like to explore: On November 7th, we ate at Pizza Hut, and I had a problem with the boneless wings, in that they did not have any sauce on them.

That's right: Regular Buffalo wings have sauce; Boneless Buffalo Wings do not. They are like Chicken Nuggets - they have a breading. When I asked for sauce, the waitress said that this was not a "Wing Street" Pizza Hut, and they did not have sauce on the boneless wings.

That's right: according to this waitress, if you order boneless wings, you get different things depending on which type of Pizza Hut you are in.

My offense was that these were "buffalo wings", and what made them such was buffalo sauce ... which these nuggets were not actually cooked in ... you get the point.

Yesterday, I tried to complain through their website and found that the complaint form was broken. To the tune of: when you click to get the complaint form, you get "error in / application."

I tried again today, got the form, filled it in in exhaustive detail, and when I clicked submit ... nothing happened. It's possible something was transferred back to the server, but my form stayed the same. No email that they got the message, nothing.

Looking around the website for a "contact us" brings me back to ... a form to fill out and send. No email address. No phone number.

Either these people are deliberately not interested in negative feedback, or they are in desperate need of a software tester. Probably both.

I remain unsatisfied. I may blog more on this after the American Holiday.


In other news:

There has been some interesting discussion lately on the agile-testing list, about whether "good enough" software is something to strive for, or whether we should strive for perfection. I won't rant on that right now. (You can Google "Impossibility of Complete Testing" if you'd like, or just go buy Jerry Weinberg's new book Perfect Software and Other Illusions.)

Still, I will leave you with one little rule I use for software testing:

Heusser's Postulate: The better tester you are, the more test ideas you'll have, but you get no more time. So testing quickly becomes an exercise in risk management.

Thursday, November 20, 2008

What do you want to read next?

Folks, I've got a ton of ideas on where to take Creative Chaos. I've been spending more and more time lately involved in discussions of Kanban - an evolving method that is slowly competing with Scrum.

I could write about the aspects of software quality that are hard to define requirements for - scalability, performance, reliability, usability - and what to do about them.

I could put more meat on the bones of "when should a test be automated?" (or "run unattended?")

I could talk about the business of software development, negotiating for nerds, or the dynamics of maintenance and legacy software.

Or I could just post more stories from my youth as a military cadet.

I'm probably most interested in exploring the idea of uncertainty. After all, we can't predict the future - yet we have these things called project plans with dates and deadlines and deliverables.

Those are just some of the ideas bouncing around in my head. Still, at this point, after three years of evolution, you - the audience of Creative Chaos - might know better than I do what to do next, because you know what appeals to you and why.

I covet your feedback, and promise to respond and take it seriously.

And, if you've ever just wanted a chance to heckle ... now's your chance.

Please consider leaving a comment for this post; it's your chance to steer the ship.

Wednesday, November 19, 2008

More insight from Joel Spolsky

Then suddenly I noticed (shock!) that not only was the author a journalist, not a scientist, but he was actually an editor at Time Magazine, which has an editorial method in which editors write stories based on notes submitted by reporters (the reporters don't write their own stories), so it's practically designed to get everything wrong, to insure that, no matter how ignorant the reporters are on an issue, they'll find someone who knows even less to write the actual story.
- From

In the article, Joel is knocking a style of journalism where you start with an interesting anecdote and use it to prove a point about something which you have no real expertise in.

But let's flip it around, and say a journalist was evaluating traditional development methods:

... which has a programming method in which programmers code stories based on notes written by designers that are based on requirements documents created by analysts that are assessments of what the customer actually wants. It's practically designed to get everything wrong, to insure that, no matter how ignorant the analysts and architects are on an issue, they'll find someone who knows even less to write the actual code ...

Yes, many shops do better than this. Yes, agile has helped. But before we throw stones, many of us might be better off tending to our own knitting ...

Tuesday, November 18, 2008

The Decline and Fall of Agile

Jim Shore recently wrote a thoughtful piece "The Decline And Fall of Agile." There is a good point-counterpoint discussion on

This isn't a big shock to me; I wrote "Has Agile Jumped the Shark" Part One and Two in December of 2006; it even got a little write up on back then.

But I thought I might add my $0.02 to today's article, and share them here:
It might be better to paraphrase Deming than to quote him:

"A good person, working in a system that incents him to bad behavior, will become a bad person eventually."

Example: Measure me by lines of code and bugs fixed and see what happens.

My experience with crappy agile is with companies that live in areas that lack a world-class CS school, that hire off the street, don't pay well, and provide a cubicle-like existence with heavyweight policies and procedures. In this environment, it is very hard to attract and retain high-quality talent. What's worse, in some bureaucratic organizations, the incentives are more to be political than to excel at programming. It's no great surprise that these organizations struggled before, so they 'Adopt Agile', or "Go Agile", or whatever, but only take the easy pieces, and often insist on still having everything planned and scheduled up front. After a few projects, the business has seen a marginal productivity improvement and the programmers say things like "My job sucks less."

As always, the work you get out of it is a function of what you put in. For the work the people are willing to put into it, that's actually a decent return.

In other words, crappy Agile is a management problem. And while issuing nice Herman Miller chairs, a decent laptop, and paying just ever-so-slightly above market rates might not solve the problem - it sure will go a long way to help.

I've worked with organizations where management made a real effort to provide collaboration, choice, and some sense of appreciation for its staff - and to work hard to make sure the organization was consistent - that it rewarded the right thing. I've worked with a few that didn't. It makes a world of difference.

JabberWocky -- II

There's more to the poem than a good URL.
First, head off and read JabberWocky.

The poem is actually a poem-within-a-story - it is a poem that Alice reads in Through the Looking-Glass.

What's really interesting is Alice's reply after reading the poem:

"It seems very pretty," she said when she had finished it, "but it's rather hard to understand!" (You see she didn't like to confess even to herself, that she couldn't make it out at all.) "Somehow it seems to fill my head with ideas--only I don't exactly know what they are! However, somebody killed something: that's clear, at any rate---"

Of course, Jabberwocky is, more or less, pure nonsense - a bunch of buzzwords and fluff with a couple of actual English sentences thrown in. Just enough real words to make you feel as if there is something to "get."

Does that sound familiar?

If we search and replace a few key words, Alice could very well be talking about requirements documents from some projects I have worked on:

"It seems very interesting," she said when she had finished it, "but it's rather hard to understand!" (You see she didn't like to confess even to herself, that she couldn't make it out at all.) "Somehow it seems to fill my head with ideas--only I don't exactly know what they are! However, customers will be able to do some new things: that's clear, at any rate---"

Many project documents take this approach - especially one where there is little agreement on what should be really done. So we paper over disagreement with big words. (Example: Can't agree to use a touchscreen, mouse or keyboard? Call it "The Input Device." Can't agree on how to run the project or what metrics to use? Simply insist that the project be "Quantitatively Managed." Can't agree on what a good organization looks like? Call it "Mature" or maybe "Agile.")

However, we have social problems with admitting that something is wrong ...

A) We are afraid that the document is good, and we're the stupid one.

B) We are afraid that we'll look dumb; that when we say "I don't understand this", the other person will pat us on the head and tell us that someday, indeed, we'll get it, when we grow up.

C) The team has obviously invested considerable time and effort in developing these documents. Saying we might as well start over can be taken as, well, an insult to the professionalism of whoever prepared this document.

I don't have any easy answer to this problem. I do know that calling out dead-fish communication in any form can help. (It doesn't have to be bad documentation, it could be hit-and-run management, for example.) I do know that after living with bad documentation once, if you call things out early, it is sometimes possible to prevent them on the next project. (It works like this: "I know project X is coming. I know Alan the Analyst is working on documentation in a corner. That's just what we did on the SuperWidget project. We don't want another superWidget, do we?")

Sometimes, there's a little voice in our head telling us there is a man behind the curtain. It's better to admit it than to pretend to be talking to the great Oz.

Monday, November 17, 2008

The next (next-next) thing to being there ..

Brian Marick recently did a keynote at Agile Practices '08. While we have no video, and, sadly, no audio, he has made the entire text of his keynote available on his blog. Check it out. It's some really good stuff that has needed to be said for years.

I like this, and it continues to align with my ideas about writing in development and testing: That most people talk about the way things "should be" in some sort of abstract sense, and that the best writers talk about what actually happens in software development and what you can do about it.

Friday, November 14, 2008

A decent test URL

Say you're testing an application, like Blogger, that allows the user to type in a URL, and then should have a link show up in the edit window.

You are concerned about the URL getting cut off the page, or exceeding the input constraint, and so on, so you want to do bounds testing with a URL that is real - but also really big.
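If you'd rather generate an oversized-but-valid URL at an exact length than hunt for one, a quick sketch works too. (Python here; the `long_test_url` helper and the `example.com` host are my own stand-ins, not from any tool.)

```python
def long_test_url(length: int, host: str = "example.com") -> str:
    """Build a syntactically valid URL padded out to exactly `length` characters."""
    base = f"http://{host}/"
    if length <= len(base):
        raise ValueError("length is too short to hold even the base URL")
    # Pad the path with a harmless run of characters to hit the target size.
    return base + "a" * (length - len(base))

# Probe some classic boundaries (255/256, and the old ~2048 browser limit):
for n in (255, 256, 2048, 2049):
    assert len(long_test_url(n)) == n
```

The point of exact lengths is to probe just under and just over whatever input constraint you suspect, rather than one arbitrary "big" value.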

Here's one to bookmark:

Ironically enough, on Blogger, the click-through is intact but the line doesn't wrap - so, at least on Firefox, I can't see the entire URL.

We call this kind of test a quicktest - Elisabeth Hendrickson even has a cheat sheet of quick tests to consider for input.

What are /your/ favorite quicktests?

Wednesday, November 12, 2008

And now for something completely different ...

Years ago (decades ago?) I was a cadet, cadet officer, and later adult officer in the Civil Air Patrol, the US Air Force Auxiliary. While my responsibilities have pushed out the CAP, I still belong to an affiliated site,, where I used to write a column titled "Leading The Way."

After a brief discussion on the forum, I'm considering publishing a new article on leadership. Just for grins, I thought I would post it here. It's not your usual Creative Chaos Fare, but I thought you might enjoy it. If you'd like to see more, or less, of this type of material, please let me know through comments!

Picture it, Fort George G. Meade - June, 1995. Maryland Summer Encampment Staff Selection and Training. The "top three" command staff was already selected. The rest of the potential staff - 20 or more cadets ranked cadet major to staff sergeant - were vying for the great prize of leading basic cadets in training.

It was a great, glorious day, which I remember like yesterday. I pulled up in the late 1980's-era blue station wagon on Friday afternoon, sauntered over to the sign-in table, said hello to the cadet deputy commander and XO, and signed in.

Heusser was here and his goal was clear - to command a squadron. Ah, squadron command. So glorious that to this day I still wax poetic about it. The opportunity to do real indirect leadership but still have someone concrete to compare yourself to - the other guy, the other squadron commander, the enemy.

Ok, maybe "the enemy" was a bit much, but you get the point.

So I signed in. Friday night we didn't do much; people would trickle in all night and the official training schedule started Saturday. We mostly sat around, told "war stories", re-introduced ourselves, and studied memory work. Most of the troops had just graduated in 1994 and didn't know what to expect; I'd skipped 1994, commanded a flight in '93, and was one of the old hands - one of two cadet majors applying to be on staff.

Saturday morning we woke up, put on our battle dress, and waited. Not too long later (0700? 0630? I'm not sure), the command staff walked in.

Something was wrong. They weren't looking at us like staff candidates - we were cadet basics all over again - or something. I couldn't put my finger on it.

"Why aren't you formed up?" asked the cadet commander, with an edge in her voice. Oh, it wasn't mean - I don't mean to imply that we were "hazed" or anything silly like that. She was upset.

Thinking to myself, I wondered why she thought we should be formed up, and where. I kept this to myself - as they say, it's better to be thought a fool than open your mouth and remove all doubt. Besides, I was sure she would tell us.

Sure enough, she did. The training schedule, with times and locations, was listed on the sign-in table. We hadn't noticed. We were starting this whole thing off on the wrong foot.

Now time can do some interesting things. In some ways, we forget, but in others, things that were confusing can become more clear as we have time to reflect.

Now, if Cadet Staff Sergeant Snuffy knew about the schedule and told me, I'd have been remiss, but I'd also have marched the troops to the assembly area. The reality was, no one - not one - of the 20+ candidates had noticed the training schedule.

It wasn't solely a problem with our attention to detail.

It was /also/ a problem of availability of information and communications. As much as it was a problem with the troops, it was also a command problem.

Of course, I didn't say anything at the time. I knew my goal - to command a squadron - and that bringing up a complaint about the behavior of senior leadership was not the best way to get that command.

In hindsight, I wish I had said something privately, but in the grand scheme of things, I had more than a few behavior problems myself at that age.

However, that Saturday was memorable to me for more than one reason. Because that afternoon we were having a discussion about how to run encampment. And, yes, the "tear 'em down, build 'em back up" school of thought was dominant, along with the "fail them on the first few inspections to make a point" school, and yes, the idea of fear and intimidation as the way to change behavior.

And, cliche as it was, I stood up against the crowd and said something very much like this: "I think there are other ways to lead cadets. After all, when we command cadets, shouldn't our goal be to inspire cheerful and willing obedience to orders?"

That one sentence changed the tone of the entire conversation.

Now I'm not perfect, and I don't claim to have done everything right that weekend, and at the encampment that followed, where yes, I did command a squadron.

And when I reflect on all the mistakes I have made over the years, in my mind's eye I still see that weekend as bright and shining. Because on two occasions, I saw that leadership had the opportunity to blame and abuse others, or to inspire them - and I saw and preferred the better.

When you lead cadets at meetings, and activities, and at summer encampment - what is your goal?

What are you inspiring today?

And what do you want to inspire tomorrow?

Thursday, November 06, 2008

The Rise of the Privileged Worker

Economic Times are tough indeed. Security is hard to find. Yet there are some people who seem to always succeed despite tough times - and no, I am not talking about slick Sam the salesman.

Now, Sam might have questionable ethics. We may not want to be like him, we may not like his ethics or technique - but what he does, on some level, is important. Sam solves a unique business problem: Get the cash flowing in.

I submit to you that if we can find ways to solve business problems (or create opportunities), we can avoid the attitude of "I've got fifty guys lined up around the block, you'll take what I offer and be happy" for businesses, even in tough times.

In fact, just this morning, someone emailed me an article about the privileged worker. I thought it was a good read.

Thursday, October 30, 2008

When should a test run unattended? - III

First off, I've revised the title of the series. I'm all for automating work that can be described and /precisely/ evaluated.

For example, let's say you have a PowerOf function. To test it, you could write a harness that takes input from the keyboard and prints the results, or you could write something like this:

is(PowerOf(2,1), 2, "Two to the first is two");
is(PowerOf(2,2), 4, "Two to the second is four");
is(PowerOf(3,3), 27, "Three to the third is twenty-seven");
is(PowerOf(2,-1), undef, "PowerOf doesn't handle negative exponents yet");
is(PowerOf(2,2.5), undef, "PowerOf doesn't handle fractional exponents yet");

And so on.

When you add fractional or negative exponents, you can add new tests and re-run all the old tests, in order.

That is to say, this test can now run unattended, and it will be very similar to what you would do manually. Not completely - because if the PowerOf function takes 30 seconds to calculate the answer, which is unacceptable, it will still eventually "Green Bar" - but hopefully, when you run it by hand, you notice this problem. (And if you are concerned about speed, you could wrap the tests in timer-based tests.)
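That timer idea can be sketched as a plain script. This is Python rather than the Test::More style above, and the `power_of` stand-in and the one-second budget are illustrative assumptions, not the real function:

```python
import time

def power_of(base, exponent):
    """Stand-in for the PowerOf function under test (hypothetical)."""
    if exponent < 0 or exponent != int(exponent):
        return None  # negative and fractional exponents not handled yet
    return base ** exponent

def check(got, expected, label):
    """Tiny analogue of Test::More's is(): report pass/fail with a label."""
    status = "ok" if got == expected else "FAIL"
    print(f"{status} - {label}")

# The same checks as above, re-runnable in order after every change:
check(power_of(2, 1), 2, "two to the first is two")
check(power_of(2, 2), 4, "two to the second is four")
check(power_of(3, 3), 27, "three to the third is twenty-seven")
check(power_of(2, -1), None, "negative exponents not handled yet")

# The timer wrapper: fail the run if the answer is correct but
# unacceptably slow, instead of quietly green-barring.
start = time.monotonic()
power_of(3, 3)
assert time.monotonic() - start < 1.0, "power_of exceeded its time budget"
```

The budget is arbitrary; the point is only that a wall-clock assertion turns "correct but slow" from something a human must notice into something the unattended run can catch.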

Enter The GUI

As soon as we start talking about interactive screens, the number of things the human eye evaluates goes up. Wayyy up. Which brings us back to the keyword or screen capture problem - either the software will only look for problems I specify, or it will look for everything.

Let's talk about a real bug in the field

The environment: Software as a service web-based application that supports IE6, IE7, Firefox 2, Firefox 3, and Safari. To find examples, I searched bugzilla for "IE6 transparent", where we've had a few recently. (I do not mean to pick on IE; I could have searched for Safari or FF and got a similar list.) That does bring up an interesting problem: Most of the bugs below looked just fine in other browsers.

Here are some snippets from actual bug reports.

1) To reproduce, just go to IE6 and resize your browser window to take up about half your screen. Then log into dashboard, and see "(redacted element name)" appear too low and extra whitespace in some widget frames.

2) Page includes shows missing image in place of "Edit" button in IE6 and IE7

3) In IE6 only, upload light box shows up partly hidden when browser is not maximized.

4) In IE6 and IE7, comment's editor has long vertical and horizontal scroll bar.

5) In IE6 at editor UI, there is a thick blue spaces between the buttons and rest of the editor tools

6) To reproduce, in IE6, create some (redacted), then check out the left-most tab of (redacted 2). The icons for type of even are not the same background color as the widget itself. (see attachment)

All of these bugs were caught by actual testers prior to ship. I do not think it is reasonable to expect these tests to be automated unless you were doing record/playback testing. Now, if you were doing record/playback testing, you'd have to run the tests manually first, in every browser combination, and they'd fail, so you'd have to run them again and again until the entire sub-section of the application passed. Then you'd have a very brittle test that worked under one browser and one operating system.

That leaves writing the test after the fact, and, again, you'll get no help from keyword-driven frameworks like Selenium - "Whitespace is less than half an inch between elements X and Y" simply isn't built into the tool, and the effort to add it would be prohibitive. If you wanted to write automated tests after the bugs were found, you'd have to use a traditional record/playback tool and now have two sets of tests.

That brings up a third option - slideshow tests that are watched by a human being, or that record periodic screen captures that a human can compare, side-by-side, with yesterday's run. We do this every iteration at Socialtext to good effect, but those tests aren't run /unattended/. Thus I change the name of this series.
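A minimal sketch of the machine-assisted half of that idea (Python; the directory layout and `.png` naming are assumptions, not how any particular shop does it): fingerprint yesterday's and today's captures, skip the byte-identical ones, and queue the rest for a human to compare side by side.

```python
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    """Byte-level fingerprint of one screen capture."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def triage(yesterday: Path, today: Path) -> list[str]:
    """Return the capture names a human should eyeball side by side.

    Byte-identical captures are skipped; anything that changed, appeared,
    or disappeared goes on the review list. The judgment call - "is this
    difference a bug or an intentional change?" - stays with the human.
    """
    old = {p.name: digest(p) for p in yesterday.glob("*.png")}
    new = {p.name: digest(p) for p in today.glob("*.png")}
    return sorted(name for name in old.keys() | new.keys()
                  if old.get(name) != new.get(name))
```

This only narrows the stack a human reviews; it does not make the tests unattended, which is exactly the point of the renamed series.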

I should also add that problems like "too much whitespace" or "a button is missing but there is a big red X you can push" are fundamentally different from a crash or timeout. So if you have a big application to test, it might be a perfectly reasonable strategy to have hundreds or thousands of keyword-driven tests that make sure the basic happy path of the application returns correct results (of the results you can think of when you write the tests).

To Recap

We have discussed unit and developer-facing test automation along with three different GUI-test driving strategies. We found that the GUI-driving, unattended strategies are really only good for regression - making sure what worked yesterday still works today. I've covered some pros and cons for each, and found a half-dozen real bugs from the field that we wouldn't reasonably expect these tests to cover.

This brings up a question: What percentage of bugs are in this category, and how bad are they, and how often do we have regressions, anyway?

More to come.

Monday, October 27, 2008

Programming Parables

There are certain stories that should simply be a part of every technologist's background - they explain a kind of thinking about the world. Most of them, like the story of Mel or Winston Royce's "Waterfall" paper, are very old and pre-date the internet.

Most of them are collected in "Wicked Problems, Righteous Solutions", a wonderful primer on system effects in software development and arguably a major pre-cursor to "agile" literature.

One little piece I found absolutely wonderful in Wicked Solutions is "The Parable of Two Programmers."

And, today, thanks to a guy named Mark Pearce, I found it on the intarwebs.

Here it is.


Thursday, October 23, 2008

Sometimes, words aren't enough

(Sidebar: More coming in the test automation series. Really. Just not today.)

Some people learn through explanation. Some have emotional reactions and enjoy anecdotes. Some like statistics, and others go for "the boss said so" or appeal to authority. A good journalist knows this and weaves statistics, anecdotes, logic, and interesting quotes from influential people into an article.

But some people like to learn through experience. Indeed, in many activities (skiing, golf, and writing come to mind), actually doing the work and active observation will get you much farther, faster, than reading a book.

This raises the question: if you do any training of anyone (even the guy in the next cubicle), how do you reach folks who learn through experience?

Believe it or not, there's a website,, that lists a series of games devoted to simulating development and understanding the dynamics of software projects.

Go check it out.

What are your favorite testing simulation games, and do you think we should start a wiki? :-)

Wednesday, October 22, 2008

On Being An "Old Dude"

Some thirty-year-old just put a post up on the Joel on Software forum: Should I get out of tech while I'm reasonably young?

Now, our youth-obsessed North American culture bugs me more than a little, and I took the time to reply. I'd like to share a bit of that reply here:

When Steve Yegge is on Stack Overflow, complaining that he is old because he saw the original "Transformers" cartoon movie in the theater, we've got a problem as an industry. The fact is, we've got a job where you sit at a desk and your experience grows with age. People continue to be Olympic athletes, and competitive at it, well into their forties. "It's a young man's game" is something people should be saying at seventy-two, not thirty-two.

All this reminds me of the Muppets. Yes, the Muppets. My family is currently watching the first season of The Muppet Show via Netflix. Each show has all the Muppets plus one guest. The guest is actually /established/ in the entertainment field.

Of course, back then, you couldn't really get started until your early 20's, so all of the entertainers are in their 30's at least, with the occasional Bob Hope, who was in his 50's. Not an "over the hill" joke in the bunch; these folks were finally 'making it' when they hit The Muppet Show, just beginning to get to the top of the ladder at 35.

Because it took them ten years to have done anything of substance and be recognized for it.

It's taken me something like that long to be recognized in the field. (Yes, last month "Creative Chaos" made the top 100 blogs for dev managers.)

Those first ten years are the beginning of the story, not the end. And the reality is that I'm not an old dude; it's only a bizarre culture change that said so. It is also a recent culture change - the muppets certainly didn't feel that way in the 1970's.

Now compare that to the Britney Spears / Justin Timberlake / Christina Aguilera / 20-is-over-the-hill culture we have today.

The problem with the 15-year-old teen idol is, well, they haven't really done anything yet. And that's the problem with the 15-year-old coding genius. Sure, one in a million is Shawn Fanning.

The rest ... aren't.

The Mickey Mouse Club is getting old too; gosh, they must be 24 now. Hmmm ... Taylor Swift?

I think you get the point.

Monday, October 20, 2008

Asperger's Syndrome

A friend of mine recently sent me this article, with a subject line something like "An Aspie comes out of the closet."

The reason? I'm one of a small community of software testers who are diagnosed (or self-diagnosed) with Asperger's Syndrome, an autistic-spectrum disorder.

The way I explain Asperger's is this: My brain chemistry is a little different than most people's. Growing up, I had problems dealing with people: They lied. They said things they did not mean; what kind of shoes you wore mattered more than any of your ideas. Far from a meritocracy, it really mattered in school how far you could kick a ball, how good you looked, and how quickly you could respond to a put-down. In short, I was a nerd.

As such, I turned to computers for escape. Computers made sense. If the computer did something wrong, it was because I screwed up the programming.

I earned a degree in Mathematics, which is the most objective field I know of - answers are right or wrong. Period. Even if you don't bathe for a week and have no social skills, if you're smart, you can do well in Math. (1)

Eventually, later in life, I realized that, well, people matter more than things. Having a room full of toys and no one to share them with is no fun. To be successful in any relationship, including the workplace, you need to understand people. So I got into people, psychology, and relationships. I forced myself to learn.

I used to feel bad about this -- right up until I read the very same story in a book by Jerry Weinberg called Quality Software Management. And I do mean the very same story.

It's a true story for both of us -- and, I suspect, for my friend who sent me the link that started all this. This means I have problems reading people, understanding social cues, and responding quickly with words - in the moment (such as off-the-cuff jokes). Ironically, that's part of why I got into public speaking -- in public speaking, you pick your words in advance, and you can practice them over and over. Ditto for writing.

So I wasn't surprised in 2001 when I read a description of Asperger's in Wired Magazine and said "that's me." Yes, there is more to the diagnosis than that, but I'd prefer to keep that part private.

The classic description of an Aspie is a "little professor" - someone who is seriously, seriously involved in a particular subject area and (sometimes) has problems relating outside of that subject area. As a young person, I craved a structured environment that made sense; one of the reasons I loved playing cadet was that I knew who to salute and how to march and how to wear a uniform - the rules were explicit and written.

And those who knew me as a cadet also knew that I ... needed a little help socially.

In the 1940's, someone with Asperger's might collect stamps, or coins, or have a model train collection, or maybe know every single baseball stat for a particular team. Today, Aspies are more likely to write code, test software, play with CSS style sheets, or design aircraft.

The end result of all this is that I'm not typical. Duh. No one is really average in every way. If I had my choice, at the beginning of my life, between being an empty suit with great social skills and being someone able to generalize, abstract, create, and do wonderful things ... I don't think it'd be a tough choice.

For centuries, it has been OK for artists to be a little bit weird - in fact, I remember one graphic designer who used to wear flip-flops to work (that he promptly took off), combined with some sort of odd faux-prisoner outfit.

That little spark of oddity about the creative person on a bad day is the same thing we credit as the spark of genius on a good one. News Flash: Techie folks can be creative too.

What I'm trying to say here is - I may have Asperger's Syndrome, or it may be what they called "Dysgraphia" when I was in grade school - or it may be something else.

I don't believe it's the kind of thing to be hidden, but I've never made a post on it.

And when I got that email, well, the time seemed right.


--Matt Heusser
(1) Yes, I bathed. Gosh, it's an expression!

Cloud Computing is the new XML

In April of 2000 I took a development course and got a free copy of XML Magazine. I didn't get what the actual value of the technology was. A few months later, I realized that there was nothing to get - if XML had value, the guys hyping the magazine didn't know it yet.

And if you don't get the same feeling listening to the gurus talk about cloud computing, I suspect you haven't been listening closely enough.

But first, the Good News
Just like XML, cloud computing does have some places where it can add value. I argue that, in the next few years, we will see some generally useful applications and niches for cloud computing. Eventually, over time, some organizations will be able to 'give up' their data centers and turn on web servers like we turn on electricity or tap water - but anyone familiar with virtual web hosting already has a deal like that. Eventually, it might be possible to place our servers and databases in the grid and turn up the number of servers when demand spikes. Still, Tim Chou wrote "The End Of Software" four years ago, and the sad fact is that that capability is still years out. What do I believe about cloud computing today?

The Bad News
- Cloud Computing will have limited applications compared to the over-hype it has now,
- It will take years to realize those applications,
- Exactly what you should be doing with cloud computing will probably be very different from what today's 'visionaries' are telling you,
- The 'visionaries' who are currently hyping cloud computing will probably shake out of the market before the true value and applicability of cloud computing is realized.

It would be nice to be proven wrong. For the time being, I believe the smart money is against the hype machine.

Cloud Computing is the new XML.

Remember: You heard it here first.

Friday, October 17, 2008

Blank Sheet of Paper Syndrome

(No, I haven't forgotten about Test Automation. I'm just trying to leverage my time in the best possible way. Most of this post came out of a recent discussion on the SW-IMPROVE discussion list ...)

One thing that I found helpful when gathering requirements is something I call "blank sheet of paper syndrome." That is to say, if you give a (brand new) analyst a template, they will often react with something like this:

"oh, easy, I can do this. Project Name, Project Manager, Executive Sponsor, Initiated Date, Today's Date, Desired Date ... I can fill this out." (... 30 minutes pass ...) "Here's your requirements doc."

On the other hand, if you give them a blank sheet of paper, you are more likely to have a reaction like this:

"But ... well ... this is blank! I have no idea what to write! I don't really know what the customer desires! I had better go find out!"

In my experience, this second reaction is much more likely to result in figuring out the /essential/ requirements of a software system, instead of gathering requirements that "all look like each other."

The argument /for/ requirements templates is that without the template you would forget something. My personal conclusion is that, for the vast majority of projects I have worked on, I will gladly run the risk of forgetting something on the template, in trade for the benefit of, hopefully, capturing something more essential on a personally-written document.

Now, I would be remiss if I did not add that the Planning Game in Extreme Programming - where you have no person in the middle and you negotiate the requirements directly - is one valid implementation of the blank-sheet-of-paper concept. Your cards could even have a slot for title, points, priority, and description, and I wouldn't be offended. :-)

In other words, Jerry Weinberg's Rule of Three - that you should strive for at least three options for every problem - still applies. "No Template/Blank Sheet" and "All Template/No Customization" form a false dichotomy: you can always do more or less, allow some customization, have a half-dozen questions instead of a three-page template, and so on.

All that I am saying here is that in the push for stable/predictable/repeatable, some organizations try to make all projects look the same. Sometimes, for the project team, that can do more harm than good.

Thursday, October 16, 2008

So what's a Privateer Scholar, anyway?

A few months back, I changed my LinkedIn title from dev/tester (or whatever it was) to "Privateer Scholar."

The title is derived from James Bach, who took it from Buckminster Fuller's Operating Manual for Spaceship Earth.

Here's the basic idea in Bach's words:

* A buccaneer-scholar is anyone whose love of learning is not muzzled, yoked or shackled by any institution or authority; whose mind is driven to wander and find its own voice and place in the world.

* This way of being has sometimes been called autodidact, individualist, anarchist, non-conformist, contrarian, bohemian, skeptic, hacker, hippie, slacker, seeker, philosoph, or free thinker. None of those fit for me.

Now, Bach refers to himself as a buccaneer because he is an independent contractor/consultant - he's really on his own.

I, on the other hand, like to try to work within organizations. I work within a specific authority (Socialtext, or Calvin College, or maybe BZMedia) against its enemies (the competition - and at Calvin, the competition is called "Ignorance"). In piratical terms, for the time being, I seek a letter of marque.

For more detail, see Bach's presentation on the subject.

Monday, October 13, 2008

When should a test be automated - II

Before we can dive in, let's take a step back.

When people talk about automation, they typically mean replacing a job done by a human with a machine - think of an assembly line, where the entire job is to turn a wrench, and a robot that can do that work.

At the unit level, where you are passing variables around, this makes a lot of sense. You don't need a human to run the test that evaluates a Fahrenheit-to-Celsius conversion function - unless you are worried about performance, and even then you can just put in some timers and maybe a loop.
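To make that concrete, here is a minimal sketch of what such a machine-checkable unit test might look like in Python. The function name and the test values are illustrative, not from any particular codebase:

```python
# A hypothetical conversion function and a machine-checkable test for it.
# No human judgment needed: the reference points are known exactly.

def fahrenheit_to_celsius(f):
    return (f - 32) * 5.0 / 9.0

def test_fahrenheit_to_celsius():
    assert fahrenheit_to_celsius(32) == 0.0        # freezing point of water
    assert fahrenheit_to_celsius(212) == 100.0     # boiling point of water
    assert abs(fahrenheit_to_celsius(98.6) - 37.0) < 0.01  # body temperature

test_fahrenheit_to_celsius()
print("conversion checks passed")
```

A computer can run that a thousand times a day; there is nothing for a human eye to add.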

But at the visual level, you've got a problem. Automated test execution comes in two popular flavors: record/playback and keyword-driven.

Record/playback does exactly what you tell it to (down to the mouseover level) and does a screen or window capture at the end, comparing that to your pre-defined "correct" image. First off, that means the software has to work in the first place in order to define that image, so the only things you can record are the ones that aren't buggy to start with. More importantly, if you change the icon set, or if your image contains the date of the transaction, or if you resize the window - or the screen - you could have a test that should pass but that the computer reports as a failure.

To fix that, we created keyword-driven frameworks, where you drive the GUI by the unique IDs of the components. A keyword-driven test might look like this:

click_ok, field_last_name
type_ok, field_last_name, heusser
click_ok, field_first_name
type_ok, field_first_name, matthew
click_ok, submit_button
wait_for_element_present_ok, //body, hello matthew heusser, 30000

Keyword-driven tests only look at the elements you tell them to. So, if the text appears on the screen, but the font is the wrong size -- you don't know. If the icons are wrong, you don't know. In fact, the code only checks the exact things you tell it to.
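Under the hood, the whole keyword idea boils down to a dispatch of named actions against named elements. Here is a toy sketch in Python - the keyword names mirror the script above, but the "page" model is made up for illustration and is not any real tool's API:

```python
# A toy keyword-driven runner, assuming a fake "page" of named elements.
# The page model is invented for illustration, not a real framework.

page = {"field_last_name": "", "field_first_name": "", "body": ""}

def click_ok(element):
    # Only checks that the named element exists -- nothing else.
    assert element in page, "no such element: " + element

def type_ok(element, text):
    click_ok(element)
    page[element] = text

def click_submit():
    # Stand-in for the server round-trip after clicking submit_button.
    page["body"] = "hello %s %s" % (page["field_first_name"],
                                    page["field_last_name"])

def wait_for_element_present_ok(element, expected):
    # Only inspects the one element it is told to -- fonts, icons,
    # and layout are invisible to it.
    assert expected in page[element], "text not found in " + element

# The script: each step names exactly one element and one check.
type_ok("field_last_name", "heusser")
type_ok("field_first_name", "matthew")
click_submit()
wait_for_element_present_ok("body", "hello matthew heusser")
print("keyword-driven test passed")
```

Notice what the runner never touches: anything you didn't name. That is exactly the blind spot described above.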

At the end of every manual test case is a hidden assertion - 'and nothing else odd happened.'

Keyword-driven tests fail to check that assertion. Record/playback tests try, but lack the discernment to know whether a change is good or bad.

But that might be just fine. Keyword-driven might be good enough for some applications; in others, we expect the image never to change. We can use automated tests as part of a balanced breakfast, eliminating some brain-dead work so we have more time for critical-thinking work.

The question is what, when, and how much.

Stay tuned.

Friday, October 10, 2008

When should a test be automated - I

I stand behind my last post on the Holy Grail, but it was often misinterpreted as "no test automation."

Now, certainly, that's silly. At Socialtext, we use all kinds of tools to assist in our testing - some of them traditional run-capture-compare tools, others for setup, bug tracking, grep, analysis, summary ... the list goes on.

That leads to the question "When should a test be automated?" Of course, the question is a little silly as asked - having a computer look at one field and having a human look at an entire window are two different things - but I believe it might be more helpful if I actually explored the area and provided some tangible advice.

To start with, let's take a look at Brian Marick's article:

"When Should a Test Be Automated?", written in 1998.

I will use that as a jumping-off point. So take a good look, and tell me what you think.

More to come.

Wednesday, October 08, 2008

Complete test automation is the Holy Grail?

Recently, Phil Kirkham mentioned a comment he'd heard that he was puzzling over:

"Of course, complete test automation is the Holy Grail of software development."

The speaker was talking about interactive (user-driven) systems, and probably meant that automated test execution (do-this-compare-to-that), offering complete confidence at the push of a button, was the thing people searched for - the thing with the magical ability to solve all your problems.

As I am a bit of an Arthurian legend buff, this intrigued me.

But what is the Holy Grail, really? Wikipedia says it is the cup (or maybe plate) used at the Last Supper - the one dipped in by both Christ and Judas. Or maybe it was a cup that held Jesus's blood. In any case, it's magical, and can heal people. Maybe. We think. Sorta.

As a Catholic, I do believe in the possibility of relics and sacred tradition, so I looked it up in the Catholic Encyclopedia. You can read the entire article there, but just check out the summary:

A word as to the attitude of the Church towards the legend. It would seem that a legend so distinctively Christian would find favour with the Church. Yet this was not the case. Excepting Helinandus, clerical writers do not mention the Grail, and the Church ignored the legend completely. After all, the legend contained the elements of which the Church could not approve. Its sources are in apocryphal, not in canonical, scripture, and the claims of sanctity made for the Grail were refuted by their very extravagance. Moreover, the legend claimed for the Church in Britain an origin well nigh as illustrious as that of the Church of Rome, and independent of Rome. It was thus calculated to encourage and to foster any separatist tendencies that might exist in Britain. As we have seen, the whole tradition concerning the Grail is of late origin and on many points at variance with historical truth.

Let me put that in testing terms:

"As we have seen, the whole tradition concerning the test automation legend is of late origin and on many points at variance with historical truth."

In other words, the Holy Grail is the stuff of fairy tales, said to have mystical powers but never actually seen by anyone. King Arthur's legend was an interesting story from my youth, a fun little adventure to pretend as a child - but when I became a man, I put away childish things.

Perhaps you could say that test automation is the Holy Grail of software development, after all.

Monday, October 06, 2008

New Announcements up

(See below the "Creative Chaos" banner and description. You know, the text that's always the same that your brain filters out? It's different this time. Really.)

Working at an innovative software product company is simply amazing, and teaching information systems at night is one of the great honors of my life. At the same time, I've been able to teach religious education on Sunday, keep the monthly column going in Software Test & Performance Magazine, and even do a little speaking and, I had hoped, get some work done on a book.

And I'll be coaching soccer in the spring. The problem is: How do I find the time to do a /good/ job as a coach? Because to do all this, I've been shorting the people I claim are the most important in my life: My Family.

Something's gotta give.

I have to learn to say "No." Now, I love conference invites. I'm honored, and I work really hard to make things work. But please, don't invite me to speak on a different continent with two months notice. That just isn't going to work. Odds are, if you lead a professional group, I'd love to speak in front of you, but please, let's talk June 2009, ok?

Of course, the first thing to go will be the blog. I'll be around, but not as much as I might hope. Expounding on how 'agile' is an /attitude/, or writing that treatise on test estimation that really hits the nail on the head -- that might have to wait.

Do you ever have to make tradeoffs between the good and the best? If you do, please tell me about it. I'd love to hear.

Warm Regards,

--matt heusser

Tuesday, September 30, 2008

Where in the world is Matthew Robert Heusser

For those who don't know, the engineering team at Socialtext has been very busy putting the finishing touches on our next-generation product.

And there's video.

You can sign up for a free 14-day trial account on our website here:

If you know every tiny detail about your second cousin Joe's trip to Disneyland thanks to Facebook or MySpace - but can't get critical information about how things are done by the guy two cubicles over - maybe you should bring those benefits to your business with Socialtext.

More posts to come, but, seriously, I've got a product to get out the door.

Tuesday, September 23, 2008

This isn't the agile you're looking for - Part II

Unfortunately the industry has latched on to the word "Agile" and has begun to use it as a prefix that means "good". This is very unfortunate, and discerning software professionals should be very wary of any new concept that bears the "agile" prefix. The concept has been taken meta, but there is no experimental evidence that demonstrates that "agile", by itself, is good.

The danger is clear. The word "agile" will become meaningless. It will be hijacked by loads of marketers and consultants to mean whatever they want it to mean. It will be used in company names, and product names, and project names, and any other name in order to lend credibility to that name. In the end it will mean as much as Structured, Modular, or Object.

The Agile Alliance worked very hard to create a manifesto that would have meaning. If you want to know their definition of the word "agile", then read the manifesto. Read about the Agile Alliance. And use a very skeptical eye when you see the word "agile" used.

- Bob Martin, circa 2003

Bob Martin's advice from 2003 seems just as applicable today. Which, perhaps, explains my reluctance and concern about companies "going agile", "doing agile", "adopting agile", and so on.

As I re-read the Agile Manifesto, it seems to me that it's largely about what attitudes are more effective in developing software. Not process - attitude.

So let's play a game and go back in time, close to a decade ago. I'm sitting in a meeting, hearing about how we are going to develop the last great, grandiose development framework - one so extensible that, after completing the framework, developing the code will be easy. And I'm not buying it.

"Have you guys ever noticed that every time we do this, our crystal ball is wrong? We have unlimited extensibility left and right but the customer always - always - wants us to extend up and down. Let's work on malleable software, software we can change if the requirements do - and deliver something right now. This week. This month. Then, if we get asked for something similar a second time - then and only then - do we generalize."

Of course, I "didn't get it", had more to learn, let's pat Matt on the head and move on. (Don't worry. It turned out ok in the end. Really.)

That's agile. I don't think you "do" that. I believe it's something you are.

So the heart of changing to agile is not process (yes, breaking functional teams into project teams can be hard, that's a sidebar) - but a change in understanding.

It's not as easy as hiring a scrum master - because the idea of hiring a guru who will tell us all how to do it is still command and control.

Still more to come.

Monday, September 22, 2008

Tester Education - II

The basic idea behind my tester education post was very simple - I would love to publish a book, and the time for it may be coming soon. At the same time, I've got a full-time day job working for a Silicon Valley company in the middle of a product push, I teach computer science and information systems part-time at Calvin College, I have a monthly column in Software Test and Performance Magazine ... and a family and real responsibilities in my community.

Oh, and I just agreed to do the Voices That Matter Conference in San Francisco, California in December. The book may still be a ways off. We'll see.

In the meantime, please check out the Voices That Matter conference and tell your friends and neighbors. It's going to be a blast ...

Friday, September 19, 2008

This isn't the agile you're looking for - Part I

In the land of software development, Steve McConnell is arguably a giant among men. Seriously - this is the guy who was a developer-contractor-project-manager at Microsoft back when they were good, who took the practices they had evolved and wrote them down, publishing Code Complete and Rapid Development. After Microsoft, he founded Construx, a software-method training consultancy that also does a little doing, and has done well.

I am happy for him.

Steve's books pre-date the concept of "agile", but they deal with things like uncertainty and scope in a way that was very rare for the early 1990s. Compared with other approaches, Steve's ideas are well thought out, well considered, sane, and have a chance of actually working.

And, about two years ago, he started to respond to agile. First it was with speeches about how agile methods could fail - then it was how his company could help.

Then, yesterday, I got a piece of print advertising (yes, they still make it) from Construx. Construx was going to teach me how to be agile.

I had hoped Construx was secure enough in itself that it didn't have to add "And Agile!" to its seminars in order to attract training dollars. (Over the past three years, I've seen a number of old-school consultants teach 'agile' seminars. Construx probably held out the longest.) I know some of these folks; they are generally full-time trainers who don't find any time to actually do contracting. How they went from traditional development techniques to teaching agile without actually doing it, I'll never know.

Or ... maybe I do. I may not have a business degree, and I'm certainly not a marketer - but I do know that Agile has become a marketing slogan. Maybe it always has been. Still, in 2002, the people pushing agile - Ron Jeffries, Kent Beck, Elisabeth Hendrickson, Ward Cunningham, Martin Fowler - were people who first walked the walk and developed a pile of projects using the techniques, and only then taught the seminars.

Recently, I've had an increasing distaste for the terms 'going agile', 'doing agile', and 'agile or not.' I think I am beginning to understand why.

More to come.

Thursday, September 18, 2008


I am a card-carrying member of the Association for Software Testing (AST). AST memberships cost $50 USD per year, and are about to go up to $85.

I also pay ~ $100 a year for my membership in the American Society for Quality (ASQ), which traditionally I have found a sponsor or employer to fund, but you never can tell what next year will bring.

Meanwhile, the Agile Alliance has lately sponsored several events I am involved in (the tech debt workshop and the Great Lakes Software Excellence Conference), and I am considering joining them, at $100 a year, as a matter of principle and good faith. They also have programs that might align well with my interests.

That would be $285 per year in memberships. That's a lot more than my family pays for the zoo and the nature center.

What associations do you belong to? (ACM, IEEE?) And how do you make the return-on-investment vs. good citizenship decision?

Friday, September 12, 2008

Tech Debt from the IntarWebs

Once again, IT people thinking we're all unique and special flowers.

Call center rearranges the cubes to fit eight more people in. They don't have power or LAN drops where they need them. They can rip out the drop ceiling and start over, or they can splice like mad and have them in by Monday. But when you want *another* eight drops, you're screwed because you'd be overloading a circuit. That's technical debt.

There's a pothole on the bridge. You know that patching it means it will leak water into the superstructure, causing rust and premature failure. But patching it takes two hours, resurfacing the bridge takes two months. That's technical debt.

The fryolator has a busted alarm. Fixing it could cost a few hundred dollars. Or you can tell the fry guy to keep an eye on it and try not to burn too many fries. That's technical debt.

I'm not agreeing with the OP that technical debt doesn't exist. Just saying there's nothing at all unique about it. Choosing to let it build up doesn't make your boss an idiot.

-- Some guy on the JoelOnSoftware Forum

Wednesday, September 10, 2008

Research Project on Tester Education

I've got my own personal ideas of what the pain points are in software testing, and my own personal recommendations - and I talk to a lot of people. Still, I'd like to vet those ideas before I put them forward to a larger audience.

So, I am conducting an informal research project on test education.

If you have 15 minutes to give to help me figure out if my ideas are relevant - and how to make them more relevant - please drop me an email -

I'd love to hear your thoughts and concerns, and I will also have a few follow-up questions. Based on those, we may have more to discuss; the whole chain could be six to eight emails.

Of course, you can always leave comments in my comments section for the world to see.

Friday, September 05, 2008

Newest issue of Software Test & Performance Magazine

The September issue of ST&P is out. It's on testing software developed on the Microsoft stack; our monthly column is an introduction to Microsoft's development and test tool terminology. As always, you can download the issue for free or get a print subscription.

What do you think?

Friday, August 29, 2008

Heusser's First Law Of Software Engineering

I got to use this term twice in the past couple of weeks, so I'm going to call it a real thing now. Here goes:

Heusser's First Law of Software Engineering
A conceptual model of a bunch of guesses multiplied and divided by each other is generally worth about the same as the web page it is printed on.

Of course, you're probably going to ask for an example.

UPDATE: I don't mean to be too critical. Asking your customers to evaluate and rank the importance of the software before you build it - to set a vision - to enable people to make tradeoffs that align with your vision - is a good thing. It's when you try to take these better-than-nothing guesses and make them feel like science - feel like proof - that we get into trouble.

I am especially leery when people drag out the summation symbol (the big capital sigma, which looks like an E) - summing f(i) from 1 to n, then dividing by n - dressed up with impressive-looking graphics.

I look at the symbol and think, "Hey, dude, why not just say 'the average of'?" - especially when the text doesn't even bother to say "this symbol is the average of the values."

When I reach that point, I begin to suspect that the authors are preying on the math-illiterate.
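Just to make the point concrete, here is the symbol-heavy formula and the plain-English version side by side, in a little Python sketch. The scoring function f is made up for illustration:

```python
# Sum of f(i) for i = 1..n, divided by n -- i.e., just the average.

def f(i):
    return i * i  # any made-up scoring function will do

n = 5
values = [f(i) for i in range(1, n + 1)]   # [1, 4, 9, 16, 25]
fancy = sum(values) / float(n)             # the summation-symbol version
plain = (1 + 4 + 9 + 16 + 25) / 5.0        # "the average of", in plain words
print(fancy == plain)  # True -- same number, 11.0
```

Same number either way; only one of them needs impressive graphics.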

Hence, Heusser's first law of software engineering.

Thursday, August 28, 2008

Bruce Lee!

Someone on the Software Testing discussion list just posted a link to quotes from Bruce Lee, the famous American-born martial artist and actor.

As you read the quotes, it's pretty clear that Lee strove toward self-mastery. Now, I am not into eastern philosophy, and I am certainly no martial artist, but I *am* into excellence - hey, I founded the conference on it - and when I read Lee's philosophy, I am struck by how similar it is to my approach to testing and development.

Here's the quote page - I hope you enjoy it.

Tuesday, August 26, 2008

Tech Debt - The IT Manager's Dilemma

Chris Sterling has a Thoughtful Blog Post about the IT Manager's (nearly inevitable) decision to take on tech debt in order to hit a project date.

Here's the comment I left on his blog in reply:

It depends. I agree, in general, that IT managers are behaving in ways that seem rational for the system they are participating in - if by rational we mean that they are figuring out the things that get them rewarded and doing those things.

That is moral relativism; you can use the same logic to excuse the American Slave-Holding Southerner in 1860.

So while I may not like the behavior, and I might not even think it's right, I admit that it's nearly impossible for an IT line manager to rise above it.

I don't look to IT managers to solve this problem. It is not the IT manager that actually /does/ the shortcut of "bad" technical debt; he simply exhorts and begs and pleads the tech staff to go faster.

It is the tech staff that makes the hacks, and thus the tech staff that needs to change behavior.

How do they do that? It's pretty simple(1). Give estimates that are /responsible/. When asked to compress, negotiate on scope, not quality. Constantly improve our craft. Periodically reflect on the work we are doing. Mentor others and seek mentors. Most importantly, never, ever make the same moral mistake as the Nazi prison guard when taking on tech debt: "I was just following orders."

I'm not saying put your company out of business because you need to take a "principled stand"; I'm saying technical folks need to take responsibility for our tech debt and not blame management(2).

Now, I don't want to eliminate all tech debt. I don't even think that is possible. But if we can reduce it by a sizable fraction - say, cut the average (bad) tech debt of a shop in half - we will significantly increase the velocity of software development, thus increasing the financial stability of our companies and our own sense of health and well-being.

The void of bad code is a pretty big hole - an empty bucket. If what we do can be a sizable splash into that bucket, well, I would be pleased.


--matt heusser

(1) - I said simple, not easy. Personally, I am interested in mental constructs designed to make the personal choice to "do the right thing" less painful and more rewarding. I shared one in the workshop: limit moral hazard in the workplace by getting the dev closer to the customer.

(2) - If you look carefully at my comments during "the weaker brother" at the tech debt workshop, that is one thing I consistently did - took personal responsibility for my tech debt choices, instead of blaming the management bull-whip. We need more of it.