Schedule and Events



March 26-29, 2012 - Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference of the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At liberty; available. Contact me by email: Matt.Heusser@gmail.com

Monday, June 29, 2009

On Business Maturity

I just posted this to a private discussion list, and thought it was worth repeating here:

Chris McMahon wrote:
>For every company whose expensive Six Sigma project yields
>them no benefit at all, there is another company with no
>recognized quality process at all that succeeds wildly.
>


Have you ever studied Michael Porter's Competitive Strategy Model?

Porter - a Harvard Business professor - wrote that industries go through a transition from wild growth and no standards to maturity and eventual decline.


Companies competing in the growth phase compete by differentiation of /product/. (Think the personal computer market in 1984.) In the middle, standardization and consolidation occur, which is happening in the personal computer market right now. At the end, toward the right, you are dealing with commodities like gasoline or electricity that have no differentiation at all. Companies living in maturity and decline compete through standardization of /process/ and economies of scale.

Once in a while a disruptive innovation comes along, which can push the entire industry to the left. Consider, for example, book sales in 1993. Borders and Barnes & Noble were mature businesses. They had defined processes and metrics - and they aimed to turn the corner bookstore into a memory. If you looked at where people spent money, not what they said, nobody cared about the service at the corner bookstore - they wanted variety, comfy chairs, and decaf frappuccino mochas.

Then came amazon.com with a disruptive business model and a disruptive model of scale - pushing the industry to the left.

That's what lots of software does - It pushes stuff to the left.

And when you are competing on the left - if you are Apple in 1984 or Linus Torvalds in 1991 or Napster in 1998 - you don't need a great software development process. You need great ideas.

That's part of what bugs me about the discussion of software maturity. The real innovation and value isn't made in the land of maturity and standards. It's made in the untamed wilderness ... hmm.

I suppose you could call that 'Creative Chaos.'



Epilogue: So that's what I wrote to the discussion list, but this is my question to Creative Chaos readers:

Is what I wrote above the case? And if so, how should that impact the way we test software?

What do you think?

Friday, June 26, 2009

Corey Haines on Metrics

Corey Haines is more of a pure developer-type who does test automation, and he is heavily involved in the "Software Craftsmanship" movement. He's made a bit of a name for himself by traveling around the country, pairing with and interviewing other practitioners who are serious about doing a good job in software work, with a development emphasis.

And he just released a video on metrics ...

Road Thoughts - Visible Metrics from Corey Haines on Vimeo.



Now, these are dev-facing metrics, culled from the codebase and automated tests themselves. You wouldn't have to enter these into a spreadsheet and email them to your boss once a week, nor would your company have to pay a few thousand dollars per person for a tool to do this for you. So it has certain inherent advantages over most test metrics.

That said, I like the general idea: That a practitioner would take a series of measures in order to personally understand and improve in his work.

This is very different from many metrics discussions, where it is assumed that management will consume the metrics for the purposes of evaluation.

The former has a good chance of working. The latter tends to introduce dysfunction, as the team will find ways to give management lots of whatever they are measured by, and this may or may not correlate to actual improvement.
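As a minimal sketch of what a self-serve, dev-facing measure might look like - assuming a conventional Python project with a tests/ directory of test_*.py files, which is my invention here, not anything from Corey's video - a practitioner could pull simple numbers straight from the codebase:

    # Count test functions and assert statements straight from the source.
    # Assumes a conventional Python layout (tests/ directory, test_*.py
    # files) - adjust the path for your own project.
    import ast
    from pathlib import Path

    def test_stats(test_dir="tests"):
        tests = asserts = 0
        for path in Path(test_dir).rglob("test_*.py"):
            tree = ast.parse(path.read_text())
            for node in ast.walk(tree):
                if isinstance(node, ast.FunctionDef) and node.name.startswith("test_"):
                    tests += 1
                elif isinstance(node, ast.Assert):
                    asserts += 1
        return tests, asserts

    if __name__ == "__main__":
        tests, asserts = test_stats()
        print(f"{tests} tests, {asserts} asserts")

Tracked over time, numbers like these can tell the practitioner whether the safety net is growing along with the code - no spreadsheet, no weekly email to the boss.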

Tuesday, June 23, 2009

So prove it!

When you are listening to a software development guru, do you ever get a strange, niggling feeling in your mind? Something like "if this guy is so awesome, why doesn't he go build something awesome instead of preaching to me?"

Now, let's be fair. A lot of the people who speak publicly about software development do have day jobs and do build working software. The vast majority of them have done some software dev or testing at some point.

But consider Eric Ries of IMVU. He's given talks and run seminars on continuous deployment. Yet when a few master testers went and actually tried to use the software, they found plenty of room for improvement.

So what about that "Matt Heusser" guy? Wouldn't you like to be able to use the software he is responsible for testing?

Well, folks, I don't push Socialtext much. It's a web-based product you can use to improve communications in your business, with everything from project plans to business processes to tracking status across timezones. I believe in the product - I took a career risk to come here - and if you really want to know about it, you'll ask.

Then, today, came the big news: Socialtext is giving away a fifty-seat license of our product. That's right, you can get a business wiki (editable web pages), blogging platform, people-tracker package, and Twitter-style secure micro-blogging for your business. You also get access to our web-based distributed spreadsheet, currently in beta. And we support Firefox 3.0x, Safari, and Internet Explorer 6 and 7 for everything but Socialcalc, which is FF 3.0 only.

Of course, we have a premium model with more support, more users, integration into a directory, hosted behind your firewall, and so on.

But if you want to see what Matt has been up to, you can check it out for free, right now, hosted on our servers over the web, so you'll have nothing to install:

Press Release

Media Coverage

Click here and give your email to get started.

The intention here is to give the software to businesses and small business units, so you'll want to use your work email and invite other people from your work. The license does not provide support, but if you have questions of the "ok, what is Socialtext and how can I use it" nature, I'm happy to answer and can talk you through it.

Outside of the day job, I do think Socialtext might be a good fit for a secure, invite-only network for expert testers. More to come ...

Sunday, June 21, 2009

Um ... what? - II

(Bear with me, it's worth it)

Recently, on the Agile-Testing List, I wrote:

I'm afraid we've gone so far afield that I can't remember the entire initial question. I believe it was about alternatives to 100% acceptance test automation?

As I said before, I wrote an answer but it sounded lectur-y. My experience was that there are lots and lots of different things that various organizations did to limit the risk of regression error prior to agile, especially over time as the codebase got big and old.

It seems to me that this "codebase getting old, regression testing getting expensive" is a common problem, and that the second law of thermodynamics comes into play. Systems tend to fall apart; the center does not hold. There are a variety of things one can do to limit risk. Pre-Agile some of your choices were:

- Very large releases, with a long, drawn-out "test/fix/retest cycle" toward the end. (Waterfall). ("How's that working for you?" is implied)
- Surgical Code Changes designed to limit possible ripple effect
- Taking on a larger amount of risk by having a smaller amount of test coverage, however you chose to measure it
- Getting really good at evaluating what the risks were, such that you could cover a more meaningful portion of the code in less time
- Rapid Software Testing and similar techniques designed to deal with change as the codebase grew
- Some automation, especially at file-system level
- Beta Programs designed to limit risk
- Eating our own dog food
- Model-Based Testing (see the work of Harry Robinson)

Today, the list of choices is longer and more palatable, including pair programming, TDD, Continuous Integration, ATDD, browser-driving tests of various flavors, "rolling back" production on failover on a SAAS webserver, slideshow-style tests, etc.

One thing we do know is that pre-agile, IBM, Borland, and Microsoft developed and evolved working software reasonably often. Historically, when you look at what those teams actually did - in terms of people over process, collaboration over documentation, etc. - it looked a lot like an 'agile' process without the modern techniques. For the most part, those techniques were not yet available to use.

Is that what you're looking for, George?


My colleague George Dinwiddie - a person I like and respect - replied:

Wow! You've got experience with teams that did /all/ of those things? Which of those approaches gave you the most confidence that old functionality had not been damaged by the new additions or bug fixes? Which of those approaches scaled the best for you as the applications got older?

To which I gave a final answer:

Of course I've used all those techniques at one time or another. Suffice it to say that it depends on your team, your risk analysis, and the constraints on the project. The answer starts to look more like a book than a post, and I've totally monopolized the list lately. (I am so sorry for that triple post!)

I'll think on it and maybe do some blog posts.


Now, take a minute and read my final reply again. Consider it, and look at it with a critical eye.

If you were an outsider to the profession, could that final answer look a bit like hand-waving? Or the previous answer where I gave the big list o' risk mitigation techniques: Couldn't that look like a list of buzzwords?

For that matter, did I refuse to answer the question at the end? Shouldn't he just press on and ask it again? And if he did press on, would I insist that he 'didn't get it'? Or maybe imply he needed to read a large collection of books before he was qualified to ask about it?

Aren't Matt Heusser's comments above a great example of the problems he listed on Friday?

Wait, Wait ... stop. Rewind. Let's start over.

I do hold that my post above was reasonable. I don't think it crossed any lines. But to someone outside the profession, it could be misconstrued. So how do you tell the difference?

This is an important question. Let's examine the problems one by one, and discuss them, using the conversation outlined above as an example.

1) Appeal to Goodness, thinly disguised

I understand that Google has a saying - "Don't be evil" - that is a kind of shorthand. For example, if Google were considering analyzing emails for keywords, then selling email addresses to spam providers by keyword, an employee might legitimately say "... but that would be evil."

That's not a label used to destroy the idea; it is a value judgement. It isn't disguised at all. And it's perfectly fine.

How can we tell these apart? Ask for the logical consequences that flow from the idea. In the example above, the speaker might say "It's not respecting the implied right to privacy."

Compare that to "... a mature organization would not behave that way."

See a difference?

2) Retreating to big words or hand-waving

I can picture myself saying "Well, we are a SAAS vendor, so we don't have a deployment problem." Now, that actually means something. I imagine many readers, familiar with my shorthand, know exactly what I mean. And some don't. How can you tell if that is shorthand or hand-waving?

Ask for examples. Just one. In that case, I would reply that SAAS is "Software As A Service." Our customers don't have to install boxes or copy CDs - they can simply rent logins to our webservers. Thus, to deploy to production, a SAAS company doesn't need to send a thousand CDs to a thousand customers; they can simply update the production servers with a rollout script.
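To push that example one notch further - and this is only a sketch, with made-up host names and a made-up remote script, not Socialtext's actual process - such a rollout script can be nearly trivial:

    # A sketch of a minimal SAAS rollout: push the new release to each
    # production web server and restart it. Host names and the remote
    # script path are hypothetical.
    import subprocess

    PRODUCTION_HOSTS = ["web1.example.com", "web2.example.com"]

    for host in PRODUCTION_HOSTS:
        # check=True stops the rollout if any host fails to update.
        subprocess.run(["ssh", host, "bin/update-and-restart"], check=True)

Being able to produce an example that small, on demand, is exactly the difference between shorthand and hand-waving.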

If the speaker doesn't give you an answer, or changes the subject, well, what could that mean?

In some cases, the speaker may simply not want to invest the time in the explanation. Ok. So ask for a link to do your own research, or, better yet - find another expert in the same field who is more helpful. Show him the transcript. Do not bias him with your assessment (that the wording is probably a bunch of nothing.) See what he comes up with.


3) Insistence that you "just don't get it"

Once again, it's possible the speaker is tired and does not want to invest the time in answering. It's also possible that the speaker realizes you have little shared agreement and would have to go back to first principles. In the example above, I was asked for best practices for risk management.

I don't believe in best practices. I belong to a community that essentially censures the term. So, after Janet Gregory wrote in to assure me I was not monopolizing, I created this reply:

Thanks Janet.

George - To speak to your question, I believe that methodology design is about trade offs. ( http://www.informit.com/articles/printerfriendly.aspx?p=434641 )

As such, I cannot recommend best practices outside of a given problem domain ( http://www.context-driven-testing.com/ )

However, if you would like to hear stories of the companies I've worked for, and what constraints we had and what trade offs we chose to make - or if you want to give me a hypothetical company and work through an exercise to figure how we'd approach the problem of testing - I would be open to both of those.


I hope you can see the difference between that and "you just don't get it."

4) Insistence that the reason you don't get it is because it is "hard"

This is similar to #3 or #2 - you simply need to find another expert in the field and ask them for an analysis.

For example, I actually do know a fair amount about CMMI and software process. And, in some conversations, my BS indicator has gone off. And I've gone and asked two to five CMMI experts (SCAMPI lead appraisers) what a specific quote means.

And I get answers that are all over the map.

This tells me that the quote isn't actually saying anything.

5) Abuse of the Socratic Method

The Socratic method can be a very helpful and valuable method of experiential teaching. And, when the person positioning himself as the "teacher" actually has less understanding than the "learner," it can break down pretty quickly. That is what I was trying to get at in my example.

So how can you tell? Well, when your answers are reasoned, sincere, correct ... and the follow-up questions begin to indicate that the other person didn't hear, wasn't listening, didn't understand, or considers them irrelevant.

It's a prickly feeling on the back of your neck. "This ain't right." On the other hand, if the person has more experience than you, the Socratic method will feel much more free-flowing - for example, the teacher may ask "how's that working for you?" and you grin sheepishly: "not so great."

If things aren't happening like that; if the speaker cannot predict your problems, and in fact does not understand them ... should he really be leading you through the Socratic method to find the solution?


6) Aggressive questioning

What if the other person genuinely wants to learn? Or what if they are asking a question that is a genuine objection to your statement? How can you tell the difference?

As I wrote in the example, aggressive questioning has a motive. It is a form of posturing. Now, I am very hesitant to assign intent - I prefer to focus on behavior. So my short answer is that it doesn't matter.

If you are being challenged, consistently, in a way that makes your blood boil, you are likely to become defensive. A defensive posture looks bad ("Here's why I should be in this meeting!") and a defensive person is likely to make a verbal mistake.

So I recommend turning the questions into a statement from the other person you can respond to. "I'm hearing concern about risk on the project. Do I have that right?"


7) Appeal to irrelevance

By the time you are pulling an external authority out ("You know, Tom DeMarco says private offices are the way to high productivity") and the other person is ignoring or insulting those external people - you've got a problem. There's probably a trust issue involved.

It is possible that you are pulling out external authorities to prop up your argument ... because your argument needs propping up. "They don't respect me, but maybe they'll respect James Bach" goes the subtle mind-trick. How do you fix that? Ask yourself "What can I do to get the respect of my peers?" (I could do a blog post on that, if you like)

8) Changing the subject

Again, this is mostly a political maneuver, designed to replace an unpalatable talking point with a more familiar one. When could changing the subject be good? When the question itself contains an assertion, such as:

"We know you lied about the bug count at the meeting last week. Why did you do that?"

In that case, the speaker can change the subject by challenging the premise. Likewise, if the question is truly irrelevant (my neighbor asking about my sex life, my daughter asking about her Christmas gifts early), I may duck the question. It is hard for me to imagine cases like that in a software development environment.

But it can happen; here's one: An executive wants to fire the person responsible for a defect, and the team's manager knows it is Joe, but says "we all share the blame" or "as the manager, I supervise the team and am responsible for the outcome. Blame me." (Or the manager refuses to "rank order" the staff because he believes the entire team worked extremely hard on the latest release.)

Ducking the question? I suppose. Also possibly Heroic.

Conclusions

Sometimes, it can be very helpful to take an aggressive stance and say "something ain't right here." Sometimes, an expert in the field may say "look, you've got to have a basic understanding of calculus to talk about Newtonian physics. Go read a calculus book."

Sometimes, you may be so clueless - but have potential - that an elder may need to shake you, a little hard, to wake you up. That can even be a good thing; it's when they ignore you as irrelevant that you're really in trouble.

My previous post, "Er, Um ... What?", was not intended to be an excuse to whine and disengage when we are challenged. Instead, it was designed as a tool to help recognize when we've reached a point where dialogue is failing - especially when you have that moment, realize that you might know more about the subject than the other party, and they resort to a "trick of rhetoric." I hope, between both of these posts, to have provided a balanced view on the subject.

But what do you think? And what did I miss?

Friday, June 19, 2009

Er, Um ... What?

//Meta: Unlike most of my blog posts, which I try hard to make concrete, detailed, and well explored before posting, this idea is not fully formed. I wanted to throw it out to the web for early feedback. I hope you find it at least a good use of your time.

In addition to working for a distributed software company, I also write a column from home for a company that is, well ... everywhere. The STPCollaborative is also distributed.

As such, I spend a considerable amount of time on written communication - articles, wikis, e-mail, blogs, and such. Plus, because I teach at night, my ability to attend conferences is limited, so I've increased my involvement in discussion lists, forums, and other on-line conferring.

And I've noticed a trend.

Some people on these boards are familiar with logical fallacies - rhetorical devices that make an argument sound strong when it does not actually hang together.

The classic example of a logical fallacy "call out" is where person A makes a point that does not hang together, and person B, catching it, responds with some fancy Latin, such as "Ad Hominem" or "Post Hoc Ergo Propter Hoc" - you've likely seen that before. At least the guy is caught.

Yet there are a number of other techniques that I find unacceptable - or, you could say, distasteful techniques someone can stoop to in order to win an argument. Here are a few:

1) Appeal to Goodness, thinly disguised
The other party uses a vague word to say "that's not good"

Examples:
"That's not Agile!"
"A high-maturity organization wouldn't have such a problem"
"I don't see how that's you could deliver a high-quality system with that approach"

Counter: "So what?" or "Why not?"

2) Retreating to big words or hand-waving
In this case, the author uses more words that are not well-defined or agreed upon in order to mollify the questioner.

Example:
"How do I define a high-maturity organization? Why, one that has a quantitatively managed process and a very low degree of process variation, of course."

This one is very hard to counter, because the author is substituting a half-dozen vague, big words for one. Chasing down that half-dozen is going to be hard.

Counter: Ask for details and examples. "Could you give me an example of a low degree of process variation, and how you measure it? It occurs to me that process is multi-dimensional - how many different aspects of the process do you measure the variance of, and what are some of them?"
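For contrast, here is what a non-hand-wavy answer could look like - a sketch that measures one concrete dimension of process variation, the spread of story cycle times, with numbers invented purely for illustration:

    # One concrete measure of "process variation": the standard deviation
    # of story cycle times (days from start to done). Data is invented.
    from statistics import mean, stdev

    cycle_times_days = [3, 5, 4, 12, 4, 6, 3, 9]

    print(f"mean cycle time: {mean(cycle_times_days):.1f} days")
    print(f"std deviation:   {stdev(cycle_times_days):.1f} days")

An answer like that can be checked, argued with, and improved. "A very low degree of process variation" cannot.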

3) Insistence that you "just don't get it"

The most obvious form of this is simply stating "you just don't get it" as a reply to a reasoned and logical question. More subtly, it can be used by insisting that there is a rich and deep body of knowledge which the reader has to absorb before being capable of engaging you with meaningful questions. (This is a fine line, because it's reasonable to expect our colleagues to read some background material before entering a discussion.)

Example:
Contributor: "Uh, I know Joe is an architect, and all, but I really don't see what value that role adds to the project."
Manager: "You just don't get it, do you?"

More subtle example:
Q1:"have to admit, I'm interested and getting less skeptical over time. At my current shop, we use a scrum-like process managed by a wiki, but I admit it's generally "push" instead of "pull." It seems to me that moving to pull is more about attitude than process, and I am interested in seeing more details of implementations."

A1: "Welcome Matt!

I think implementing pull is a lot harder than perhaps you think. While attitudes have to change, pull is not simply an attitude adjustment, it highlights real problems quickly and it's this focus on exposing problems early that requires an attitude adjustment from everyone on the team. I often refer to this as 'Kanban makes you take the pain early and often, rather than deferring it until later.'"

(Read A1 Again, really carefully. Did you catch the hand-waving?)

Q2: "Thanks for correcting me, now could you also educate me?"

A2: "In fairness, you didn't ask a question or solicit any "education." The description for the group provides references to two specific papers describing kanban case studies and also a link to a recommended reading list. Did you take the time to read the group description and follow those links?

I think it is a reasonable assumption for a moderator that new members would at least have read the description of the group before joining.

Groups like this have to self-help. There cannot nor should there be an obligation on the moderators to "educate" anyone. While members aim to be helpful and co-operative, open and collaborative, there is no obligation to "educate."

Eric points out that a lot of new members struggle to catch up. To those who feel that way, I'd ask the same questions, have you read the description of the group? did you follow the links? have you read the original papers on the Microsoft and Corbis case studies? have you read any of the other recommended reading?

It will get awfully tedious in here if we have to regurgitate the foundations every couple of weeks for the new members."

Commentary: This isn't a reasonable attempt to limit intrusions into the group; it's a kind of bullying behavior. This is a little cut/paste from a discussion list, and yes, if I had to do it over again, I could have improved the tone in my comments. It cuts both ways.

4) Insistence that the reason you don't get it is because it is "hard"

Classic Example: "Oh, ha ha ha, I don't know if anyone can really understand real options without a Ph D understanding of economics, mathematics, and physics."

Commentary: This is worse than "you don't get it." At least with "you don't get it," you can go read the case studies and say "in light of my previous questions, my comments still stand."

With insistence on "hardness" you are left with nothing, because in order to engage you'd have to spend nine to fifteen years in graduate school.

What this person is really saying in this example is that no one is in a position to challenge them about their statements.

As someone once said, the true genius makes the complex seem easy to understand, while the charlatan makes the easy to understand appear complex.

5) Abuse of the Socratic Method

In the Socratic Method, one person assumes the role of "teacher" and asks a series of questions to lead the student to a conclusion. It bogs down when the student actually understands the subject well. When the student has not granted the teacher any authority or role as teacher, it can revert to verbal bullying.

Example:
Person A: "I'm curious, why do you do practice X?"
Person B: Detailed and exhaustive reasons to do practice X
Person A: "But doesn't that leave you open to risk X?"
Person B: Detailed and exhaustive risk mitigation plan for risk X
Person A: "But what about ..."
And so on.

I don't have a great counter for the faux Socratic method. One counter is to say "this seems a lot like the Socratic method. I'd rather not have that kind of a discussion. I will, however, respond to a position you take."

6) Aggressive questioning

Example:
Person A: "Why would you ever do that?"
Person B: Detailed and exhaustive reasoning why they do that
Person A: "Hey, no need to get defensive, I was just asking questions."

Commentary: When this is done, I think it's rarely done with intent. Person A actually means it when they say "hey, no need to get defensive" - they just might not realize that the very question they were asking was requesting a defense.

The counter to an aggressive question is another question, such as "why do you ask?"

7) Appeal to irrelevance

This one is more common in the real world. Examples:

In a private email, one person said they had a co-worker insult or mock them for "finding obscure blogs and testers that 'no one has ever heard of and then calling them testing experts'"

A real experience I had: I recommended a consultant to a group, and the hiring manager replied "Never Heard Of Him." After a couple more names, I asked who he had heard of, and he looked at me quizzically and said "shoot, I guess there aren't any real experts in software development, are there? Maybe that Fred Brooks guy, I guess."

Counter: For the first issue, which is really just a thinly-disguised insult, I'd reply with comedy to make the insult more obvious. For example: "Your fly is open." To the second, "Who have you heard of?" lets the other person realize that he, in fact, doesn't know anyone, so "never heard of him" is not a fair statement.

8) Changing the subject

This one is most popular with politicians: when asked one question, they respond to a different question.

Example On Video or Another One

Counter: Continue to re-ask the question until the subject answers it - or it becomes clear the other side will not answer it, with the implication being they can not answer it in a straightforward way. (Unfair questions with a hidden premise, such as "have you stopped beating your children yet?" should be dealt with directly; challenge the premise.)

Conclusions
A big problem with these tactics is that they often work. The challenged person is likely to look bad, be embarrassed, give up, and go away. As such, the out-of-line person is rewarded for this behavior. Things we reward, we'll see more of. If we want to see less of it, we've got to wade through the crap and press on.

Those are just a few behaviors I find unpalatable, try to avoid, and try not to reward. What are yours?

Monday, June 15, 2009

The Boutique Tester

Generations ago, craftspeople lived in the center of town, owned the building, and lived upstairs. They generally owned their own tools. Independent craft was such a part of their being that when it came time to pick up last names, they took the name of their craft: Cooper, Miller, Smith, or Carpenter.

A few hundred years later, when it was time for young Matt Heusser to start his career, that independent spark was all but gone - at least from software development. Oh, it still existed in some of the trades; you can still find an independent plumber. But even then the American people had begun to declare war on work; as Mike Rowe put it, we relegated that plumber in our minds to a 300-pound slob "with his butt crack hanging out."

And at the time (1997), it was very hard to be an independent software contractor. To distribute software you needed to press CDs, stuff them in boxes, and market them to large retail stores. The few people who were online were using dialup modems and wouldn't stand to download Win32 software. Building a web presence was expensive; you needed to build a server farm, rent a T1 connection, and hire an army of developers, DBAs, system administrators, analysts, and testers. The few craftsmen making shareware and open source software were hobbyists - it wasn't expected to pay your rent. Why, even the methodologies popular at the time pushed you toward an assembly line of specialists.

While you could work as an independent contractor for companies, the idea of making things for yourself just wasn't part of the scene. Huxley was right; the brave new world had come.

... time passes ...

Today it's 2009, and I again see a world where software craft is possible. Computers get faster and cheaper; now I can rent a box that will run PHP, stuck inside a colo, connected to the internet, for a hundred bucks a month. A majority of Americans have a high-speed internet connection, and the web has evolved to nearly match the functionality of traditional Windows programs. Free programming tools, at higher and higher levels of abstraction, combined with methods (like XP) that focus on the generalist, allow one person to be programmer, developer, tester (?) and system administrator at the same time.

All of a sudden, it does make sense to hire one guy (or maybe two) to write your website. Custom software development - and the craftsman - is back, baby.

And back big time. The Ruby on Rails movement alone is full of small companies like 8th Light, Obtiva, and Atomic Object that have generalists making custom software.

I believe this is a really good thing.

... but what about the tester? Why, the software tester doesn't make anything. The tester is just part of some assembly-line. That's a job that should just go away as we all become generalists, right?

Well gosh, I hope not. Sure, I've done development. I've done analysis. I've been a generalist responsible for everything. It's just that I enjoy testing. It's what I want to do.

So if we have found a space for the boutique developer, can we find a place for a boutique tester? And, if yes, what would that look like?

I believe the answer is yes and no. To compete as a craftsperson, the tester role will have to evolve. He'll have to be smarter, sharper, faster. In the boutique world, he will have to explain his services to people who are skeptical of such services and believe they can do it themselves. In the words of Harry Harrison, in his novel The Stainless Steel Rat:

It was easier in the old days, of course, and society had more rats when the rules were looser, just as old wooden buildings have more rats than concrete buildings. But there are rats in the building now as well. Now that society is all ferrocrete and stainless steel there are fewer gaps in the joints. It takes a very smart rat indeed to find these openings. Only a stainless steel rat can be at home in this environment.

So what would a stainless-steel (boutique) tester look like?

Imagine a development project that is outsourced to one of these boutique dev shops. The programming budget is in the area of at least $50,000, and there are also outside design-firm fees and internal costs. The total cost of the project is probably in the $100,000 range. (I'm not making this up; this is the typical budget range for a project at the consultancy Hashrocket, another boutique Rails shop.)

Now, imagine the project is halfway through. The customer begins to be concerned with functionality. This is a make-or-break project, they explain. Perhaps the customer is a media outlet, like NBC. They start to talk about how it "has to work" and legal implications. The development staff, a bunch of craftspeople, start to hear about contracts and clauses.

What does the CEO of a shop with SEVEN employees do now?

Hopefully, he tells the customer that the logical thing to do is to hire a tester; someone independent, who can make an assessment of the software. Someone they can air-drop in to mitigate risk for two to five percent of the development cost. After all, if the customer is so worried, why not spend $5,000 on a $100,000 project to mitigate risk?

This means the tester isn't going to moan that they were not involved early or insist on detailed documentation - they will have to actively contribute to the project right now.

Perhaps, over time, this service becomes so valuable that the development shop plans on using a tester as part of its risk-management strategy in general. Sure, the devs will do test-driven development and perhaps even automate story-tests for the customer. And the final layer - the pièce de résistance - is the air-dropped tester.

With a few shops to work with, it's possible that tester could create his own boutique test consultancy. There are already a few people who do this sort of thing; stainless steel testers in the maze.

Keep in mind - testing as a profession is not going anywhere. There will be plenty of testing roles in larger organizations in the years to come. But is a testing boutique possible?

I sure hope so.

Alan Kay, the man generally credited with inventing object-oriented programming, once said that the best way to predict the future is to invent it.

Let's go prove him right.

Friday, June 12, 2009

Risk-based testing and the Bowl of Fruit Problem

I've heard this term lately - Risk-Based Testing. The idea is, essentially, to prioritize your tests by risk, and do the riskiest (and most painful if it fails) things first.

If you think about it, that means finding the tasks that have the highest bang for the buck - and doing them first.
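As a minimal sketch of what that prioritization might look like in practice - the areas, likelihoods, impacts, and costs below are invented for illustration; real numbers come from your own risk analysis:

    # Risk-based test ordering: score each test area by likelihood of
    # failure times impact of failure, divided by the cost to test it,
    # and run the biggest bang-for-the-buck first.
    test_areas = [
        # (name, likelihood of failure 0-1, impact 1-10, cost in hours)
        ("checkout payment path", 0.6, 10, 2),
        ("login and sessions",    0.3,  9, 1),
        ("report formatting",     0.5,  3, 4),
    ]

    def bang_for_buck(area):
        name, likelihood, impact, cost = area
        return (likelihood * impact) / cost

    for area in sorted(test_areas, key=bang_for_buck, reverse=True):
        print(f"{area[0]}: {bang_for_buck(area):.2f}")

Nothing fancy - which is rather the point.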

Now isn't that just plain good testing?

Or, to put it a different way - can you think of a form of good testing that does not consider risk?

I brought the question to the twittersphere this morning and got some interesting replies. Ben Simo and Ron Jeffries pointed out that Acceptance Test Driven Development, and some implementations of TDD, often don't address risk.

Is it fair for me to call that "bad testing"?

Well ... maybe. It depends. It's probably time for me to introduce the Bowl of Fruit problem.

Imagine a Bowl of Fruit. It has a lot of things in it. It's got some bananas, some grapes, some oranges. We all like the bowl of fruit.

We go to Fruit conferences. We get up in front of people and talk about the Fruit. We argue a lot.

And, suddenly, I wake up one morning and realize that you are interested in grapes and I prefer bananas.

That is to say - we keep using this word 'test', but we get different value from it.

Some people value testing as a form of risk management - as an investigative activity to enable decision makers. Others are more interested in using tests for a different purpose.

For example, Acceptance-Test-Driven-Development folks might be more interested in exploring and communicating requirements than they are in critical investigation. Developers using TDD might be more interested in enabling refactoring or to help explore the design or API of the software.

In both those cases, the person is talking about 'testing' but not particularly excited about risk management. Oh, they might be interested in risk management, and appreciate it as a side effect, but it's not at the top of the stack. They are interested in the grapes, not the bananas.

One way to tell is by the language used, as inevitably you'll hear something like "... and it's not just testing, you also get (benefit x)."

Nothing's wrong with that, except perhaps using "just" as a pejorative, which minimizes testing's impact. I, personally, am interested in "just" testing - testing for its own sake - as a part of the value proposition of delivering working software, which is the super-goal. (Or making money, having a fulfilled life, and other meta-goals.)

But when we focus on other attributes of the bowl-of-fruit, we shouldn't be surprised that risk-management isn't covered well. So, you might say that aspect of software testing isn't covered well - and that aspect (the one I care the most about) - is done poorly.

My take-aways:

1) One thing I think the "risk based testing" movement /has/ done is move the conversation toward making explicit and conscious trade-offs about risk, instead of making them implicitly. Another is to provide tools to people who might not otherwise have them. In that, I think it's a good thing.

2) Instead of arguing about approaches or words, we can instead start by focusing on the goals of testing. If someone has different goals than I do - well - of course they'll come up with a different testing strategy. And that might be just fine.

Note 1: Thanks to my colleague and friend Sean McMillan, who introduced me to the Bowl of Fruit problem with regards to software requirements. The original idea, as far as I can tell, came from Collaborative Game Design Theory.

Note 2: Please don't misread this to mean "Heusser thinks ATDD or TDD are bad testing." When, as a developer, I've used TDD, a large portion of what I used it for was risk management. As a tester or PM, when we used ATDD, a large portion of what we used it for was risk management. But then again, I am actively interested in risk management. Some people have ... less interest.

Tuesday, June 09, 2009

On Yard Work

This weekend I spent a fair amount of time working in the backyard.

When I work in the yard, I use different tools depending on how much work I'd like to do. I may start by bagging trimmings, but soon I'll need the shears, then the chainsaw. Likewise, when I take out weeds, I start with a spray, move to the weed whacker, and eventually the lawn mower.

Now, the lawn mower and the chain saw - these things make me more effective. They make me faster. They extend my reach and give me power. Yet I wouldn't call them "automated landscaping." That would be silly. Nor would I call a pair of shears, which is a form of machine, "partial automation."

They are tools. I use tools to get the job done. No one talks about "automated landscaping" because it's a silly idea.

Yet ... have you heard of the Roomba? It is a vacuum cleaner that, supposedly, you can simply set in your room, press a button, and walk away. It'll do the vacuuming automatically.

And they even mow lawns.

Oh, of course, you have to bury guide wires in the ground to make sure the machine doesn't wander off into the street, and likewise to make sure it doesn't mow over your tulips. And I'm sure it's pretty stinky about cutting in areas with rocks and other corners.

That's about how I feel about test automation:

1) If /you/ are the one doing the driving, it's possible to use tools to extend your reach or go faster,
2) If you are not - if you want to have the testing work unattended ... well, it's possible. You have to spend a lot of effort doing plumbing tasks. If you have a GUI, the end might not look so great - so make sure you personally inspect the boundaries.

That's just my quick thought on a Tuesday morning. What do you think?

UPDATE: Last week I sat through Scrum Training with Ron Jeffries and Chet Hendrickson. It was awesome - amazing - and I'm happy to do a blog post on it. Ron and Chet made a serious and reasoned case for Acceptance Test Driven Development - to create acceptance criteria for every story and automate those tests as regression tests. They also provided specific techniques to make the cost of that plumbing cheaper. I grant that if you don't have a GUI, or if the wiring on the GUI is trivial and you can "get behind" it, the ROI of Automated Acceptance Testing might be reasonable in many situations - you might just want that Roomba-for-the-lawn. AND you'll still want to inspect some things personally. C'est la vie.
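For readers who haven't seen the practice, here is a minimal sketch of acceptance criteria automated as a regression test when you can "get behind" the GUI. The story and the tiny Cart class are my invention for illustration, not an example from the training:

    # Story: a shopper can add items to the cart and see the right total.
    # The acceptance criteria are captured as an automated regression test
    # (runnable directly, or discoverable by a tool like pytest).

    class Cart:
        def __init__(self):
            self.items = []

        def add(self, name, price):
            self.items.append((name, price))

        def total(self):
            return sum(price for _, price in self.items)

    def test_shopper_sees_added_items_in_total():
        cart = Cart()
        cart.add("paperback", 12.00)
        cart.add("bookmark", 2.50)
        assert cart.total() == 14.50

    if __name__ == "__main__":
        test_shopper_sees_added_items_in_total()
        print("acceptance criteria pass")

Run on every build, a suite of these is the regression safety net Ron and Chet were arguing for; the plumbing cost shows up when the "get behind the GUI" part isn't this easy.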