Schedule and Events

March 26-29, 2012, Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference for the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At Liberty; available. Contact me by email:

Friday, May 23, 2008

Unit Testing ++

(Yes, I need to follow up on tech debt. Still ...)

Not too many years ago, Brian Marick pointed out that you can tell when a product has "crossed the chasm" into the mainstream when the first non-introductory books start coming out.

That was 2005, and he was talking about Test Driven Development.

I just found "The Art of Unit Testing", a book by Roy Osherove to be released in November 2008. The first chapter is available on-line.

This is going to be a good book, and I am super-excited about it.

Sidebar: For years, I've been saying that the developer-test community has a great deal to learn from the traditional test community. I've blogged and published about it.

The book introduces unit testing with different definitions, starting with this definition for "classic unit test":

A unit test is a piece of a code (usually a method) that invokes another piece of code and checks the correctness of some assumptions afterward. If the assumptions turn out to be wrong, the unit test has failed. A “unit” is a method or function.

This is introduced with no references. No Glenford Myers, no Boris Beizer, no Bill Hetzel. Just a statement above about how Kent Beck invented unit tests in Smalltalk and popularized them for other languages.

Let me be clear: I think this is a cool idea. I think it'll be a good book. AND I think the (clueful) traditional test community has a ton of lessons learned that we can share with developer/testers.

Does it all apply? Well, no. Some of it will. Again, let's not throw the baby out with the bathwater.

Wednesday, May 21, 2008

My position on Tech Debt - I

For the workshop on technical debt, everyone has to write a position paper, give a case study, do a talk, OR do a tutorial. ("One of the above.")

That includes me.

So I have decided to write two position papers, and, maybe, show some of the process on Creative Chaos.

The first position paper will attempt to describe the systems issues in technical debt - why it happens. The second will be about communication and metrics.

Here's my first thought about the systems issues:

Some things to consider about tech debt:

1) Our educational system, for the most part, is built on one-off projects. Students build a program that inventories their CD collection. It doesn't work well - it even has some bugs - but it is good enough to demo and get a B+. I would go so far as to say that the majority of undergraduate programming assignments can get hacked out in a weekend with a lot of pizza and caffeine. And, if not, well, you can always turn in some absolute garbage, get a D for the project, a C for the class, and a B- overall for the semester.

This means that students never have to live with the pain of maintaining the pile of mud they write. Thus, our first exposure to programming actively rewards us for tech debt.

2) Technical people absolutely stink at communicating about tech debt. First, we get grandiose ideas about generalization and abstraction that management does not (and should not) trust. So when we whine about how we need more time to "fix this right", what is management to think but that we are just creating yet another castle in the sky?


3) Technical people have more power than they realize. The easiest way to prevent a clever hack is to not make it. Sure, you might not be the "go to" guy anymore, but recognize what the go-to guy gets: the toughest projects. That, and being pigeonholed into a technical career path.


4) Non-technical people don't realize what they are asking. Sure, the project manager asks if there is "anything you can do" to go faster, and has no problem with you taking "shortcuts" - but when the code falls apart in production, is he going to stand up and say "I authorized that, I knew it was risky, it was my choice"? Or is he going to say "I asked him to be fast -- not irresponsible"? Catch the difference there?

So don't be irresponsible. See point #3.

5) For the most part, North American business is optimized to create short-term results at the expense of the long term. In other words, when you hear "Just do it quick, we'll do it right later", it is fair to ask if the company ever - *EVER* - actually does it later. If the answer is 'no', recognize that "just do it fast for now" means "just do it hacky for always."

Finally ...

6) A lot of old-school, "get everything right up front" people are going to use tech debt as a tool for insisting on Big Design Up Front. That is not my goal. I see BDUF projects creating a bunch of documentation and plans and things that need to be updated. That updating is a drag on the project - or, in other words, interest. Compared to other schools of big, heavyweight design, my approach leans toward "I could do the whole dang project before you finish your design."

One easy way to move fast is to travel light.

Which leads to -

7) Just Do It. If all the time spent whining and complaining that we need to "do it right" were thrown into just doing it - if for just a couple of weeks we gave up coffee breaks and the watercooler and spent time just plowing through code - we could get it done.

Overall conclusion: Courage is a cardinal virtue for a reason.

Next question: How do we change the system so that courage (and doing the "right thing") is encouraged and rewarded?

----> That is my rough and sloppy 1.5th draft. If someone were to attend the workshop with just this, it would be the minimal amount of effort required. Of course, I intend to improve it.

But who cares about that. What do you think of my talking points?

Friday, May 16, 2008

The Tri-Cities in Washington State

My mother and father met when she was teaching and he was working in the Tri-Cities area, and as a child I spent many happy summers there with my grandparents.

This morning, I saw that a company in the Tri-Cities is currently hiring an Agile Tester, and they are covering relocation!

Here is a link to the post on the agile-testing yahoo group.

Washington is an interesting place; you've got skiing just a couple of hours away from rain, just a couple of hours away from desert. The Tri-Cities are pretty much desert, but they have an amazing water pipeline; everyone has a sprinkler system in the yard. The warm, arid climate is extremely good for you.

If you apply for the position, and it works out, please let me know. If you are a real testing geek, you can go to the corner of Leslie and Gage Road and say "That is the same 7-Eleven where Matt Heusser drank Slurpees and played Ninja Gaiden in 1989!" and "That's where the fireworks stand used to be!"

Then again ... maybe you won't.

Still, it appears to be a good company, and because they are offering relocation to people qualified to work in the US, I thought it would be interesting to a large percentage of Creative Chaos Readers.

(Keep Scrolling Down! I am still looking for feedback on the encyclopedia article.)

Thursday, May 15, 2008

Testing Encyclopedia - I

Meta: I am currently working with Chris McMahon on a series of short articles on testing terminology. Of course, we covet your feedback, and would like to open them up to the world for review. Your comments are welcome ...

Each month, we take testing jargon and put it in plain English, allowing you to respond to questions like "Did you exhaustively test this?", "Are we doing an SVT after our BVT?", or "Does the performance testing pass?" Our first installment is on fundamental concepts in software testing.

The Impossibility of Complete Testing
Imagine the simplest of computer programs: one that takes a given date and generates the next calendar day; we call this GetNextDate. Assuming the software works for years 4000 BC to 4000 AD, there are 365*(4000+4000), or 2.9 million possibilities – and that is assuming the dates don't include a timestamp, and that does not include any error cases.
The typical computer program is much more complex than GetNextDate(), and quickly spirals to billions and trillions of possible test scenarios. One of the central challenges of software testing is that we have a finite amount of time to run an effectively infinite number of possible tests. As Bill Hetzel once put it so well, "The only exhaustive testing there is is so much testing that the tester is exhausted!"
Many approaches and techniques are designed to overcome this impossibility – to determine the cheapest and most valuable tests, and how to use the information those tests provide to assess the software.
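The back-of-the-envelope count above is easy to verify. A quick sketch (which, like the article, ignores leap days and timestamps):

```python
# Rough count of valid inputs for GetNextDate: one test per calendar day,
# 365 days per year, across the 8,000-year span from 4000 BC to 4000 AD.
years = 4000 + 4000
cases = 365 * years
print(cases)  # 2920000 -- roughly 2.9 million, before error cases
```

And 2.9 million is the floor: add time-of-day, malformed input, and locale formats, and the space explodes well past what any team could run.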

The Mission Problem
Once your team accepts the impossibility of complete testing, the next logical step is to identify the goal, or mission, the team can strive toward. Two common missions are the bug hunt – finding the most valuable bugs quickly – and assessing the readiness of the software to ship.
While they may sound similar, those two missions can lead your team in different directions. For example, in a bug hunt, the team may go after the most broken and defect-laden modules, while an assessing team might state that the module is known defective and move on to assess other modules.
Other missions include complying with documented process or documenting work-arounds prior to shipping the software. Over time, a team may have many different missions. An implicit or vague mission could mean that your team is not doing what management expects, which can lead to conflict, friction, and strife.

The Oracle Problem
In order for a test to pass, the tester has to know what the "right" answer is. For a complex financial, graphical, or database system, the right answer might not be obvious. The classic example of this is trying to test a word-processor – how do you know that the 12-point font on your screen is actually 12-point?
If you have documented requirements, these can serve as an oracle. Other popular oracles include "the product owner said so", "the developer told me that is correct", "that is the way the competing product we are supposed to be compatible with does it", and the dictionary.
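One common oracle not listed above is a trusted reference implementation. As a hypothetical sketch (not from the article), a hand-rolled leap-year check can be verified against Python's standard `calendar` module, which serves as the oracle for the "right" answer:

```python
import calendar

def is_leap(year):
    # implementation under test: the Gregorian leap-year rule
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# the standard library is the oracle: compare answers across a wide range
for year in range(1, 3001):
    assert is_leap(year) == calendar.isleap(year)
print("all years agree with the oracle")
```

The catch, of course, is that the oracle itself must be trustworthy; an oracle with the same bug as the code under test will happily agree with it.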

Equivalence Class Partitioning (ECP)
ECP is an extremely popular way of taking the problem of complete testing and limiting it to a reasonable set of test cases. The idea behind ECP is to break all inputs down into classes, then test at least one of each class. In our GetNextDate example, we might break the inputs into:

The First day of a New Month
The Middle of a Month
The Last Day of a Month
The Last Day of a Year
The 28th of February, Non-Leap-Year
The 28th of February, Leap-Year
The 29th of February, Leap-Year

We could also add test cases for BC, AD, and the boundaries between the two. This technique allows us to go from millions of test cases to a few dozen, but it is not perfect. Equivalence Class Partitioning does not address malformed dates such as the 32nd of January or the -1st of April – nor will it catch bugs created by local optimization. (More about that next month.)
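Those partitions translate directly into one representative test each. A minimal Python sketch, assuming a GetNextDate built on the standard library (which only handles AD years, so the BC classes are out of reach here):

```python
from datetime import date, timedelta

def get_next_date(d):
    # the function under test: return the next calendar day
    return d + timedelta(days=1)

# one representative test per equivalence class
assert get_next_date(date(2008, 3, 1)) == date(2008, 3, 2)    # first day of a month
assert get_next_date(date(2008, 3, 15)) == date(2008, 3, 16)  # middle of a month
assert get_next_date(date(2008, 4, 30)) == date(2008, 5, 1)   # last day of a month
assert get_next_date(date(2008, 12, 31)) == date(2009, 1, 1)  # last day of a year
assert get_next_date(date(2007, 2, 28)) == date(2007, 3, 1)   # Feb 28, non-leap year
assert get_next_date(date(2008, 2, 28)) == date(2008, 2, 29)  # Feb 28, leap year
assert get_next_date(date(2008, 2, 29)) == date(2008, 3, 1)   # Feb 29, leap year
print("seven classes, seven tests")
```

Note that the malformed-date classes (the 32nd of January, the -1st of April) never even reach the function here: `date()` rejects them at construction, which is itself a design decision worth testing.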

For an in-depth introduction to testing concepts, we recommend the Fundamentals of Testing course offered by the Association for Software Testing:
Future articles will cover testing process, test automation, exploratory testing, and coverage in more depth.

Thursday, May 08, 2008

Professional Service Firms and Twitter

No, really, those are two different things.

Yesterday I read this article on solving the IT turnover crisis. The basic idea is to look at how professional service firms like Ernst & Young do their hiring and staffing.

I've worked with quite a few people who spent time at that type of firm. Basically, everyone is running around, working very hard, collecting lots of billable hours, trying to make partner. A few people make partner, but many more decide it isn't worth it after a few years and bail out.

At those firms it is expected that less than half of your peers will still be there in five years. For that matter, having a big accounting firm on your resume is considered very valuable; so you can travel and work for a big firm as a consultant when you are young, then settle down into a corporate job once you have more responsibilities. (Mortgage, family, and so on.)

Offhand, I can only think of a couple of software houses that work this way - ThoughtWorks and Object Mentor. ThoughtWorks in particular has produced people like Jason Huggins (co-creator of Selenium, now at Google), Steve Freeman (co-creator of Mock Objects, now independent), Simon Stewart (author of WebDriver, now at Google), and Chris McMahon (inventor of the testing heuristic "Don't test for blocking conditions", now at Socialtext).

ThoughtWorks even has an alumni blog. "Look at what the smart people who used to work for us are doing now!"

Overall, to me, the model makes sense - work really hard for us for a few years, build a reputation for yourself, then either go into the world and succeed, or, if you can bring in clients and don't mind traveling, stick around and do very well.

I've long believed that companies that say "Document in case you get hit by a bus" really mean "document in case you get hit by a better job offer."

It sounds like ThoughtWorks is at least one company in our field that manages that way.

I wonder what the world would be like if we saw a lot more of that?

Secondly, about Twitter. I just joined Twitter, the lightest-weight-est blogging format on the planet. Posts are limited to 140 characters in length and are generally one sentence long. You can see my profile here. Yay!

Tuesday, May 06, 2008

Four Testing Strategies

I've spent a good deal of time lately thinking about how we frame the problem of software testing - and how we solve it. It impacts how we see the world, and how we treat each other. Over the weekend, I came up with four fundamental strategies in software testing, which I considered writing up as a blog post.

The thing is, blog posts are one-way; I dump a bunch of stuff at the end. Sometimes, if a comment is particularly insightful, it goes in the UPDATE section at the bottom. Or, if a magazine will let me, I might put a first draft of an article here and incorporate your comments.

What if we all could contribute to such an article? What if we could add, remove, update, delete - all version controlled, with possibly even a back story?

There's a tool for this - it's called a wiki. Oh, sure, Wikipedia is incredibly popular, but there are many, many wikis that delve deeply into a specific content area. Rahul Verma, an Indian tester, has even made a free wiki for software testers - the Testing Perspective Wiki.

So, instead of beating myself up over a perfect article, I put up a short piece on the Testing Perspective Wiki, titled Fundamental Strategies in Software Testing.

Is it perfect? Certainly not! Why, it doesn't even have references yet. Have I missed a few common strategies? Probably. That is where you come in - you can create an account on the wiki, sign in, and add or change content to make this a stronger article. Or add new articles to make it a stronger reference.

In fact, the Testing Perspective Wiki is basically a clean slate. Besides a couple of articles, the whole wiki is open.

So please, check out my little piece on Fundamental Strategies ... then leave your own.

If the 1,000-odd monthly readers of this site were to each write one page of text on testing and review two more, we'd essentially have created a book for the community.

Wouldn't that be a nice thing to do for the world?

Thursday, May 01, 2008

The Wikitest Framework

UPDATE: I want to do a post on the importance of limiting ambiguity and making sure you have as much meaningful consensus as possible - a counterbalance to my last post. Still, that's going to be a whopper. I also want to respond to some of Shrini and Ilya's comments. Again, those will be long. So, here's something in the meantime.

I put this post out to the Agile-FTT list this morning, and I thought it was worth sharing here:

...I think it's about time I talked about my new gig. My company, Socialtext, produces enterprise wikis that are secure, hosted if you want, appliance-delivered if you don't. We have a test framework called wikitests that we give away free. Wikitests are a non-technical-user-friendly framework that sits on top of Selenium and enables you to express tests on a wiki page that is freely editable and version-controlled.

With the hosted product, setting up wikitest is as easy as making a page like this:

(New Page - "Test Shopping Cart for Foo")
| open_ok | |
| type_ok | search | foo bars |
| click_ok | search-btn | |
| text_like | Search Results for 'foo bars' (24) |
| click_ok | link=Foo Bars 8 Oz. Size |
| text_like | $10.00 |
| text_like | case of 24 |
| click_ok | link=Checkout |
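Each row in a table like that is just a keyword, a target, and a value - the same keyword-driven shape FIT-style tools use. A hypothetical parser sketch (my illustration, not Socialtext's actual code) shows how little structure there is to learn:

```python
def parse_wikitest_row(line):
    # split a pipe-delimited wikitest row into (command, target, value)
    cells = [c.strip() for c in line.strip().strip('|').split('|')]
    cells += [''] * (3 - len(cells))  # pad rows that omit trailing cells
    return tuple(cells[:3])

rows = [
    '| type_ok | search | foo bars |',
    '| click_ok | search-btn | |',
    "| text_like | Search Results for 'foo bars' (24) |",
]
steps = [parse_wikitest_row(r) for r in rows]
print(steps[0])  # ('type_ok', 'search', 'foo bars')
```

That flatness is the point: a non-programmer can read, edit, and version a test page without ever seeing the Selenium calls underneath.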

Now, I don't claim this tool will solve all your problems. It forces someone to actually write specific tests tied to GUI components, probably using a tool like Firebug. If you want to do quick tests, stress tests, or load tests, it will probably not provide a high ROI. I do claim that it's a nice mixture of the FitNesse feel with Selenium keyword-driven goodness. (I am not a sales guy, I am a test lead, fwiw.)

Because it's based on Selenium RC, you can fire up the server and watch the screen scroll by - you can also set the speed. This means that if the GUI is tied to your computer, you can watch the test fly by, which can be helpful in finding rendering errors.

I'm going to work on a demo for it, hopefully next week. In the meantime, if you'd just like to explore a cool, free, hosted wiki, there is a free trial version on

Finally, Al, my company, Socialtext, is a Palo Alto-based company currently seeking a product manager. Shoot me a resume.

--matt heusser