Schedule and Events

March 26-29, 2012, Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference for the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At Liberty; available. Contact me by email:

Wednesday, March 28, 2007

To XP or Not?

I just posted this to the Xtreme Programming Yahoo group, and thought it was worth sharing. I start out quoting David Winslow ...

I want this project to succeed and I want to do the right thing. With that in mind, I am pushing for us to develop an agile XP approach to the project. However, I am meeting a lot of resistance from the stakeholders, as they don't understand the XP process.

My suggestion is this:

Don't call it XP

Don't call it Scrum

Instead, call it "delivering small pieces of finished work on very tight timeframes" or, if you must, "lean software development" (the Toyota approach).

Pair Programming, Test Driven Development, and refactoring are just plain good engineering practices. You don't need to ask permission to do them.

-----> I once worked on the "after project" where the "before project" took three years, technically worked, but didn't meet the customer's needs.

At the first "requirements elicitation" meeting, I pulled out index cards and started taking notes. Some people felt uncomfortable, and I replied "After I finish these cards you will review them and sort them - and I will DELIVER the first card on the stack, COMPLETED, in a month - possibly with more."

All of a sudden, the room got very quiet. Then the customer/manager replied "ok then. Let me see those cards."

The rest, as they say, is history ...

Wednesday, March 21, 2007

Anti-intellectualism in Medicine ...

From last week's US News & World Report:

I had become aware of misdiagnoses of family and friends. I was teaching three years ago, and I found that many of the students were very smart. But they were latching on to these algorithms, making very quick judgments, and following cookbook-type recipes for diagnosis and treatment. I thought to myself, "How can I teach them to think better? How is it that I have made misdiagnoses, and senior colleagues of mine sometimes miss very important diagnoses?" To do that, I had to understand how doctors think.

Read the entire article here.

AST has membership cards!

In an email last week, I jokingly referred to myself as a "Card carrying member of the Association for Software Testing."

I got a membership card in the mail the same day. Number 137, to be exact.

The cards are black-and-white with the grasshopper logo, and a "member since" date.

Oddly enough, in 1996 I founded an honor society for cadet officers called the "Ira C. Eaker Association", or ICEA. We had membership cards too; you can see one on-line here.

Sadly, the ICEA has been out of business for several years now. The AST basically fulfills the purpose we set out for ICEA, and, actually, has pretty much the same people on staff.

Ok, Ok, maybe the AST doesn't need a color membership card - although the ICEA ones were nice. Maybe a challenge coin instead? :-)

Here's the serious part, folks: All of those little things are signs of a profession. That is to say, a professional tester is someone who says out loud (professes) that "tester = me."

Do you profess to be a tester? How would other people know?

Testing Challenge - II

By far, my Testing Challenge post has been my most popular; traffic literally doubled even though I had slowed my pace of blogging.

Despite the increase in traffic, I got very few replies; it makes me wonder if some people were coming back to the site often, just to check for replies ...

It's taking some time, but replies are starting to dribble in. In addition to the comments, Elisabeth Hendrickson has a post on her blog, and Michael Bolton put one up as well. There's even been a little discussion on the list for the Association for Software Testing.

Still, I'd like to see a few more replies before I spill the beans. I know there are some regular readers who haven't replied ... are you up to the challenge?

Wednesday, March 14, 2007

Tuesday, March 13, 2007


I'm currently working on a reply to the testing challenge; I think you deserve more than my current, sloppy-English response.

In the mean time, a couple of very interesting updates:

1) Last week I read Brave New World. The link leads to the free, on-line edition of this classic, published in 1932. The novel is dystopian fiction; it portrays a world that is highly socially engineered for happiness - and completely devoid of innovation, intellectual development, and meaning.

A couple of my favorite quotes:

The Savage was silent for a little. "All the same," he insisted obstinately, "Othello's good, Othello's better than those feelies."
"Of course it is," the Controller agreed. "But that's the price we have to pay for stability. You've got to choose between happiness and what people used to call high art. We've sacrificed the high art. We have the feelies and the scent organ instead."

- Huxley, Brave New World

"Yes; but what sort of science?" asked Mustapha Mond sarcastically. "You've had no scientific training, so you can't judge. I was a pretty good physicist in my time. Too good–good enough to realize that all our science is just a cookery book, with an orthodox theory of cooking that nobody's allowed to question, and a list of recipes that mustn't be added to except by special permission from the head cook. I'm the head cook now. But I was an inquisitive young scullion once. I started doing a bit of cooking on my own. Unorthodox cooking, illicit cooking. A bit of real science, in fact." He was silent.
- Huxley, Brave New World

So, fifty years before I was born, you have Huxley worried about the cult of the stable, predictable, and repeatable. Interestingly enough, Huxley was a European who wrote the book on a trip to America, largely out of concern over what Henry Ford was doing up in Detroit ...

2) This week I read The Great Divorce by C.S. Lewis. I'm afraid that this one came after Mickey Mouse, so the copyright is enforceable. If you want to read it, you'll have to check it out of the library. It is a quick read; you can probably devour it in one sitting, and probably will. The book depicts how the choices we make define who we are - and who we will be. The book is inspired by Lewis's Christian faith, but it is neither quite Protestant nor quite Catholic. I think, most of all, Lewis wanted his audience to think.

Oddly enough, The Great Divorce was designed as a response to William Blake's The Marriage of Heaven and Hell. In that book, Blake used the term "The Doors of Perception," which so inspired Huxley that he named a book after it. (And Huxley wrote the first book in this post. Odd ...)

In any event, these two books, "Brave New World" and "The Great Divorce," are going up on my list of books for intellectual development, right up there with Starship Troopers, Atlas Shrugged, The Chronicles of Narnia, Orphans of The Sky, Phule's Company, and Mere Christianity.

That was my quick, sloppy list, and it is a very odd list indeed. I should think on that ...



Thursday, March 08, 2007

Post to the XP Discussion List

I just posted this to the ExtremeProgramming List; I thought it was worth sharing ...

>Fortunately I think this is changing rapidly, and there is a growing
>body of experience on how to apply UCD-ish methods agilely to help with
>the "Building the Right Thing" part.

Heh. I know what you mean. It seems to me that most requirements training is either:

A) Definitional. You leave having memorized a bunch of words, and this decreases some communication friction, but you may not know how to apply them. For example, the students learn the difference between implicit and explicit requirements, but do they really know how to help pull ideas out of someone's head and get them on paper? Probably not.

B) Process-Focused. Do X, Y, and Z. "First interview the customer, then write the requirements, then have a review step" doesn't tell anyone how to do those things *WELL*. In my experience, a lot of the process literature focuses on change control, like the planning game, and not on the requirements "gathering" part.

C) Template-Driven. "Just fill out this template, and you'll be fine." This is all kinds of bad for all kinds of reasons. Joel Spolsky does a pretty good job of describing this in this article: (Scroll to the bottom)

In my honest opinion, there is a real lack of good requirements advice in the software literature. Jerry Weinberg has a good book on requirements, and he used to even do workshops, but he is now semi-retired and focused on writing, not software requirements.

So, me? I am doing something about it.

I will be presenting a workshop on how to figure out what needs to be built and communicate it in Indianapolis next month:

If you are interested, I am now booking for 2008 as well ...


Matthew Heusser

Thursday, March 01, 2007

A Testing Challenge

As a fan of incremental/iterative methods, I like the idea of test automation.

In theory, everything should be retested every release, but with two-week iterations, that simply is not going to happen. With test automation, we can at least have some confidence that a release didn't introduce any major regression errors outside of the features obviously being changed.

So, it would be really nice to have a computer with a big button that says "test" that runs all the tests and comes back with a green (or red) light. For unit tests, I use these in spades.

The problem is customer acceptance tests. There are some tools for automating acceptance tests, most notably FIT and FitNesse.

FIT and FitNesse take a set of inputs and compare the results against a set of expected outputs. To do this, they have 'fixtures' that call into the application, invoke a function, get the results, and compare those to the expected output. Since FitNesse is written in Java, it can have 'hooks' into your application -- if that app is also written in Java or another language that FitNesse supports.

This can work for standalone applications; logical silos that take an input, transform it, and provide an output.
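The table-driven idea above can be sketched in a few lines. This is a hedged illustration in Python (FIT and FitNesse themselves are Java tools); the `discount` function and the table values are made up for the example, not part of any real fixture API:

```python
# A minimal sketch of the FIT idea: a "fixture" feeds each table row of
# inputs into the application code, then marks the expected output
# green (pass) or red (fail). All names here are illustrative.

def discount(order_total):
    """Stand-in for the application code a real fixture would call."""
    return 0.10 if order_total >= 100 else 0.0

def run_table(rows):
    """Each row is (input, expected); return a 'green'/'red' result per row."""
    return ["green" if discount(total) == expected else "red"
            for total, expected in rows]

results = run_table([(50, 0.0), (100, 0.10), (150, 0.10)])
```

In a real FIT setup, the customer edits the table of inputs and expected outputs, and the programmer writes only the fixture glue.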

Now the challenge:

----> Lately I've been working with IS Shops, not software companies. These are organizations that support a traditional, brick-and-mortar business. Instead of producing standalone apps, these organizations are more often integrating two applications.

The software being tested isn't the app itself (that was commercial off-the-shelf) but the data bridge between the apps.

For example, one application pulls data out of an ERP system, stores it as a flat file, and imports it into a financial system. The requirements are "Make the Financial System LOOK LIKE the ERP System for accounts A, B, and C."

Another application pulls data out of the ERP system and populates the data warehouse. A third creates a flat file that is sent over to a trading partner; "Make sure the trading partner knows all our gold members, the member IDs, and eligibility dates" are the requirements.

Think about it - the requirement is to take one set of black-box data, and import it into another black box. We can test the data file that is created, but the real proof is what the second system accepts -- or rejects.
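A "make system B look like system A" requirement boils down to a reconciliation check between two black boxes. Here is a hedged Python sketch; the dictionaries stand in for whatever queries you'd actually run against the ERP and financial systems, and the account names are made up:

```python
# Reconciliation sketch: compare the balances each system reports for
# the accounts in scope, and surface any account where they disagree.

def reconcile(erp_balances, financial_balances, accounts_in_scope):
    """Return the set of in-scope accounts whose balances disagree."""
    return {acct for acct in accounts_in_scope
            if erp_balances.get(acct) != financial_balances.get(acct)}

erp = {"A": 1200.00, "B": 450.50, "C": 0.00}
fin = {"A": 1200.00, "B": 450.50, "C": 10.00}
mismatches = reconcile(erp, fin, {"A", "B", "C"})
```

The hard part in practice isn't this comparison; it's getting trustworthy reads out of both black boxes at a comparable point in time.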

And, no offense, but for some of these Apps, FIT isn't a very good fit.

First of all, the test databases used are refreshed every three months from production. That means that you either have to find test scenarios from live data (and hope they don't change next time you refresh) or re-enter every scenario in test every three months.

Now, take the trading partner example. The best you can do within your organization is to test the file. The interface might take three hours to run; then you grep the file for results and examine them. You'll have to write custom fixtures to do this, and your programming language isn't supported by FitNesse. Or you could write a fixture that takes a SELECT statement to count the number of rows that should be generated in the file, run the interface, and compare.
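That row-count fixture can be sketched simply. This is an illustration only: the pipe-delimited layout, the single header row, and the function names are assumptions, not the format any real trading partner uses:

```python
# Row-count fixture sketch: count the data rows in the generated flat
# file and compare against the count the source SELECT returned.

def count_extract_rows(lines, header_rows=1):
    """Count data rows in a flat-file extract, skipping any header rows."""
    return max(len(lines) - header_rows, 0)

def counts_match(extract_lines, source_row_count, header_rows=1):
    """True when the extract has exactly as many rows as the source query."""
    return count_extract_rows(extract_lines, header_rows) == source_row_count

extract = ["MEMBER_ID|ELIG_START|ELIG_END",
           "1001|2007-01-01|2007-12-31",
           "1002|2007-02-01|2007-12-31"]
ok = counts_match(extract, 2)
```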

Of course, a programmer is going to have to write the SELECT statement. Is it a valid acceptance test?

Or you could make the row-count fixture approximate - "between 10,000 and 15,000" - customers could write this, and it guarantees that you didn't blow a join, but not much else.
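The approximate check is about as small as a test gets, which is exactly why a customer could own it. A sketch, with the band limits as placeholder numbers:

```python
# Approximate acceptance check: the extract's row count must land in
# the agreed tolerance band. Catches a blown join (row explosion) or
# an empty file, but says nothing about row contents.

def row_count_in_band(row_count, low, high):
    """True when the count falls within the agreed tolerance band."""
    return low <= row_count <= high

plausible = row_count_in_band(12_345, 10_000, 15_000)
blown_join = row_count_in_band(150_000, 10_000, 15_000)
empty_file = row_count_in_band(0, 10_000, 15_000)
```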

You could write code that accesses the deep guts of the application, turning it sideways to generate a single member at a time, thus speeding up the acceptance test runs to a few seconds. That's great for the feedback loop, but it's more of a unit test than an acceptance test.

You could suggest I re-write the whole thing to use web services, but that introduces testing challenges of an entirely different kind. To be frank, when I have a problem and people suggest that I re-write the whole thing without recognizing that it would present an entirely different set of challenges, it's a sign to me of naiveté.

I submit that all of these would be a significant investment in time and effort for not a whole lot of value generated.

So, I still want to write customer acceptance tests, but I'm not sure this is the right paradigm to do it. I also have a handful of tricks and techniques I have used over the years to make this easier. I will reveal them in a future post, but in the mean time, here's my challenge:

What would you suggest to solve this puzzle?

I should add that I don't think this is a trivial puzzle; at least, more than half of the people I ask this of give an answer that I believe to be unsatisfactory. Can you do better?