Schedule and Events



March 26-29, 2012 - Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference of the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At Liberty; available. Contact me by email: Matt.Heusser@gmail.com

Thursday, November 02, 2006

The "Correct" View of Software Testing

Just after a wonderful birthday lunch at Logan's, I was enjoying a leisurely drive back to the office today when the subject of software testing came up. (Go figure.)

Dave comes from an extreme programming background and was talking about automated unit tests. He admitted that automated unit tests are generally not sufficient to test a product effectively, but suggested that they are a great place to start.

MaryAnn and Kristy come from more of a user-and-use-case-driven perspective. They suggested documenting how the customers will use the software and testing to verify that the standard results make sense - recognizing that complete testing is impossible. (No, really, it is.)

I advocated rapid software testing and exploratory testing.

It was great fun, but after a few minutes I realized that we weren't learning a whole lot from each other. We each had interesting things to say, but we had made up our minds and weren't changing them - at least not much.

Enter the epiphany.

Let's step back for a moment. Developer-facing tests are a way to find information about the product under development. They provide very rapid feedback, telling the developer whether the software does what he or she expects.
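
To make that concrete, here is a minimal sketch of a developer-facing test in Python. The shipping_charge function and its numbers are hypothetical, invented purely for illustration:

    import unittest

    # Hypothetical function under development: price a package by weight.
    def shipping_charge(weight_kg):
        if weight_kg <= 0:
            raise ValueError("weight must be positive")
        return 5.00 + 1.25 * weight_kg

    class ShippingChargeTest(unittest.TestCase):
        # Developer-facing: fast feedback that the code does what the
        # developer expects it to do, and nothing more.
        def test_five_kilogram_package(self):
            self.assertAlmostEqual(shipping_charge(5), 11.25)

    if __name__ == "__main__":
        unittest.main()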

Sometimes, what the developer expects is different from what the customer expects. That is a different kind of defect, and use-case-driven testing is often better at uncovering it than developer-facing tests are.
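
Continuing the hypothetical shipping example, a use-case-driven check might start from the customer's rate card ("one 5.00 base fee per order, plus 1.25 per kilogram") rather than from the developer's per-package function - and catch the order where the two disagree:

    # Hypothetical order-level pricing, written the way the customer
    # describes it: one 5.00 base fee per order, plus 1.25 per kilogram.
    def order_total(weights_kg):
        return 5.00 + 1.25 * sum(weights_kg)

    def test_customer_ships_two_packages_in_one_order():
        # Summing per-package charges would bill the base fee twice (18.75);
        # the customer's rate card says this order should cost 13.75.
        assert order_total([5, 2]) == 13.75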

Negative testing ("what happens if I make a typo?"), quick tests, and way-out-there yet legitimate-value testing ... those things are often best done through exploratory and rapid methods.
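
Here are a couple of those probes, written down after the fact and again assuming the hypothetical shipping_charge sketch above; in practice an exploratory tester would try these interactively first:

    def test_zero_weight_is_rejected():
        # Negative test: the "typo" case of an impossible weight.
        try:
            shipping_charge(0)
            assert False, "expected a ValueError for zero weight"
        except ValueError:
            pass

    def test_freight_sized_weight_stays_sane():
        # A way-out-there yet legitimate value: a ten-tonne shipment.
        assert shipping_charge(10_000) > 0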

All three of these are ways to learn about the product under test. All three have strengths and weaknesses. How much of each I use will vary based on the product, the customer, the project, the team, and so on. I reserve the right to use more or less of each type of testing (or other methods, like security or performance testing) based on what makes sense in the moment.

It's not a question of yes or no, "the right" way to view testing versus "the wrong" way to do testing. As a testing craftsperson (artist?), I have a palette to choose from, and I mix primary colors to make more interesting ones.

This can lead me to some interesting conclusions - for example, that differentiating between white-box and black-box testing isn't always a great idea.

Just like the four points of view in our discussion after lunch, diversity in test strategy can be helpful. When presented with a different point of view about testing, I am often tempted to shout it down. Next time, I'll try harder to listen.

1 comment:

Anonymous said...

I think you are right on with the idea that many different techniques should be intelligently applied to testing. James Bach calls it "diverse half-measures" on page 8 of that document. Michael Bolton listed a number of interesting half-measures on the agile-testing mailing list. Now I need to learn more of those methods, and make a more detailed study of which methods are best applied to which situations.