
Tuesday, November 28, 2006

Testing Computer Software, 3rd Ed

Dr. Cem Kaner has started work on the 3rd edition of his popular software testing book by posting a short article here.

If you'd like a short introduction to some deeper issues in software testing, you could read my blog for six months, or, well, check out his post. Seriously, it's good.

A couple of my favorite quotes:

Testers should not try to design all tests for reuse as regression tests. After they’ve been run a few times, a regression suite’s tests have one thing in common: the program has passed them all. In terms of information value, they might have offered new data and insights long ago, but now they’re just a bunch of tired old tests in a convenient-to-reuse heap. Sometimes (think of build verification testing), it’s useful to have a cheap heap of reusable tests. But we need other tests that help us understand the design, assess the implications of a weakness, or explore an issue by machine that would be much harder to explore by hand. These often provide their value the first time they are run—reusability is irrelevant and should not influence the design or decision to develop these tests.

Forcing people to make tests regression-runnable is, in my experience, often a thinly-veiled excuse for the cult of "document everything." At the same time, if you can decrease the cost of documentation, you can run more tests - and more tests means better software - and the best CYA is to not have the bug ship to the customers. For years, I have kept hearing things like "If you don't document it, it didn't happen"; my typical reply is "If you do document it, and stick it in a drawer, you just wasted your time."

Dr. Kaner also wrote:

The focus of system testing should shift to reflect the strengths of programmers’ tests.
Many testing books (including TCS 2) treat domain testing (boundary / equivalence analysis) as the primary system testing technique. To the extent that it teaches us to do risk-optimized stratified sampling whenever we deal with a large space of tests, domain testing offers powerful guidance. But the specific technique—checking single variables and combinations at their edge values—is often handled well in unit and low-level integration tests. These are much more efficient than system tests. If the programmers are actually testing this way, then system testers should focus on other risks and other techniques. When other people are doing an honest and serious job of testing in their way, a system test group so jealous of its independence that it refuses to consider what has been done by others is bound to waste time repeating simple tests and thereby miss opportunities to try more complex tests focused on harder-to-assess risks.


We almost got into this a few weeks ago in Indiana, but I didn't take the bait. I probably should have; we could have learned something from each other. I would put it slightly differently:

If your developers are writing automated tests, and you find that a test technique (such as bounds testing) isn't finding any bugs, it's probably because the devs are already covering that ground. So you should shift your focus away from a technique that isn't yielding results and toward things that provide a better return. Maybe not entirely, but a shift is called for.
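
To make that concrete, here's roughly the kind of bounds check a developer's unit tests might already cover. This is just a sketch; the valid_quantity function and its 1-to-100 range are made up for illustration.

    # A hypothetical boundary-value unit test -- the sort of domain
    # testing developers may already be doing at the unit level.
    import unittest

    def valid_quantity(n):
        """Accept order quantities from 1 to 100, inclusive."""
        return 1 <= n <= 100

    class BoundaryTests(unittest.TestCase):
        def test_edges(self):
            self.assertFalse(valid_quantity(0))    # just below the lower bound
            self.assertTrue(valid_quantity(1))     # lower bound
            self.assertTrue(valid_quantity(100))   # upper bound
            self.assertFalse(valid_quantity(101))  # just above the upper bound

    if __name__ == "__main__":
        unittest.main()

If checks like these already run with every build, a system tester probing the same edges by hand is mostly re-proving what the developers have already proven.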

Come to think of it, if you're using any test technique and not getting results, it's probably because the software actually works along that dimension, so try something else.

Fifteen years ago, as a cadet in the Civil Air Patrol, I sat through a class where they explained this principle. In a missing-aircraft search, when you have a radar hit, the probability of discovery decreases the further you get from the radar hit. The Mission Coordinator can calculate the probability of discovery (POD), and when you've searched the close areas enough that the POD is up, you start to believe that the plane isn't close, and so you move the search parties further out.
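
If you're curious about the arithmetic, the usual search-and-rescue model (assuming each pass over an area is roughly independent) multiplies the miss probabilities together; the per-pass numbers below are made up:

    # Cumulative probability of discovery (POD) over repeated passes,
    # assuming independent passes: POD = 1 - (1-p1)(1-p2)...(1-pn)
    def cumulative_pod(per_pass_pods):
        miss = 1.0
        for p in per_pass_pods:
            miss *= 1.0 - p  # chance that every pass so far has missed
        return 1.0 - miss

    # Three sweeps of the close-in area at 50% POD each:
    print(cumulative_pod([0.5, 0.5, 0.5]))  # 0.875 -- time to widen the search

Once the close-in POD is that high and the plane still hasn't turned up, the odds favor moving the search parties farther out.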

The application to testing is an exercise for the reader. :-)

In the meantime, check out Cem's post ... more later.
