Schedule and Events

March 26-29, 2012 - Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference for the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At liberty; available. Contact me by email:

Monday, August 11, 2008


There's been an interesting little discussion on the Test-Driven-Development list about redundant tests that goes something like this:

A) I have a unit test called "UpdatesDatabase" in my database-connector object that tests to make sure I can update the database.

B) I have the same test in my "Model"; all the model does is call the connector object, but I have a test for it.

C) I have the same test in my "Controller"

D) I have the same test in my GUI/View

E) My customer does the same thing as an acceptance test

And it's not just one test; it's dozens of tests, each repeated once in every one of the five layers. Isn't that redundant?
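To make the A/B layers concrete, here is a minimal sketch in Python. The class and test names are my own inventions for illustration, not the original poster's code; the point is that the second test exercises exactly the same database-update behavior as the first, just one layer up.

```python
import unittest

# Hypothetical classes standing in for the layers described above.
class DatabaseConnector:
    """Layer A: talks to the 'database' (an in-memory dict here)."""
    def __init__(self):
        self.rows = {}

    def update(self, key, value):
        self.rows[key] = value
        return True

class Model:
    """Layer B: does nothing but delegate to the connector."""
    def __init__(self, connector):
        self.connector = connector

    def update(self, key, value):
        return self.connector.update(key, value)

class UpdatesDatabaseTests(unittest.TestCase):
    def test_connector_updates_database(self):  # layer A's test
        connector = DatabaseConnector()
        self.assertTrue(connector.update("id", 42))

    def test_model_updates_database(self):  # layer B's test: the same
        model = Model(DatabaseConnector())  # behavior, re-tested through
        self.assertTrue(model.update("id", 42))  # a pure pass-through
```

Repeat that pattern for the controller, the view, and the customer's acceptance test, and you have five tests for one behavior.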

My short answer is yes and no: yes, it's redundant, and, at the same time, that is not necessarily bad.

In any large, working system, at any one time, at least one system is failing, and another system is compensating(*).

If this was not true, we would not need tests, right?

So, first off, if your automated tests get to the point where they could have been produced by a code generator, you aren't thinking, and you risk spending a lot of time on things that might not have much value. If you've got more than two copies of essentially the same test, you may be able to eliminate some of those tests by making a pointed decision about risk.
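One way to make that pointed decision is to stop re-testing the database at the higher layer and instead verify only the delegation. A minimal sketch, again with invented names, using a stand-in connector from Python's `unittest.mock`:

```python
import unittest
from unittest import mock

# Hypothetical Model class, as in the layering discussion above:
# it does nothing but pass the call down to its connector.
class Model:
    def __init__(self, connector):
        self.connector = connector

    def update(self, key, value):
        return self.connector.update(key, value)

class ModelDelegationTest(unittest.TestCase):
    def test_model_delegates_update(self):
        # Stand in for the real connector; the database update itself
        # is already covered by the connector's own test.
        connector = mock.Mock()
        connector.update.return_value = True

        model = Model(connector)
        self.assertTrue(model.update("id", 42))
        connector.update.assert_called_once_with("id", 42)
```

This keeps one real test per behavior at the bottom, and cheap delegation checks above it, instead of five full round-trips to the database.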

At the same time, if you get feedback like "It just HAS TO WORK" from management, well, recognize that systems fail, and the way to prevent failure is through redundancy and failover. One way to do that is through "redundant" tests at multiple levels; another is, yes, an independent test group.


UPDATE: Yes, it's a complex architecture, probably win32, not web, and it could certainly be a heckofalot tighter. I suggest we keep that as a separate discussion.

(*) John Gall discusses this in "Systemantics"; if you want the CliffsNotes version, you can download an MP3 of Peter Coffee discussing it at Agile 2006.


Luke Closs said...

It's not necessarily bad, but is it an intentional test strategy? IOW, if you were designing tests for a product, would you deliberately design five separate layers of tests? Aren't extensively layered tests wasteful?

If completely black-box testing (say, with Selenium) were super fast, would we be content with that as sufficient? Do we have layered tests now primarily because of the run-time cost of those tests?

Matthew said...

luke -

I'm relatively certain the original dev was writing something in win32/GUI and did not have a test framework like Selenium.

In my book, he could probably skip some of those tests. With Selenium, he might be able to get down to just two: "unit", or dev-facing, and Selenium, or customer-facing.

It might be accurate to say: If you need more than two layers, you might want to look long and hard at your architecture stack.