Charlie Audritsh asked:
"I take you to mean what I'd refer to as a regression test. A test of mostly the old functionality that maybe did not change much.
So yeah. I have to admit there's a low likelihood of finding bugs with this. What nags at me though about this idea is that I still feel like regression tests are pretty important. We want them *not* to find bugs after all. We want to know we did not break anything inadvertently, indirectly, by what we did change."
I believe our emphasis on regression testing is mostly historical accident, and here's why:
Big Projects (and LANs) became popular about ten years before version control became popular. So, at the beginning of my career, when dinosaurs still roamed the earth, while I was working on a bug fix, another developer might be working on new functionality. If I saved my fix before she saved her changes, then she would "step" on my changes and they would be lost.
Thus the software would "regress" - fall back to an earlier state where a bug was re-injected into the code. The longer you delayed integration, the more likely that was to happen.
Today we have version control with automatic merge, automated unit tests, and Continuous Integration Servers. So this huge tar pit of regression just isn't as bad as it used to be. In fact, I distinctly remember reading a paper that showed that about five percent of the bugs introduced in the wild today are "true" regression bugs - bugs re-introduced by a mistake.
Of course, there's the second type of regression, which is to make sure that everything that worked yesterday works today. I'm all for using every tool at our disposal to ensure that, but I find that automated customer acceptance tests (FitNesse with fixtures) are very expensive in terms of set-up time, yet don't offer much value. Sure, the customer can walk by at any time, press a button, and see a green light. Cool. But in terms of finding and fixing bugs?
If there is never enough time to do all the testing we would like on projects, then I believe we are obligated to do the testing that has the most value for the effort involved.
And when it comes to bugs, I believe this is critical, investigative work done by a human. Assuming the devs have good unit tests in place, re-running tests from last week (or creating a framework so you can) probably has a lot less value than critical investigation right now, in the moment.
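To make the "good unit tests" assumption concrete: the cheapest kind of regression test I know is one that pins a specific bug fix in place, so the old bug cannot be re-introduced silently. Here is a minimal sketch; the function and the bug are hypothetical, purely for illustration.

```python
# A hypothetical function with a previously fixed bug: thousands
# separators ("1,234.56") used to raise ValueError.
def parse_price(text):
    """Parse a price string like '$1,234.56' into a float."""
    return float(text.lstrip("$").replace(",", ""))

# The regression test pins the fix. If a later change re-introduces
# the comma-handling bug, this test fails immediately.
def test_parse_price_handles_thousands_separator():
    assert parse_price("$1,234.56") == 1234.56

test_parse_price_handles_thousands_separator()
```

Tests like this cost almost nothing to keep running under a CI server, which is exactly why they change the economics compared to the bad old days.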
... but don't get me wrong. For example, for performance testing, you need to use a tool, type some stuff in, and then evaluate the results. I'm saying that writing code on top of that, to get a single button that evaluates the results for you and shows a green or red bar, might not be the best use of your time.
I am very much in favor of model driven testing, but with every release, the model needs to change to test the new functionality.
"Set it and Forget it" customer acceptance testing?
Not So Much ...
Schedule and Events
March 26-29, 2012, Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference of the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At Liberty; available. Contact me by email: Matt.Heusser@gmail.com