My latest post to the Agile-Testing Discussion Group:
--- In firstname.lastname@example.org, Steven Gordon wrote:
Again, that is not to say the exact same activities you recommend should not be done, just that they should be viewed as primarily proactive (helping determine future completion criteria) instead of primarily reactive (retroactively changing the current completion criteria).
In some of the circles I run in, we refer to the idea that you should do 100% Acceptance Test Driven Development (that is, predict all your tests up front, so that anything you didn't predict is not a defect but a "discrepancy between where we actually are and where we thought we would be") as Big Testing Up Front (BTUF).
Personally, I find that trying to get all the tests right up front is a little like trying to predict all your questions up front in the game "20 questions"; the process of exploring the software often helps uncover issues we would not have found otherwise(1, 2).
Now, as for changing completion criteria, I agree - but the majority of defects I find in exploratory testing are of the type where the developer is told of the issue and says "Yeah, you're right, it shouldn't do that." That is to say, the team's shared mental model of how the software should work was in alignment; the software just didn't match it. I call that a defect, or, occasionally, a "Bug."
I understand the Extreme Programming literature had a negative reaction to big design up front. Something about how not all elements of the design could be predicted, and the design itself evolved along with the product, or something.
Can you see how BTUF looks from the testing side of the fence? (I am speaking of acceptance testing; I've seen BTUF work well, often, for "pure" developer-testing.)
(1) This is not my original idea; see "A Practitioner's Guide to Software Test Design", by Lee Copeland, Pg. 203.
(2) My current team has shipped working software in something like 30 of the past 33 two-week iterations. For blocker bugs, we do not enjoy the luxury of saying "the story is done; if you want to get that fixed, make it a story for the next iteration."
I am not suggesting that acceptance tests are bad. I think they are a great communication tool, and that they can streamline the dev/test process. I'm only suggesting that we set our expectations properly for acceptance tests. I focus on acceptance tests that are valuable rather than comprehensive. Even James Shore, "Mr. Agile Agile Agile", seems to have come around to this idea - see his misuse of FIT #5.