My latest post to the Agile-Testing Discussion Group:
--- In agile-testing@yahoogroups.com, Steven Gordon wrote:
> Again, that is not to say the exact same activities you recommend should not be done, just that they should be viewed as primarily proactive (helping determine future completion criteria) instead of primarily reactive (retroactively changing the current completion criteria).
In some of the circles I run in, we refer to the idea that you should do 100% Acceptance Test Driven Development - that is, predict all the tests up front, so that anything you didn't predict isn't a defect but a "discrepancy between where we actually are and where we thought we would be" - as Big Testing Up Front (BTUF).
Personally, I find that trying to get all the tests right up front is a little like trying to guess all your questions right up front in the game "20 questions"; the process of exploring the software often uncovers issues we would not have found otherwise (1, 2).
Now, as for changing completion criteria, I agree - but the majority of defects I find in exploratory testing are of the type where the developer is told of the issue and says "Yeah, you're right, it shouldn't do that." That is to say, the team's shared mental model of how the software should work was in alignment. I call that a defect, or, occasionally, a "Bug."
I understand the Extreme Programming literature had a negative reaction to big design up front. Something about how not all elements of the design could be predicted, and the design itself evolved along with the product, or something.
Can you see how BTUF looks from the testing side of the fence? (I am speaking of acceptance testing; I've seen BTUF work well, often, for "pure" developer-testing.)
regards,
--heusser
(1) This is not my original idea; see "A Practitioner's Guide to Software Test Design" by Lee Copeland, pg. 203.
(2) My current team has shipped working software in something like 30 of the past 33 two-week iterations. For blocker bugs, we do not enjoy the luxury of saying "the story is done, if you want to get that fixed, make it a story for the next iteration."
UPDATE:
I am not suggesting that acceptance tests are bad. I think they are a great communication tool, and that they can streamline the dev/test process. I'm only suggesting that we set our expectations properly for acceptance tests. I focus on acceptance tests that are valuable rather than comprehensive. Even James Shore, "Mr. Agile Agile Agile", seems to have come around to this idea - see his "misuse of FIT" #5.
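For anyone who hasn't seen FIT: the customer writes a table of examples, and a thin fixture class maps the table's columns onto calls into the code under test. A minimal sketch in Java, using an invented discount story (the class names and the 10% rule are mine, not from the post, and this assumes fit.jar on the classpath):

    import fit.ColumnFixture;

    // Fixture backing a customer-written table. Public fields are input
    // columns; public methods (written "discount()" in the table header)
    // are output columns that FIT checks against the expected values.
    public class DiscountFixture extends ColumnFixture {
        public double orderTotal;     // input column

        public double discount() {    // output column, checked by FIT
            // Toy rule standing in for real production code:
            // 10% off any order over $100.
            return orderTotal > 100.00 ? orderTotal * 0.10 : 0.00;
        }
    }

The table itself lives in a page the customer can edit, something like:

    DiscountFixture
    orderTotal | discount()
    50.00      | 0.00
    150.00     | 15.00

A handful of tables like this earn their keep as communication; trying to enumerate every such table before development starts is where BTUF goes wrong.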
6 comments:
> …the idea that you should do 100% TDD - that is - to predict all tests up front…
I might be misunderstanding you, but surely TDD has nothing to do with "predicting all the tests up front"? Certainly not if you follow Uncle Bob's rules.
Uncle Bob's rules are for Development; I'm speaking of acceptance testing here.
Having worked with Uncle Bob a little bit on his "Clean Code" book, I'm confident he doesn't believe in BTUF.
I think it's easy to come up with too many acceptance tests that end up "requiring" things that should be allowed to change in the name of improved design (via TDD, or other means).
But there are some things the customer needs that are immutable. Is this debatable? Even if they really don't need it, if they can't be talked out of it, it's still their decision as product owners. We can write tests for those things, and we should, up front.
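To make the "requiring things that should be allowed to change" point concrete, here is a sketch in JUnit with an invented Account example (none of these names come from the thread). The first test pins an essential customer rule, worth writing up front; the second pins incidental wording that a design change should be free to alter:

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertFalse;
    import org.junit.Test;

    public class WithdrawalAcceptanceTest {

        // Essential, near-immutable rule: withdrawals cannot exceed the
        // balance. A good candidate for an up-front acceptance test.
        @Test
        public void withdrawalCannotExceedBalance() {
            Account account = new Account(100.00);
            assertFalse(account.withdraw(150.00));
            assertEquals(100.00, account.balance(), 0.001);
        }

        // Incidental detail: the exact rejection wording. A test like this
        // "requires" something the team should be free to change later.
        @Test
        public void rejectionMessageWording() {
            Account account = new Account(100.00);
            account.withdraw(150.00);
            assertEquals("Error 4001: insufficient funds.", account.lastMessage());
        }
    }

    // Toy domain class, invented purely for illustration.
    class Account {
        private double balance;
        private String lastMessage = "";

        Account(double openingBalance) { this.balance = openingBalance; }

        boolean withdraw(double amount) {
            if (amount > balance) {
                lastMessage = "Error 4001: insufficient funds.";
                return false;
            }
            balance -= amount;
            return true;
        }

        double balance() { return balance; }
        String lastMessage() { return lastMessage; }
    }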
I /like/ the idea of pre-defined acceptance tests; they tend to help. My issue is with trying to get 100% of them right up front.
Agreed. Of course, some tests aren't worth the time it takes to wire everything up.
How a test should be implemented (xUnit, fixture, UI automation, manual, etc.) must be part of the test creation process.
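For instance (my example, not the commenter's): when full end-to-end wiring isn't worth it, the same acceptance criterion can drop down to a plain xUnit test against the domain object:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // The criterion "orders of 100+ units get 10% off" checked at the
    // cheapest sensible level: a plain JUnit test against the domain
    // object, with no UI automation or fixture plumbing. PriceList and
    // the bulk rule are invented for illustration.
    public class BulkPricingTest {
        @Test
        public void tenPercentDiscountAtOneHundredUnits() {
            PriceList prices = new PriceList(2.00);   // $2.00 per unit
            assertEquals(180.00, prices.totalFor(100), 0.001);
        }
    }

    class PriceList {
        private final double unitPrice;

        PriceList(double unitPrice) { this.unitPrice = unitPrice; }

        double totalFor(int units) {
            double total = units * unitPrice;
            return units >= 100 ? total * 0.90 : total;
        }
    }

The trade-off is real: you lose end-to-end coverage, but the test is cheap enough to write and keep, which is often the deciding factor.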
But even if you don't automate them, you still can't really know all of them. And, as we know, they can change at any time. Stick to the things that are absolutely needed (wanted). This narrowing can be difficult.