Elisabeth Hendrickson recently posted a request for feedback on teaching agile testing.
I liked the question so much that I wrote up a long reply; so long, in fact, that it goes better as a blog post than a blog reply.
The tone is even more informal than a typical blog entry for me, but I hope you find it interesting. My goal was to get the ideas down, so I go pretty quickly. If you'd like to hear more detail ("What is the minefield problem in test automation?" or "How can I automate a use case?" or the like) - just comment.
So, without further ado ...
I am currently finishing up a couple of courses for software developers about testing. I suppose you could call it 'Agile'; I like the term "lightweight methods" or "feedback-driven" or whatever.
I also struggled with how to cover the material. Several of the students (and some of the management) wanted me to dive right into a specific framework. "Teach blahUnit" came the request.
Whatever, dude. You can't automate what you can't do manually.
So yes, we started with equivalence classes, bounds, and traditional requirements-y techniques - stuff you could get from Dr Kaner's BBST course. I also covered scenario testing and use-case driven "acceptance" testing.
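(A quick sketch, if you want to see the bounds-and-classes idea in code: here's what it might look like in Python's unittest, the xUnit member for Python. The percent_discount function and its 0-to-100 valid range are invented for illustration, not from the course material.)

    import unittest

    def percent_discount(price, percent):
        """Valid percent runs from 0 through 100, inclusive."""
        if not 0 <= percent <= 100:
            raise ValueError("percent out of range")
        return price * (100 - percent) / 100

    class BoundaryTests(unittest.TestCase):
        # One representative per equivalence class, plus each boundary.
        def test_lower_bound(self):
            self.assertEqual(percent_discount(200, 0), 200)   # on the low boundary

        def test_upper_bound(self):
            self.assertEqual(percent_discount(200, 100), 0)   # on the high boundary

        def test_middle_of_valid_class(self):
            self.assertEqual(percent_discount(200, 50), 100)  # interior of the valid class

        def test_just_below_range(self):
            with self.assertRaises(ValueError):
                percent_discount(200, -1)                     # invalid class, low side

        def test_just_above_range(self):
            with self.assertRaises(ValueError):
                percent_discount(200, 101)                    # invalid class, high side

    if __name__ == "__main__":
        unittest.main()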
Then we did an exercise. I split the class into three groups. The first group did entirely document-driven, requirements-based testing: they had to script their test cases before executing anything, and they could execute only those scripted tests.
The second group also did scripted, document-driven testing on the same app, but I gave them the requirements and a demo of the UI, so they could develop their scripted tests with the user interface in mind.
The third group had the requirements and the demo, but did exploratory testing.
After the exercise, I asked every team member to count how many bugs they found - down to root cause - and I averaged this per team. I also asked the teams to rate how much fun they had on a scale from 1 to 10, with 1 being "I'd rather have teeth pulled", 5 being "Well, at least I'm getting paid", and 10 being "I want to do this for a living".
Without exception (and I've done this twice now), the first group hated it and found few bugs, the second group found it merely distasteful and found more bugs, and the third group slightly enjoyed it and found the most bugs.
After that, I explain the minefield problem of test automation, the use-case-driven view, and the ripple effect, along with the value of test automation for building confidence in the face of that ripple. Finally, I cover high-volume test automation.
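(To give a flavor of the high-volume idea, here's a toy sketch: generate thousands of inputs and check each one against a cheap oracle. The encode/decode pair is invented for the example; any pair of inverse operations, or any other invariant, would do.)

    import random
    import string

    # Invented functions under test: encode and decode must round-trip.
    def encode(text):
        return text.encode("utf-8").hex()

    def decode(hex_string):
        return bytes.fromhex(hex_string).decode("utf-8")

    def random_text(rng, max_len=50):
        return "".join(rng.choice(string.printable)
                       for _ in range(rng.randrange(max_len)))

    def run_high_volume(runs=10000, seed=42):
        rng = random.Random(seed)  # seeded, so any failure is reproducible
        for _ in range(runs):
            original = random_text(rng)
            # The oracle: decoding what we encoded must give back the input.
            assert decode(encode(original)) == original, repr(original)

    if __name__ == "__main__":
        run_high_volume()
        print("all generated cases round-tripped")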
We try to figure out which of the three kinds of test automation make sense where, then explore those with the frameworks that fit that team.
Finally, we swing back around to try to form a comprehensive view of exploratory testing, acceptance testing, and test automation.
My take on it is that you can't automate what you can't do manually, and if you automate what you do crappily, you will get bad tests that are cheap to run – but expensive to write.
So I'd make it a two-day class: cover a valid testing worldview that is compatible with agile on the first day, then do all the 'agile' stuff (xUnit, continuous integration, TDD, FitNesse-y tools, and so on) on the second day.
I'm on the fence about interaction-based testing. Like a lot of other things (Agile, Lean, TDD), it's easy to misunderstand it, think you are doing it right, and actually waste a lot of time for little benefit while bloating the code. Specifically, one of the original papers on interaction-based testing had an example that, I believe, sent people off writing the wrong kind of tests. Then again, on certain systems, done right, it can improve quality, readability, and time to market. (Just like Agile, Lean, and TDD.)
For database systems, I teach stubs (stubbing out data), not mocks (faking out behavior).
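(Here's roughly how I draw that distinction in code, with an invented order-repository example. The stub fakes the data and the test checks state; the mock version, included for contrast, verifies which calls were made - that's interaction-based testing, and it couples the test to the implementation.)

    import unittest
    from unittest import mock

    # Code under test: totals the open orders some repository hands back.
    def total_outstanding(repository):
        return sum(order["amount"] for order in repository.open_orders())

    # A stub fakes the data: canned rows standing in for the database.
    class StubOrderRepository:
        def open_orders(self):
            return [{"amount": 10}, {"amount": 25}]

    class StubVersusMockTests(unittest.TestCase):
        def test_with_a_stub(self):
            # State-based: check only the answer, not the conversation.
            self.assertEqual(total_outstanding(StubOrderRepository()), 35)

        def test_with_a_mock(self):
            # Interaction-based: also verify which calls were made.
            repo = mock.Mock()
            repo.open_orders.return_value = [{"amount": 10}, {"amount": 25}]
            self.assertEqual(total_outstanding(repo), 35)
            repo.open_orders.assert_called_once_with()

    if __name__ == "__main__":
        unittest.main()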
But that's just me talkin'. In my old age, I find less and less interest in tools and more in skills. It sounds like your class covers agile skills more, so please, Elisabeth, tell us more.