I've heard this term lately - Risk-Based Testing. The idea is, essentially, to prioritize your tests by risk, and do the riskiest (and most painful if it fails) things first.
If you think about it, that means finding the tasks that have the highest bang for the buck - and doing them first.
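To make that concrete, here's a rough sketch of what "prioritize by risk" might look like. The test ideas, the 1-5 scores, and the likelihood-times-impact formula are my own illustrative assumptions, not a prescription or anyone's official method:

```python
# A minimal, hypothetical sketch of risk-based prioritization.
# The test ideas, the 1-5 scores, and the likelihood-x-impact
# heuristic are illustrative assumptions, not a standard.
from dataclasses import dataclass


@dataclass
class TestIdea:
    name: str
    likelihood: int  # how likely is this area to fail? (1-5, a guess)
    impact: int      # how painful would a failure here be? (1-5, a guess)

    @property
    def risk(self) -> int:
        # One common heuristic: risk = likelihood x impact.
        return self.likelihood * self.impact


ideas = [
    TestIdea("checkout payment flow", likelihood=4, impact=5),
    TestIdea("profile page typo check", likelihood=2, impact=1),
    TestIdea("nightly data import", likelihood=3, impact=4),
]

# "Do the riskiest things first": sort descending by risk score.
for idea in sorted(ideas, key=lambda t: t.risk, reverse=True):
    print(f"{idea.risk:>2}  {idea.name}")
```

In real life you'd haggle over those numbers with stakeholders rather than invent them, but the last step is the whole trick: riskiest first.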
Now isn't that just plain good testing?
Or, to put it a different way - can you think of a form of good testing that does not consider risk?
I brought the question to the twittersphere this morning and got some interesting replies. Ben Simo and Ron Jeffries pointed out that Acceptance Test Driven Development, and some implementations of TDD, often don't address risk.
Is it fair for me to call that "bad testing"?
Well ... maybe. It depends. It's probably time for me to introduce the Bowl of Fruit problem.
Imagine a Bowl of Fruit. It has a lot of things in it. It's got some bananas, some grapes, some oranges. We all like the bowl of fruit.
We go to Fruit conferences. We get up in front of people and talk about the Fruit. We argue a lot.
And, suddenly, I wake up one morning and realize that you are interested in grapes and I prefer bananas.
That is to say - we keep using this word 'test', but we get different value from it.
Some people value testing as a form of risk management - as an investigative activity to enable decision makers. Others are more interested in using tests for a different purpose.
For example, Acceptance-Test-Driven-Development folks might be more interested in exploring and communicating requirements than they are in critical investigation. Developers using TDD might be more interested in enabling refactoring, or in exploring the design or API of the software.
In both those cases, the person is talking about 'testing' but isn't particularly interested in risk management. Oh, they might be interested in risk management, and appreciate it as a side effect, but it's not at the top of the stack. They are interested in the grapes, not the bananas.
One way to tell is by the language used, as inevitably you'll hear something like "... and it's not just testing, you also get (benefit x)."
Nothing's wrong with that, except perhaps using "just" as a pejorative, which minimizes its impact. I, personally, am interested in "just" testing - testing for its own sake - as part of the value proposition of delivering working software, which is the super-goal. (Or making money, having a fulfilled life, and other meta-goals.)
But when we focus on other attributes of the bowl of fruit, we shouldn't be surprised that risk management isn't covered well. So you might say that aspect of software testing - the one I care the most about - is done poorly.
My take-aways:
1) One thing I think the "risk-based testing" movement /has/ done is move the conversation toward making explicit and conscious trade-offs about risk, instead of making them implicitly. Another is to provide tools to people who might not otherwise have them. In that, I think it's a good thing.
2) Instead of arguing about approaches or words, we can instead start by focusing on the goals of testing. If someone has different goals than I do - well - of course they'll come up with a different testing strategy. And that might be just fine.
Note 1: Thanks to my colleague and friend Sean McMillan, who introduced me to the Bowl of Fruit problem with regard to software requirements. The original idea, as far as I can tell, came from Collaborative Game Design Theory.
Note 2: Please don't misread this to mean "Heusser thinks ATDD or TDD are bad testing." When, as a developer, I've used TDD, a large portion of what I used it for was risk management. As a tester or PM, when we used ATDD, a large portion of what we used it for was risk management. But then again, I am actively interested in risk management. Some people have ... less interest.
Friday, June 12, 2009
7 comments:
Great post on explaining how we can both be testing but think different things are riskier.
Risk management is very important in all projects. People need to understand/agree on what the risks are before the bowl of fruit rots away.
thanks, Jay.
Can you think of a project where risk management would not be important?
I can think of a few. In most of the cases, the project just isn't that important to the business and doesn't offer much value.
Example: Take website X and customize it in these specific ways for customer Y. In many cases, those projects are begging to be automated or outsourced.
In other words, sure, there are projects where risk management isn't important ... and those are projects that, in general, I have little interest in working on as a tester! :-)
This reminds me of Object Mentor's "X Tests are not X Tests".
Testing is really a large field, but people use the same term to talk about different things...
I like it when someone says something to break up the herding mentality (or is it jumping on the bandwagon?) ;-)
The 2nd take-away more or less summed it up: Different problems are tackled in different ways -> test strategies & approaches. Some involve risk-based testing and some don't.
I presume the discussion around risk-based testing has been driven by the people who are interested in it or are using it.
I've performed accreditation testing - where a risk-based approach is not used - but I can still be interested in risk-based testing at the same time -> the next problem/project around the corner may have a use for it.
All test approaches have their own benefits and drawbacks - so it really boils down to assessing what's right for the job in hand.
But, whether it's an "explicit" risk-based approach or not, it always makes sense to have some of the biggest-bang-for-the-buck tests up front - as you said.
Ah yes, common-sense-based testing! Although maybe it's not listed in Wikipedia yet.
Hello Matthew,
Great metaphor you used. Risk Based Testing can be of use when used wisely. What I often see is that organizations insist on identifying risks and then testing to give information about the chance of those risks occurring.
There is always a chance that risks are defined at different levels. For example: I'm speaking about green grapes while you expected the black ones, but the level at which it was explained is a general one: grapes.
If I focus on the green ones and perform tests to identify their shape, I would suggest I'm testing the wrong fruit, based on different assumptions. The outcome might be true as tested, only it is not what was desired and therefore not of much use.
If the wrong risks are tested, or the risks are tested wrongly, what is the value of risk-based testing? You might consider that non-risk-based testing is even better, as your focus is not directed by those wrongly defined risks.
I know a possible solution would be Risk and Requirement Based testing: you narrow the risk definitions by validating them against the requirements.
Unfortunately, requirements are not always that clear either. You have the same risk of picking up the fruit differently.
I think the danger of Risk Based Testing is approaching the organization and system based on best practices. I would rather suggest checking what the context is before you choose an approach, instead of setting up the testing organization around the risk-based testing approach from the start.
Matthew,
Great post. I like not only what you said but how well you expressed it. Your blog url is now in my "favorites" folder.
- Justin Hunter
(Hexawise Founder)
Nice take on the Risk, 'Risk-based' and 'Risk-based testing'.
The Bowl of Fruit really helped me understand the problems and perspectives of Risk Based Testing.
Thanks,
Ajay Balamurugadas