What's the right mix of exploratory testing, "planned" manual testing, and test automation?
My short answer is "it depends." Now, you don't need to point out that "it depends" is unhelpful - I realize that - and I am going to try to go beyond it.
The reason I say "it depends" is that it depends on the type of problem you have. So one thing I can do to go further is to list a bunch of possible problems, along with kinds of testing I have seen that fit.
1) The build verification test, or BVT. These are tests you run immediately after every build to make sure the software isn't completely hosed. In a typical Windows app, this is: Install / File New / Add some stuff / File Save / Quit / Relaunch / File Open / the screen should look like THIS. You add more tests over time, but the point is: if these tests fail, there was probably one big, massive regression error, and any additional testing information is suspect. The test team will work on other projects until the devs can get the BVT to pass.
BVT tests are incredibly boring, and often worth automating.
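To make that concrete, here is a minimal sketch of what an automated BVT could look like, assuming a hypothetical Document class that stands in for the real application (the names are mine, not from any real product):

```python
# Minimal BVT-style smoke test sketch. Document is a hypothetical stand-in
# for the application's document model.
import json
import os
import tempfile


class Document:
    """Stand-in for the application under test."""
    def __init__(self, items=None):
        self.items = list(items or [])

    def add(self, item):
        self.items.append(item)

    def save(self, path):
        with open(path, "w") as fh:
            json.dump(self.items, fh)

    @classmethod
    def open(cls, path):
        with open(path) as fh:
            return cls(json.load(fh))


def build_verification_test():
    # File New / Add some stuff / File Save / Relaunch / File Open / compare
    path = os.path.join(tempfile.mkdtemp(), "smoke.doc")
    doc = Document()
    doc.add("hello")
    doc.add("world")
    doc.save(path)
    reopened = Document.open(path)
    assert reopened.items == ["hello", "world"], "BVT failed: round-trip lost data"


if __name__ == "__main__":
    build_verification_test()
    print("BVT passed")
```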
2) The Fahrenheit-to-Celsius conversion test. Sure, you could have a human being test a hundred examples manually, but if you have a spreadsheet you can use as a source of truth, why not write a program that loops through all values from -10,000 to 10,000, by increments of 0.001, calls the function, does the math in the spreadsheet, and compares the two? Note that this does not provide information about boundaries, which may be best explored by hand - but it can help you build some confidence about the happy middle.
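A minimal sketch of that sweep, assuming a hypothetical fahrenheit_to_celsius() under test, with the textbook formula standing in for the spreadsheet oracle:

```python
# Sketch of an exhaustive sweep. The reference formula stands in for the
# spreadsheet "source of truth"; fahrenheit_to_celsius() is hypothetical.

def fahrenheit_to_celsius(f):          # the function under test (stand-in)
    return (f - 32) * 5.0 / 9.0


def reference_conversion(f):           # what the spreadsheet would compute
    return (f - 32) * 5.0 / 9.0


def sweep(start=-10_000, stop=10_000, step=0.001, tolerance=1e-9):
    failures = []
    steps = int(round((stop - start) / step))
    for i in range(steps + 1):
        f = start + i * step           # avoids accumulating float error
        if abs(fahrenheit_to_celsius(f) - reference_conversion(f)) > tolerance:
            failures.append(f)
    return failures


if __name__ == "__main__":
    # The full 0.001 sweep is roughly 20 million checks; a coarser step
    # keeps this demo quick.
    print(len(sweep(step=1.0)), "mismatches found")
```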
3) Model-Based Testing. Similar to #2, if the application is simple enough you can create a model of how the software should behave, then write tests that take "random walks" through the software. (Ben Simo has several good blog posts on this topic.)
Despite the appeal of model-based testing, it requires someone with true, deep testing expertise, true development expertise, modeling skill, and, usually, a GUI-poking tool. So, despite all its promise, the actual adoption of model-based testing has been ... less than stellar.
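To make the idea concrete anyway, here is a minimal sketch of a random walk over a toy model - a pretend text editor whose names are entirely hypothetical. A real harness would drive the actual application and check it against the model at every step:

```python
# Minimal model-based "random walk" sketch. EditorModel predicts the
# expected state; the commented-out lines show where a real driver would
# exercise the application and compare it to the model.
import random


class EditorModel:
    """Tracks what the document *should* look like after each action."""
    def __init__(self):
        self.text = ""
        self.saved = True

    def type_char(self, ch):
        self.text += ch
        self.saved = False

    def save(self):
        self.saved = True

    def new_file(self):
        self.text = ""
        self.saved = True


ACTIONS = ["type", "save", "new"]


def random_walk(steps=50, seed=None):
    rng = random.Random(seed)
    model = EditorModel()
    # app = launch_real_editor()        # hypothetical driver for the real app
    for _ in range(steps):
        action = rng.choice(ACTIONS)
        if action == "type":
            model.type_char(rng.choice("abc"))
        elif action == "save":
            model.save()
        else:
            model.new_file()
        # assert app.state_matches(model)   # oracle check on every step
    return model


if __name__ == "__main__":
    final = random_walk(seed=42)
    print("walk finished; model text length:", len(final.text))
```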
4) Unit Testing. This is as simple as "low-level code to test low-level code" - often much further down in the bowels than a manual tester would want to go. It provides the kind of "safety net" test automation suite that makes a developer comfortable refactoring code. And devs need to do this; otherwise, maintenance changes are just sort of hacked onto the end, and, five years later, you've got a big ball of mud.
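A minimal example, using Python's built-in unittest and reusing the hypothetical conversion function from earlier:

```python
# Minimal unit-test sketch: low-level code checking low-level code.
import unittest


def fahrenheit_to_celsius(f):
    return (f - 32) * 5.0 / 9.0


class TestConversion(unittest.TestCase):
    def test_freezing_point(self):
        self.assertAlmostEqual(fahrenheit_to_celsius(32), 0.0)

    def test_boiling_point(self):
        self.assertAlmostEqual(fahrenheit_to_celsius(212), 100.0)


if __name__ == "__main__":
    unittest.main()
```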
5) Isolation Problems. If you have a system that requires a stub or simulator to test (an embedded system, say), you may want or need to write automated tests for it.
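Here is a minimal sketch of that idea, assuming a hypothetical temperature-sensor interface the embedded code would normally talk to; the fake returns canned readings so the logic can be exercised without hardware:

```python
# Sketch of testing through a stub. The sensor interface and names are
# hypothetical, not from any particular system.
class FakeTemperatureSensor:
    """Stub that returns canned readings instead of touching hardware."""
    def __init__(self, readings):
        self._readings = list(readings)

    def read_celsius(self):
        return self._readings.pop(0)


def is_overheating(sensor, samples=3, limit=85.0):
    """Code under test: true if any of the next readings exceeds the limit."""
    return any(sensor.read_celsius() > limit for _ in range(samples))


if __name__ == "__main__":
    fake = FakeTemperatureSensor([70.0, 90.0, 72.0])
    print(is_overheating(fake))   # True - the second reading exceeds 85.0
```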
6) Macros. Any time you want to do something multiple times in a row to save yourself typing, you may want to do automation. Let us say, for example, you have a maintenance fix to a simple data extract. The new change will stop pulling X kind of employees from the database, and start pulling Y.
You could run the programs before and after, and manually compare the data.
OR you could use diff.
And, for that matter, you could write a SELECT COUNT(*) query or two from the database and see that:
The total number of new rows = (old rows + Y members - X members);
This is test automation. Notice that you don't have to have a CS degree to do #6 - and most of the others can be done by a tester pairing up with a developer.
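As a sketch of what that count check might look like in code - assuming a hypothetical SQLite database with an employees table and an emp_type column; the names are illustrative, and the row counts would come from the old and new extract files:

```python
# Sketch of the before/after row-count check for the extract change.
import sqlite3


def count_where(conn, clause):
    # The clause is written by the tester, not taken from user input, so
    # simple string assembly is fine for this sketch.
    return conn.execute(f"SELECT COUNT(*) FROM employees WHERE {clause}").fetchone()[0]


def check_extract(db_path, old_row_count, new_row_count):
    conn = sqlite3.connect(db_path)
    x_members = count_where(conn, "emp_type = 'X'")   # no longer pulled
    y_members = count_where(conn, "emp_type = 'Y'")   # newly pulled
    conn.close()
    expected = old_row_count + y_members - x_members
    assert new_row_count == expected, (
        f"expected {expected} rows, but the new extract has {new_row_count}"
    )
```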
So when should I not do automated testing?
1) To find boundaries. As I hinted at above, automated system-level tests are usually pretty bad at finding boundary defects. So if you've got a lot of boundary errors, by all means, test manually.
2) As Exploratory Testing. Sometimes when we test, our goal is investigation; we play "20 questions" with the software. We learn about the software as we go. This kind of "exploratory testing" can be extremely effective - much more effective than minefield-style automated testing.
3) When we don't need to repeat the tests. Some things only need to be tested once - tab order, for example - or may be best tested by a human, because they require some kind of complex analysis.
Sapient Testing
Human beings can be sapient; they can think. Humans can look at a screen and say "Something's not right ..." where a bitmap compare would simply fail or pass. Humans can look at a screen and intuit the most likely parts to fail. So the key is to use the technology to help automate the slow, boring, dumb tasks.
For example, at GTAC this year, one group said that it had 1,000 tests that ran overnight, recording a screen capture at the end of each. The next morning, a test engineer sits down with his coffee and compares those screen captures to the previous night's - using his brain to figure out which differences matter and which don't. After he finishes those build verification tests (in about a half hour), he can then get down to pushing the software to its knees with more interactive, exploratory testing.
Now, that morning coffee ... Is that automated testing, or is it manual? My short answer is: I don't really care.
These are just ideas, and they focus almost exclusively on the relationship between Dev and Test. I haven't talked about the relationship between Analyst and Test, for example. Don't take this as Gospel. More to come.
Comments:
Good post, Matt. As you know, there always seems to be some debate on "exploratory vs. scripted" - as if they were binary choices and you had to do one or the other.
The reality (as you've described) is that you need to balance the two based on the context of your situation.
Alan (blogs.msdn.com/alanpa)
>> So when should I not do automated testing?
What do you mean by automated testing? Did you mean "computer-assisted testing" or "test execution of some automated tests"? In my opinion, testing as a whole cannot be automated - only a few selective portions of it: automated test case/data generation, automated test result comparison, or automated walkthrough of the application's GUI. I am sure you agree that none of these can be qualified as "testing."
Shrini