Monday, February 12, 2007

Testing Lessons from Mathematics - Intro

You may not know it, but there are a whole bunch of things that are lumped into Math, somewhat haphazardly. Statistics, for example, could fit well in the business school or the psychology department. Game Theory is another one that mystified me; in my mind, it fit better in the economics department. John Nash, a pioneer of game theory and the subject of "A Beautiful Mind", actually earned the Nobel Prize in Economics.

None of that makes much sense unless you describe Math in this way: "Mathematics is what mathematicians do." Then, pretty much all of what I saw as a Math student at Salisbury University seems to mesh together in a vaguely holistic way. From Dr. Austin's Number Theory to Dr. Shannon's Scientific Computing, they all sort of ... fit.

Like Math, software testing is hard to place on the academic spectrum. For example, you can't go out and earn a Bachelor's Degree in the subject. If you could, what department would you earn it in? Computer Science? Business? Prob and Stats? As I see it, there is little agreement on where to place software testing, or what conceptual framework to put it in. Here are a few of the worldviews I run into:

Software Testing is a branch of Computer Science. Perhaps typified by Boris Beizer, this view of testing involves taking the requirements, developing control-flow graphs, and then creating test cases for each flow. I have heard this time and time again in "Intro to Testing" courses, but I find it nearly impossible to actually test this way. Beyond a simple dialog with a half-dozen controls, the finite state machine is just too big. Then again, this might work just fine for testing embedded systems.
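To make "just too big" concrete, here is a minimal sketch (my own toy model, not Beizer's actual notation): treat each control in a dialog as an on/off state and count the states an exhaustive state-machine model would have to cover.

```python
from itertools import product

def state_count(num_controls: int) -> int:
    """Each independent on/off control doubles the state space."""
    return 2 ** num_controls

# A "simple dialog with a half-dozen controls" is still drawable...
print(state_count(6))    # 64 states

# ...but a modest screen with thirty interacting controls is not.
print(state_count(30))   # 1,073,741,824 states

# Enumerating the small case really does yield 64 distinct states to cover:
small_dialog = list(product([False, True], repeat=6))
assert len(small_dialog) == 64
```

Six controls are manageable; thirty put the model past a billion states, and real controls rarely reduce to simple on/off toggles.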

Software Testing is a branch of Cognitive Psychology. According to Wikipedia, Cognitive Psychology is "The school of psychology that examines internal mental processes such as problem solving, memory, and language." In this view, software testing is actually a very complex problem: you generally have an effectively infinite number of combinations to test in a limited period of time. There are a huge number of challenges in testing, but finding the important things to test and assessing the status of the software under test are two of the larger ones. Believe it or not, Dr. Cem Kaner, author of several books on software testing and a leader in the context-driven school, actually has his PhD in Experimental Psychology.

There's more to it than that, of course. The Psychology department of your local college also studies design of experiments, culture, how tools are used, how people learn ... all of which can apply to software testing.
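To put a rough number on the "infinite number of combinations in a limited period of time" problem from a couple of paragraphs back, here is a back-of-the-envelope sketch; every count in it is made up purely for illustration.

```python
# Hypothetical input counts for a small signup form.
field_options = {
    "country":   250,        # dropdown entries
    "username":  1_000_000,  # effectively unbounded strings, capped for the sketch
    "birthdate": 36_500,     # roughly a hundred years of dates
    "plan":      4,
}

combinations = 1
for count in field_options.values():
    combinations *= count

seconds_per_test = 10                  # assume a fast automated check
seconds_per_year = 60 * 60 * 24 * 365

years = combinations * seconds_per_test / seconds_per_year
print(f"{combinations:,} combinations, about {years:,.0f} years to run them all")
```

Four fields on a toy form already work out to tens of trillions of combinations and millions of years of execution; picking the few dozen cases actually worth running is the problem-solving part.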

Software Testing is applied philosophy. Testing asks questions like "Do you believe the software works? To what extent do you believe it? What does 'works' mean anyway?" Epistemology is the branch of philosophy that studies belief and knowledge. James Bach calls his company, Satisfice, "Epistemology for the rest of us."

Software Testing is a branch of scientific management. While I have never heard anyone say this out loud, there is a significant group of testers commonly referred to as the "factory school." Under this view, untested software is a work product that needs to go through a quality control step, where it is inspected and re-worked until found fit for use. The factory analogy suggests that testing techniques, applied correctly, can be predictable as long as they are applied in a consistent and standardized way.

Bret Pettichord, along with others, has divided these into at least Four Schools of Software Testing. The ideas above align roughly with these schools. The Cognitive and Philosophy groups probably combine to form the context-driven school. The computer science view aligns with the analytical school. Scientific Management is a drop-in for the factory school. I don't have a 'view' that fits with the quality school that really adds anything, so I've left it out.

This raises the question: Ok, Matt, what is software testing a branch of?

Actually, I thought of several short, pseudo-insightful answers to this question. Here are three:

Software Testing is a branch of fun-ness. This is the weirdo school of software testing. It derives from toddlers, who often enjoy breaking things more than they enjoy building them.

Software Testing is actually a branch of physics. Believe it or not, Wikipedia defines physics as "the science concerned with the discovery and understanding of the fundamental laws which govern matter, energy, space, and time. Physics deals with the elementary constituents of the universe and their interactions, as well as the analysis of systems best understood in terms of these fundamental principles." That actually seems to work. I specifically remember my high school physics teacher, Mr. Johnson, saying something like "The great thing about physics is not that you learn how to calculate the speed of a bullet in flight; it's that you teach your brain to solve complex problems." When I think about how we solved those problems (creating a model, developing a strategy from that model, then applying the strategy and looking for feedback), it actually makes a little bit of sense.

Software Testing is applied systems thinking. This one actually seems to work, except that systems thinking isn't really a discipline of its own; it is a way to solve problems across disciplines. This leaves us with this definition...


Software Testing is what software testers do

At first blush, I find this definition amazingly unsatisfying and vapid. I hate it, and would be careful to use it only at the end of a conversation, after we have covered a lot of painful ground and built some shared understanding. I would not use it to suggest some fake agreement on what software testing is; instead, I would use it to point out our lack of agreement. Here's my tiny little insight to add to it:

Because the world of testing is so immature, we can pull our ideas from anywhere. The value of an idea, then, is not judged by where it came from, but by what it adds to our knowledge and ability.

About ten years ago I completed a traditional programme of instruction in mathematics, with a concentration in computer science. In other words, I am a classically trained mathematician. (I can prove this, because I spelled programme with an "e" at the end. It's more impressive that way.)

Does the world of math provide insight into testing? Maybe. At least, I certainly hope so; that is what this series will try to explore.

2 comments:

Anonymous said...

In regards to "Software Testing is a branch of fun-ness", check out "Play as Exploratory Learning" by Mary Reilly. You won't be disappointed.

http://www.amazon.com/gp/product/0803908458/qid=1139056765/sr=1-1/ref=sr_1_1/104-4119075-7860749?s=books&v=glance&n=283155

Anonymous said...

What testers do depends on what they have been given to test. When someone is given a small piece of code to test, the objective is normally to find any defects that remain after other verification processes have been applied. The most effective of these "other processes" are requirements, design and code inspections. The reason for spending a significant effort at this level of testing is that it is possible to understand what the small piece of code is supposed to do and to do a reasonable job of testing it. And if you build a system, any system, out of perfect parts, then there is a chance of building a perfect system. However, if you build the system out of defective parts you will always end up with a defective system.

Once a system has been assembled, unless it is quite small, there is no economically viable level of testing that will have a significant impact on its reliability. Once it is buggy, it will always be buggy. So system-level testing has to serve some function other than discovering defects to make the software better. The most valuable information that can be obtained from this level of testing is whether or not the system is buggy. If it is, it has to go back to the developers. If it isn't, it can be shipped. Once testing has provided that information, any additional system-level testing is wasted effort.

What is testing a branch of? It's a branch of gambling.

The lure of testing is that it seems intuitive that it should work. After all, if defects are found and repaired, then isn't the software better for it? The answer is yes, it is. The ocean is less deep if I take a bucket and remove some water, just not by a lot. Most organizations don't have any way of knowing how much better the software gets for each defect removed.

Given that defects are present, the tester doesn't know where they are or how severe they are. Defects are distributed at random. The tester designs some test cases that he thinks will give good coverage, or serve whatever goal he has in mind, but even if these test cases reveal some problems, it is likely that there are more. The reliability of a system does not depend on how many defects were found and removed, but on how many are left.
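A toy simulation makes the point (the defect counts and detection rate here are invented purely for illustration): two builds get the same testing effort, and the one where testers find more bugs still ships with more bugs left.

```python
import random

def test_then_ship(seeded_defects, tests, hit_rate, rng):
    """Run `tests` test cases; each has a small chance of exposing each
    remaining defect. Return (defects found, defects still in the product)."""
    remaining, found = seeded_defects, 0
    for _ in range(tests):
        exposed = sum(1 for _ in range(remaining) if rng.random() < hit_rate)
        found += exposed
        remaining -= exposed
    return found, remaining

rng = random.Random(42)
for seeded in (50, 500):   # a cleanly built system vs. a defect-ridden one
    found, left = test_then_ship(seeded, tests=100, hit_rate=0.01, rng=rng)
    print(f"seeded={seeded:3d}  found={found:3d}  shipped with {left} defects left")
```

The build where testers found far more defects does not end up as the more reliable product; it simply started out buggier.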

A tester in this situation is like a person playing roulette. Most people lose, but once in a while someone wins. Every gambler who has played roulette dreams of some sort of system that will make him a winner. This is what testers hope for: some system or technique that will let them find most or all of the defects. But it's a dream. And the people who are exposed to the risks that testers take are not the testers but the users.

Perhaps we should start a 12 step program called Testers Anonymous.