In yesterday's post, I introduced the idea of a history of mathematics and how it might be applied to testing. I followed that up with my own list of important publications that influenced my thinking.
But let's re-examine that for a moment. I did not create a history of the ideas in software testing - it was more a list of publications and books.
And lots of publications and books don't agree; for example, right now, today, on the Agile-Testing Yahoo Group, there is an argument about the meaning of the word "test" - primarily led by a PhD.
I'll say it again - we don't have consensus on the meaning of the word "test." Yet any history is going to have to pick a definition and use it. In doing so, it will create winners (those who agree with the author) and losers (those who do not), and the author will have to tacitly insult some people - at least by ignoring them.
And it gets worse. The first book I read on software testing, I would call a "bad book." Oh, it gave me lots of terms like stress testing, functional testing, and load testing, but in terms of giving me ideas to change my behavior - well, it failed miserably. Yet it was a relatively early and popular book on testing in a windowed environment - should it be on the list?
What about Avionics, Embedded Systems, MILSPEC, Medical Systems, Mission and Life Critical Systems? They've developed an entire testing body of knowledge outside of my main expertise. Are they part of the history of testing?
What about the inspection and walkthrough literature? What about the "quality as prevention" literature? Is that testing?
How do I separate development ideas (Waterfall, Agile, The Mythical Man-Month) from testing? Can I? And if I can, isn't there significant information to be gained from how testing adapted to work with new development paradigms? For example, on the dev side, the solution to testing Enterprise JavaBeans turned out to be essentially ignoring the bean and creating something called a POJO - a Plain 'Ole Java Object - then having the bean serve as a wrapper around it. Most of those evolutionary stories aren't written down - at least, not in book form. To find out, I'd have to interview people, then sift their stories.
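To make that POJO story concrete, here is a minimal sketch of the pattern as I understand it. The class names, numbers, and the @Stateless annotation are hypothetical - the point is only that the plain object carries the logic and can be unit tested without a container, while the bean just delegates to it.

    // Each class below would live in its own source file; names are hypothetical.

    // The POJO: plain business logic, no container dependencies.
    public class InterestCalculator {
        public double monthlyInterest(double balance, double annualRate) {
            return balance * (annualRate / 12.0);
        }
    }

    // The EJB becomes a thin wrapper that simply delegates to the POJO.
    @javax.ejb.Stateless
    public class InterestCalculatorBean {
        private final InterestCalculator calculator = new InterestCalculator();

        public double monthlyInterest(double balance, double annualRate) {
            return calculator.monthlyInterest(balance, annualRate);
        }
    }

    // The interesting behavior is tested against the POJO directly -
    // no application server, no deployment, just a plain JUnit test.
    public class InterestCalculatorTest {
        @org.junit.Test
        public void chargesOneTwelfthOfTheAnnualRateEachMonth() {
            InterestCalculator calculator = new InterestCalculator();
            org.junit.Assert.assertEquals(10.0,
                    calculator.monthlyInterest(1200.0, 0.10), 0.0001);
        }
    }

The bean still exists for the container's sake, but the logic worth testing no longer needs the container at all - which is exactly the kind of adaptation story that rarely makes it into a book.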
And to go back to what I said earlier - a list of publications and books isn't really a history, it's a collection of artifacts that are popular at a given time. Figuring out what is really going on would mean going directly back to the community. Crispin and Gregory did it in their Agile Testing book by going to people working in the field today; a real history of testing would mean going back to the people in the field thirty, forty, or fifty years ago. (Yes, Jerry Weinberg led an independent test team in 1958. How many Jerry Weinbergs will I find?)
Then there are developer-facing test techniques. Behavior Driven Development, for example, is an innovation and an idea -- but I don't think it has much to do with what I mean when I say testing. In a history text it should probably merit a mention or footnote - but what do you do when the entire "text" needs to fit on a cheat sheet?
Then you've got the ideas before testing: the Western tradition of philosophy, the Chinese tradition, Francis Bacon and the Enlightenment, the history of electrical engineering, Karl Popper, the history of hardware testing, Zen and the Art of Motorcycle Maintenance. These quickly go beyond my expertise, and yet including a reference or two with some gaping holes could easily be worse than nothing.
Remember that math textbook I started with? I don't think it had anything in it after about 1800 - so it had the benefit of a few hundred years of evaluation by professors and academics. It could also benefit from their insight, built up over decades of studying the rivalry between, say, Newton and Leibniz - to see who should be credited with integration, and who invented the symbol.
I suspect the reason the author stopped at the 1800s was that, if he had to pick winners and losers, he would do it with people who had long since passed and been judged, to some extent, by history. To do that, the author did a great deal of scholarly research, built on top of hundreds of years of other research. He evaluated, he studied, and he thought. And, eventually, he invested thousands of hours of time and wrote a book.
Yesterday, I wrote a blog post. I hope you will agree that, to do it "right", it should be a more formal scholarly work - which it was not. I hope you'll also agree that there is more than one definition of "right" - and they conflict. We do not have consensus in our field, and a naive list of history could easily create the wrong impression.
Let's put this in perspective: the task is huge, and daunting. To imply that it can be done by slapping a blog post together is to trivialize the complexity of the task, something we are already willing to do far too often in software testing.
So let's not call it a list of ideas in test history. It is not. It is, instead, my personal view of important works (to me) - more like an F.A.Q. list, with some sense of evolution.
I'm happy with the content, but I believe it needs to be framed more accurately. Which means it needs a more accurate title. How about:
A) On the shoulders of Giants
B) How I got where I am
C) My influences in software testing
D) A fistful of ideas about software testing
E) Dr. StrangeCode or: How I Learned to Stop Worrying and Love Testing
What do you think? Or do you have a better idea?
More to come.
3 comments:
I think that's a great idea, Matt. The fact that these events/books/technologies were triggers for you personally and that you had lived through them and "walked the talk" would be more meaningful anyway.
A list provided by or augmented by someone else kind of just becomes a list - nothing of relevance to any one person. And considering the length and breadth of our field, you'd inevitably leave something meaningful to some group of folks off the list and they'd view the whole exercise as questionable. No one can really question the validity and integrity of someone's personal experience, however. I believe that would be both valuable and interesting.
Hello Matt,
Funny how minds separated by miles can have similar ideas with different approaches. Recently I posted a similar story on my blog, What we can learn from history?, just to see if there are moments in time which might have a relation to software testing, or how they influenced testing.
I was triggered by the idea that we look too much at improving testing by doing things similar to what has already been done, historically, in other fields of expertise.
regards,
Jeroen
See the history of software testing at http://extremesoftwaretesting.com/Info/SoftwareTestingHistory.html