Yesterday I started reading Agile Testing: A Practical Guide for Testers and Agile Teams by Crispin and Gregory. Oh, it's a good book. I think the authors deserve serious applause and credit. They went out into the field and asked people what really works - and waited for multiple responses before weighing in. They had an extended review process, and they had hundreds of references. I'm impressed, and I recommend it.
But something happened when I started thumbing through those hundreds of references - I noticed that of 200 or so references, all but one were dated after the agile manifesto, which was penned in 2001. The sole exception was a paper about session based test management, written the year before.
Don't get me wrong; telling the history of software testing is simply not the goal of the Agile Testing book - it is about agile testing, something essentially born in 2001, and it does a good job of that.
Yet the Agile Manifesto says that we are "uncovering" new ways of developing software by doing it and helping others do it. To do that, shouldn't we have a balanced sense of history?
Example: Not too many years ago, I was a young student of mathematics and took a 400-level course called "History of Mathematics." In fact, I'm pleased to see it's still on the books.
The course gave us a sense of the history of where math came from - from ol' Pythagoras through NP-Complete.
More importantly, /I/ learned a lot about how mathematicians think; how they model and solve problems - from direct proof, to proof by induction, to reductio ad absurdum. Reductio ad absurdum, for example, is really interesting: To prove something is true, assume it's false, and keep going until you find a contradiction.
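The textbook example (my sketch from memory, not from the course notes) is the proof that the square root of two is irrational:

```latex
\textbf{Claim:} $\sqrt{2}$ is irrational.

\textbf{Proof (reductio ad absurdum):} Assume the opposite: $\sqrt{2} = p/q$
for integers $p, q$ with the fraction in lowest terms. Squaring both sides
gives
\[
  2q^2 = p^2,
\]
so $p^2$ is even, hence $p$ is even; write $p = 2k$. Substituting,
\[
  2q^2 = 4k^2 \quad\Longrightarrow\quad q^2 = 2k^2,
\]
so $q$ is even too. But then $p$ and $q$ share the factor $2$, contradicting
"lowest terms." The assumption fails, so $\sqrt{2}$ is irrational. $\blacksquare$
```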
But I won't bore you with math proofs; this is a testing blog.
So, if you wanted to read an article or take a course like that for software testing: A history of the ideas in the field --- where would you go?
...
time passes ...
...
Chirp, Chirp.
...
Oh, sorry, that's a cricket.
Oh, perhaps, if you were an academic, you might find a survey of the testing literature on CiteSeer.
If by some miracle you find one written in plain English, drop me an email. In the meantime, I'm not holding my breath. I've been thinking of developing a paper, talk, lightning talk, article, series of blog posts ... something about the history of software testing to give the newbie some idea of the ground we are covering, so we don't have to have this same discussion of "should the testers be involved up front" again and again and again.
So, let's see what we have totally and completely off the cuff:
1958+ (Buddy Holly; Elvis)
Jerry Weinberg leads project Mercury Test Team (IBM), first independent test team
1960's (The Beatles; Star Trek)
Computer Programming Fundamentals, by Herbert Leeds and Jerry Weinberg, describes software testing
PL/I Programming: A Manual of Style, Weinberg publishes the triangle problem for the first time
1970's - (Lynyrd Skynyrd, Thick Ties)
Time Sharing Systems. Birth of "It works on my machine."
The Art of Software Testing, Glenford Myers
- Equivalence Classes, Boundaries, Error Guessing, Cause/Effect Graphing (It's history, good and bad) - Functional to Unit Testing
"Managing the development of Large Software Systems", Dr. Winston Royce.
Cyclomatic Complexity, Thomas McCabe
1980's - (Swatch Watch)
Software Testing Techniques, Boris Beizer
Black-Box Testing Boris Beizer
- Every software program can be expressed as a directed graph of blah blah blah blah
Test Cases; V-Model System/Integration/Unit
Code, Branch, Input Coverage Metrics
Really interesting stuff in Silicon Valley; tester-as-expert mythos (The Black Team)
"Rethinking Systems Analysis and Design", Jerry Weinberg - Iterative Development and Testing described
Early 1990's - (Thin Ties)
Record/Playback (WinRunner)
Testing Computer Software (Kaner, et al)
Bug Tracking and Version Control become popular
- Version Control changes the popular meaning of regression testing
ASQ-CSQE (and Deming, and Drucker, and Juran)
"Software Testing, a craftman's approach" Jorgensen, Petri Nets and State Transitions
STAR Conferences Start, 1992
Later 1990's - (Friends)
Los Altos Workshops on Software Testing Begin
When should a test be automated, Brian Marick
Test Driven Development - Extreme Programming - Becomes Popular (Beck et al, xUnit)
Customer-Driven Acceptance Tests (The XP Crew)
Exploratory Testing (Bach)
Rapid Software Testing (Bach/Bolton)
Test Automation Snake Oil, James Bach
- The Minefield Problem
Performance Testing / Web Testing Takes off
Heuristics (Bach)
"How to Break Software", James Whitaker, Quick Tests (Also, ESH popularized quick tests about this time)
Early 2000's - (The West Wing)
Session based test management (Bach)
Keyword-Driven (Coined 1999, Graham/Fewster, popularized by Linda Hayes/Worksoft Certify)
Manifesto for Agile Software Development
Continuous Integration. (More Agile-ness)
FIT/FitNesse (Ward Cunningham)
"Lessons Learned In Software Testing" (Kaner/Bach/Pettichord)
Watir (Pettichord et al)
ISTQB
Six Sigma
"Key Test Design Techniques", Lee Copeland. Unifies approaches to testing; popularizes the insurance problem as an alternative to the Triangle problem
Model-Driven Testing, Harry Robinson
"Software Engineering Metrics: What Do They Measure and How Do We Know?", Kaner
Later 2000's -
Selenium
Mocks, Stubs and Fakes.
Acceptance-Test Driven Development (Marcano and Hendrickson)
"Agile Testing", Crispin and Gregory
Faught questions the teaching value of the triangle problem
The Balanced Breakfast Strategy
Hopefully, by now, readers know that I am a "Throw stuff up against a wall and see what sticks" kind of person. This list is ugly, probably contains mistakes, and is just a start. It's tentative. It contains a history of the evolution of Agile/Context driven ideas. If your pet paper, book, or idea isn't on this list, is influential, and fits, leave a comment and it may get on the list.
The idea is to get our comment juices going and start filling in the gaps; to help make a list that is good enough and yet comprehensible.
Then I'll turn every reference into a hyperlink, and it'll be a self-study guide. With a little more work, it might turn into an article or presentation.
What do you say; want to help me out?
UPDATE: For the purposes of this post, I'll consider functional, performance, and test management as "testing"; I may do a future, separate and distinct list for security testing or for regulated testing (FDA, MILSPEC, etc. are out of scope).
Sunday, March 22, 2009
21 comments:
Matt,
Some other ideas could be included that have been important learning points in the journey:
1. Boundary Value Analysis: BVA was a big learning tool back in the late 80s (at least in my college in India).
2. Quantitative Test and Defect Metrics: Stuff like pre-release defect density, post-release defect density, defect containment metrics, etc. was very much part of the vocabulary in the mid 90s in an ISO/CMM-heavy world (out here in Bangalore, at least).
3. Orthogonal Defect Classification (ODC) was a good tool that unfortunately was never utilized well.
4. Not sure if I saw this (might have missed it), but defect seeding was another good idea to gauge the latent defects in software and estimate the probability of residual defects based on how many of the seeded defects were found by the testing process.
Hope this helps
Tathagat
http://managewell.net
See James Bach's chapter in The Gift of Time. There's a mini history of software testing in there.
A couple of points for you to correct: Jerry dates his involvement with the Mercury Project to 1958.
The first serious writing on computer software testing of which we're aware (that is, there may well be earlier writing on the subject) is in Computer Programming Fundamentals, by Herbert Leeds and Jerry Weinberg. The first edition was published in 1961. I have a copy of the second edition, from 1970.
The Computer Program Test Methods Symposium was held in Chapel Hill, NC, June 21-23, 1972. About 200 people attended. There is a brief overview of the Symposium in the ACM archives; waste neither time nor money on that. Instead, see the book by Bill Hetzel, Program Test Methods.
The omission of reference to testing history in Agile Testing is one thing. The complete omission of a bibliography in How We Test Software at Microsoft is quite another.
Dunno how much I'd be able to help out but it's something I'm really interested in learning more about
I was going to post a lot of what Michael said. Then I wasn't going to post at all.
Then I read (for at least the 2nd time) how disappointed Michael is that HWTSAM doesn't have a bibliography. The strategy with the book was to reference books in line as footnotes. For one chapter (the MBT chapter) we included a recommended reading list. That was the plan - sorry it ruined the book for you Michael.
Anyway - sorry for going off topic, but Michael started it :}.
Oh yeah - you should probably mention something about testing and debugging originally meaning the same thing. Also, Myers' previous book (Software Reliability, 1976) talked quite a bit about testing (it's 4000 miles from me, or I'd add the proper bibliographic reference).
Even later 2000s, is Behaviour Driven Development---even though its primary goal isn't what testers consider testing.
This is essentially what industry analyst Neil Ward-Dutton is talking about in his blog today (albeit from a slightly different angle) - Industry's dangerous flirtation with software quality - http://www.ebizq.net/blogs/softwareinfrastructure/2009/03/why_we_need_alm_industrys_dang.php - that, if you ignore the history of software testing and forget what the industry has already learned, we’ll carry on walking in circles…
I think it is a great idea to build some kind of comprehensive reference resource – invaluable for newbies to the industry.
Some items from an Indian subcontinent software testing standpoint ...
1. Rise of outsourced IT services - software testing was seen as the first and least risky work that could be outsourced ... Not sure about the first instance of such a thing happening ... I believe it picked up around the year 2000, in the post-Y2K years.
2. Outsourced testing, and hence independent testing, started to make appearances ... that apparently changed the face of software testing in India, again around the year 2000. Independent testing was sold on the premise that developers do a poor job at testing and there is a need for an independent "eye" free from biases.
3. Software metrics (GQM, PSP/TSP and others) have had a great role to play in testing in general and outsourced testing in particular.
4. Software Quality Models - CMM/CMMI are often (even today) quoted as "benchmark" models that can be used for testing. Considering testing as a separate activity within the SDLC was an important milestone. It was at this milestone that a dedicated role of tester was conceived.
5. Software models like V model, Waterfall, iterative, RUP and others
6. Business domain/SME role in testing. Asking a tester to prove a certain level of competency in the business domain of the application to be tested ... has broadened the areas that testers need to focus on.
7. Test process maturity models - TMM, TPI, TMap and others. There is a feeling that someone, somewhere knows how to do testing better. These models take the moral/social/cultural high ground of saying "I know how to do testing in the best way; I can assign a maturity level."
8. Proliferation of "best practices" into Testing - courtesy outsourcing.
9. Introducing "general systems theory" to testers. I personally gained from the idea that knowing systems theory helps improve one's thinking and hence one's testing. This, I think, can be attributed to James Bach and Michael Bolton ...
(not sure if this qualifies to be in the list ... but it is, in my opinion - a distinct development)
10. Microsoft's/Google's way of approaching testing - mainly testing code, as opposed to testing everything around the code as well as the code itself. That was a distinct style.
11. Roles like Test architect, SDET have their own implications on history of testing
12. The current economic slowdown forces "testing" service consumers to think about their approach differently ... maybe a new order in testing is taking shape. Business leaders/CIOs ask how to cut costs, and when that translates to testing ... it will drive testing in a different way.
I am still thinking ... what else can be included in the list ...
Thanks, TV, Shrini, and everyone. There's certainly a lot to integrate.
I think you've got a point about the rise of the SDET-role; I think it's important and worth mentioning. As for CMMI, Metrics, PSP/TSP, I don't know how much attention I want to give them. I mean, how much influence do any of those have on testers today? Did they ever? (Metrics in general/GQM is probably worth a nod, I admit.)
Hmm. I think what I'll do is link to the famous Kaner Paper on Software Engineering Metrics. Thanks!
I may also do a parallel track for the rise of ideas in /development/; for example, under MS-DOS you might only enter one field at a time and press < RETURN > after each. Under Windows, you can jump back and forth and click whatever you want to. Also, keeping a program "around" in memory became much more popular. This gave rise to a new kind of memory-leak - and test techniques had to rise to match it. (The "Shoe Test" probably evolved around this time.)
thanks again! Anyone else?
I know I've seen at least one detailed history of the field, perhaps a conference paper. It'll take some digging to find it.
Have you seen Lee Copeland's "Nine Forgettings" talk? (http://www.stickyminds.com/Media/Video/Detail.aspx?webpage=110) His first forgetting is "Forgetting our Beginnings," which bolsters your motivation for finding the history.
Was your PL/I reference intended to be "PL/I Programming: A Manual of Style", the first mainstream mention of the triangle problem? See (http://groups.yahoo.com/group/software-testing/message/5314).
Preceding Myers' testing books was "Program Test Methods", 1973, which is often discounted because it's a collection of conference papers.
The "Black Box Testing" book you cited wasn't nearly as influential as his earlier book "Software Testing Techniques".
Swatch Watch? Are you checking to see if we're paying attention? :-)
I'd be curious to dig up when various online resources appeared, like comp.software.testing and Marick's FAQs.
The first book to mention the concept of keyword-driven testing seems to be Linda Hayes' "The Automated Testing Handbook," though she doesn't call it keyword-driven testing. The term still hasn't caught on well, and I wonder who coined it.
While this is a noble task, to be done 'correctly' (whatever that means) I think you need to put aside your own biases and beliefs. Your original list basically traces the ancestry of the context-driven and agile schools.
Shrini's list does a large part of the work to flesh it out, though your response, specifically "As for CMMI, Metrics, PSP/TSP, I don't know how much attention I want to give them. I mean, how much influence do any of those have on testers today? Did they ever?", further worries me about the inclusiveness of the list. I guarantee you CMMI has an effect on testing. And are you saying that metrics don't have an effect either? Really?
I almost think that an article might not be the right approach for this task. With all the forks, branches, variations, schools, etc., something like the history of unix diagram might make more sense.
Thanks, Adam. It might be more accurate to say that I'm not going for a general list. For example, I'm sure there is work outside my expertise in the areas of FDA and Avionics. So I'll start with my small tree.
The idea is to develop this history incrementally and have it fit on 8.5"x11". Beyond that, I'd include CMMI, metrics, history of sw dev, etc.
Right now I'm thinking something like ESH's Test Heuristics Cheat Sheet. ( http://testobsessed.com/wordpress/wp-content/uploads/2007/02/testheuristicscheatsheetv1.pdf )
If I were smarter I could remember all the things that helped me over the years. But it all blends together in my brain and I can't sort it out. I've always said I'm not an idea person, but good at implementing the ideas of others. I hope people will find our book bibliography helpful. Yes, the books in it are more recent, but some of their authors go way back, just like I do. :-> I like to think Testing XP (2002) was a help too, given that no other agile books of the time talked much about testers. Nice work Matt!
This is really food for thought! Thinking back over own career - as baby pgmr, none of us knew anything from industry experts. In SW industry 10+ years w/o knowing there *were* experts. Got into testing by accident, big revelation to go to conference and hear Beizer and Kaner. Brian Marick, Elisabeth Hendrickson and Bret Pettichord huge influences for me. But I don't hide the fact, I don't know all these official techniques and terms.
This is an important idea. Thanks for starting this list.
It is vitally important to understand the history of software testing before software (Thomas Edison, George Washington Gale Ferris Jr., Zog the inventor of the wheel). None of these ideas are brand new; they have a history in engineering and mathematics. It does not need to be extensive, just a nod to its ancestors. I have hoped for some time that someone would touch on this topic.
I also hope someone looks closely at quality in other fields and how we might learn from them. (I would love to see a book on software quality concepts stolen from the fast-food industry for instance. Can we forget "Zen and the Art of Motorcycle Maintenance"?)
I am glad to see that there is a reference to software development in the comments. Any list must recognize that software testing, though possibly limited, began with Lady Ada and continues on with every developer.
It frustrates me to attend an excellent conference on software testing and discover the latest new thing is "lint" for Ruby. We move so fast in this industry that we continually recreate the wheel with no reference to past accomplishments.
One last tangential thought. Perhaps the Agile Testing book did not look at previous work because so much of the early work was thought to be unimportant. There were a lot of really bad books on Software Testing early on - I have a couple. Please feel free to point this newcomer to a list of good software testing books.
I look forward to this list as a relative newcomer to the testing and quality domain.
These might be cogent things to note.
Record/Playback for PC level applications (DOS, OS/2, Windows) began with tools like AutoTester for DOS (late 80's), ATF (for OS/2 circa 1991), SQA Robot & Segue QA Partner & WinRunner & MS-Test (for Windows circa 1992-1993).
Version Control for PC was the late 80's with PVCS by Polytron and RCS by MKS.
Commercial Defect tracking tools started around 1993 with Defect Control System (DCS) by Software Edge. It later became PVCS Tracker. This tool was based on the internal HP tool (the guys who started the company were ex-HP) and metrics from the Grady / Caswell book on Software Metrics from HP.
Before the STAR conferences there was the ISTQE conferences from the late 80's to mid 90's. These were bigger than STAR for a number of years.
Keyword-Driven Testing & Frameworks were an offshoot of Hans Buwalda's work on 'Action Word' based techniques in the early to mid 90's. And I wouldn't say Linda Hayes popularized the technique so much as commercialized it with the Certify tool. Before that it was all custom work done with various tools.
And finally regarding outsourcing of Testing (not offshoring) that began in the late 80's to early 90's with companies such as XXCAL in L.A. (they did compatibility testing and staff augmentation), Software Research in San Francisco (consulting and tools) and STLabs (one of the first true test labs that did all services) in Seattle (James Bach's launchpad).
To me one of the most important events was the introduction of the 'GUI Map' in Winrunner. It overcame the maintenance problem of changing Objects in an application and lead to the ability to have re-usable scripts. This helped to make Test Automation more robust and usable. Before that scripts were very fragile and a pain to maintain.
You haven't mentioned Software Reliability Engineering, a "top ten" practice at AT&T. John Musa is the leading light in this area.
A couple of comments on various comments:
comp.software.testing got off the ground in 1991 and I think became official the same year.
DDTS (eventually bought by Pure which was bought by Rational which was bought by IBM) was a commercially available bugtracking system from the late '80s.
GUI test tools go back to the late '80s, even to the extent of oo based frameworks.
The original QAPartners (later Segue) used "objects" that we abstracted from gui elements and allowed for more robust gui testing back in the early 90's (don't know how early, but was there by '93)
And, believe it or not, independent test teams go back to the earliest days as most software in the early days was mission, if not life critical.
Veritas had an extensive set of tools and framework for metrics and programmatic testing, and I believe introduced error seeding as a commercial tool back in like '91
Mention of Oracles and when/who introduced them would also be good.
The cobwebs are slowly parting here. Must be time to shut up.
Software Testing is such a new field that we can hardly make any sense out of its history. It is still going through its era of infancy.
http://softwaretestingjobs4u.blogspot.com/
I helped launch both the original releases of both Watir and Selenium. Both happened in early 2004. I can't remember which was first, but they were only months apart.
This is an interesting list, and it was good of you to think of putting this list together after making your observations about the bibliography of Agile Testing.