At the very end of yesterday's post I mentioned Yo Dawg. Yes, Yo Dawg is important, in that it's a meme.
So what's a meme(*)? It's an idea - a concept that spreads from person to person. "All Your Base Are Belong To Us", LOLCats, and De-Motivators are all memes.
What's interesting about these memes on the intarwebs is not just that people see them and laugh - or that they see them, laugh, and forward to friends. It's that they make /their own copies/ - their own riffs on the idea - and add to a body of knowledge. And, thanks to Google, a great many of them are indexed. Thanks to PageRank, the better ones come to the top.
Ideas in software development are memes. Test-Driven Development is a meme; Context-Driven Testing is a meme. Acceptance Test Driven Development is a meme.
JB Rainsberger isn't just interesting because he's contributed to the software development body of knowledge; it's because he read an XP book, never met the original authors, yet ran with the idea and helped popularize it - creating an entire series of events called "XP Day" all over the world. JB was infected by a meme - and took it on enough to make it his own.
Seth Godin recently spoke on Memes at the Business Of Software Conference:
---> For example, the canonical meme from the 20th century was sliced bread. For someone to use the phrase "... the best thing since sliced bread" implies that sliced bread must be pretty good to start with, no?
Here's one item from Godin's message: some memes are destined to win, regardless of whether or not they work. Test automation, for example, is very appealing to developers, because automation is what they do. It's no surprise that, when devs look at testing as a computer science problem, automation is the first thing to come to mind. So we have generation after generation talking about test automation as the be-all and end-all of the test process, without ever having actually studied the very human, cognitive, and communication process of software testing, nor having done any research on the failure modes of software defects.
Thus, after the wild (and deserved) success of test-driven development, we have the acceptance-test-driven-development meme creating immediate success with much less tangible evidence.
This should be no great surprise; for twenty years Westerners have been conditioned to two similar memes - that housing prices always go up, and so does the stock market. These have near-universal appeal. Somewhere out there, right now, an investment counselor is suggesting a couple put away 20% of their income in stocks as a retirement nest egg, and a real estate broker is suggesting another couple purchase the biggest home they can afford, because, after all, salaries go up over time, right?
I believe that the communities I belong to - including the folks who read this blog - have ways to test software that are significantly better than the status quo, and we have ways to communicate them and techniques to teach them. Yet if our testing ideas are memes, we need to think about ways to package and present them to win. I believe research and experience can /help/, but often humans don't make decisions rationally.
I'm not looking for labels. Agile is a label; anyone can claim it. I want memes we can grab, embrace, make our own, and share. So how can we connect our ideas to make them memes that are viral (or perhaps, "more viral")? This, I believe, is a conversation we should be having.
I do not claim to be a master of memes, except perhaps of the kind "I'm on a boat." "The Boutique Tester" is probably my most recent idea with traction. (Too bad I have no free billable hours.)
What are your ideas, and what do you think?
--heusser
UPDATE: Do you know people who quote "The Holy Grail" from Monty Python for no apparent reason? Maybe you're one of them? /That/ is a meme.
(*) - The idea was popularized, and possibly coined, by a British gentleman named Richard Dawkins in his book The Selfish Gene.
Friday, July 31, 2009
Thursday, July 30, 2009
Let's overflow the stack
I've previously mentioned StackOverflow.com - a website with answers to technology questions. It is a joint venture of Jeff Atwood and Joel Spolsky, two of the most popular bloggers on software development. To promote the site, the two also have a podcast where they discuss current trends in development and issues with the site.
They also answer questions; to get yours answered, you call a phone number: 646-826-3879. No emails - you have to actually pick up the phone; they record your voice and put it into the podcast directly.
What does this have to do with testing?
Adam Goucher just put out a post suggesting that we get some coverage of testing on the Stack Overflow podcast by, well, calling in and asking for it.
Adam has already done his bit, so I called in and did mine:
Hello. This is Matt Heusser and I'm a tester in West Michigan. Historically, Joel has been a proponent of a 'tester' role and a little suspicious of Test Driven Development. As stackoverflow is a browser-based app with user-created content that supports a half-dozen browser variants, I'd like to ask you to describe your strategy for customer-facing tests - and maybe tell us about a few lessons you've learned along the way. Thanks.
Care to join us?
UPDATE: Yo Dawg, I herd you like testing, so I put testing in your podcast so you can test while you podcast! ( Explanation here )
Tuesday, July 28, 2009
Beautiful Testing - III
What, you haven't purchased an advance copy of Beautiful Testing yet?
I can't blame you. How do you know if it's going to be any good?
Well, one way is to read the writing of the co-authors and see if you find it valuable. I will introduce a few here ...
Adam Goucher has a nice blog post introduction to each chapter
John Cook is writing about testing a random number generator, and also has an interesting blog.
Lisa Crispin, yes, co-author of the Agile Testing Book, has her own blog.
And there's Scott Barber, plus Chris McMahon and also Tim Reilly of the Mozilla Foundation.
I hope getting the blogs for free helps you make up your mind if the book is a good investment of your time and money. Me, I've already invested a great deal of time in it, so here's hoping ...
Monday, July 27, 2009
Of "Jelled" Teams
I've been at Socialtext for about a year and a half now, and I just realized that so has everyone else on the engineering team; our two short-timers are Jeremy and Audrey, who are just coming up on one-year anniversaries.
And we actually know each other; we can give each other a hard time and actually debate ideas on merit instead of working hard to appease each other. I just read a recent note on the 37Signals Blog to that effect and it resonated with me.
And yet, given a random book on methodology or software management, you are unlikely to find anything on longevity and teamwork besides, if you are lucky, a few cliched team building exercises or perhaps a passing reference to forming, storming, norming, performing.
That's just ... sad. Perhaps it is something /I/ need to start talking about more often.
Friday, July 24, 2009
Cargo Cult Everything
ScrummerFall.
Cargo Cult Scrum.
Big Agile Up Front.
Cargo Cult Extreme Programming.
Fake Lean.
For some reason, there seem to be a lot of people who "just don't get it." What's that all about?
Well, my first answer would be that our development methods are often characterized by faith. People who have a faith /believe/. Hearing failure stories is disheartening. So it's much easier to simply pry and pry until you find something you don't like, then declare the person wasn't /really/ doing Agile, or XP, or lean anyway. The technical term for this is the "No True Scotsman" fallacy, where you claim no Scotsman could have committed the crime. Then, when it turns out the criminal was Scottish, you reply "well, clearly, he isn't a true Scotsman, as no true Scotsman would commit such a crime."
But I suspect there's more to it than that.
My opinion, my very strong opinion, is that companies exist inside a system of forces. For example: Consider two software teams.
Team A works for an IT shop of a large company. The software they write is developed and released only to employees - people inside the company. The VP of accounting can send an email to the entire company that says "From now on, expenses will be tracked with our ExpenseTracker application." Additional money spent on making the software pretty will not help adoption; everyone has to use the software to get reimbursed. The small projects are relatively isolated and only have to support one browser.
Team B works for a commercial, web-based software company. They make money by viral adoption of services. They need to support every popular browser - and every popular browser version - on the planet. The software does a lot of complex GUI things that use a large amount of JavaScript or Flash.
Now imagine you are a 'coach' with experience with company A, hired to help team B.
You come on site. You learn the process of team B.
Why, team B has this complex regression test process - what waste! We need to eliminate that, move to weekly iterations, get the logic out of the JavaScript so that we can test below the GUI using some sort of business-logic application.
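What the coach is proposing - testing business logic below the GUI - might look something like the following sketch. This is a hypothetical Python example; the expense rule and the function name are invented for illustration, not taken from either team's code.

    # Hypothetical sketch: pull the business rule out of the UI layer so it can be
    # checked directly, with no browser and no JavaScript involved.

    def expense_is_reimbursable(amount, category, has_receipt):
        """A plain business rule, callable from any UI - and from a test."""
        if amount <= 0:
            return False
        if amount > 25 and not has_receipt:
            return False
        return category in {"travel", "meals", "supplies"}

    # Below-the-GUI checks: fast, repeatable, and browser-free.
    assert expense_is_reimbursable(40, "travel", has_receipt=True)
    assert not expense_is_reimbursable(40, "meals", has_receipt=False)

For Team A's one-browser expense tracker, checks like these may be most of what you need; for Team B, they say nothing about how all that JavaScript behaves across the half-dozen browsers it has to support.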
Sure. You could try that. And fail. And someone will come along and say you are doing 'cargo cult continuous release' or "doin' it wrong" or whatever. After all, what you /really/ need is scrum, or XP, or DSDM, or ...
Wait. Stop. Please. Here's a crazy idea:
Scrum, XP, Lean, Continuous Deployment, and Kanban all evolved to solve a specific problem that your team may or may not have. They also introduce problems that may or may not be a big deal for your team.
So to understand what improvement means, we actually have to model the system of forces, come up with a strategy, simulate it, see if it will make sense, try it, then adjust it over time. That's what a great deal of this blog is about.
But Jerry Weinberg suggested we have a rule of three, and I've only come up with two explanations for ScrummerFall and the Cargo Cults. What do you think?
UPDATE: I think it's worth noting that in the example above, the consultant is not a liar, a faker, or a charlatan - he could be /wildly successful/ at Company A and yet fail at Company B. As an industry, we need to equip people to cross that chasm, instead of using universal labels and best practices. (Quick litmus test for a consultant: get to a specific recommendation, and ask when that idea might /not/ apply. If he claims it's universal, be very careful. I used to say the best practices I recommended were things like "Wear deodorant", but Ben Simo pointed out that some people are allergic to deodorant ... :-) )
A year of columns!
When I came to Socialtext, I heard a number of voices that said the company had a history of problems; but the guy who was trying to bring me in was Chris McMahon. And Chris has a reputation for honesty and congeniality that is simply unmatched in the field. Chris said it was a good deal; I believed him, I took the gig, and I am very glad I did so.
By working together every day, the familiarity I had with Chris soon turned into friendship. Two months into the gig, when I proposed a column for Software Test & Performance Magazine, I pitched Chris as my Co-Author. For some reason, the editor at ST&P went for it. (I suspect it was because of Chris.)
And for a year now, I've had the pleasure of putting out posts that say "hey, check out our newest column in Software Test & Performance Magazine (Link to a PDF). We are on page 9." (Occasionally, page 10.)
Well, STPMag.com just did a major site redesign, and the articles are indexed and available on-line free. Just go to http://www.stpmag.com and type in "Heusser" in the search box - or Follow This Link.
If you want to read articles older than that, you can find an index of my publications on my (old) website. Yes, it needs a good Saturday afternoon's worth of updates, but in the meantime, that should give you roughly a short novel's worth of content to enjoy. :-)
Tuesday, July 21, 2009
A brief, unfair, and wrong history of developer-testing in the 21st Century
Step 1 - Be frustrated with the process heavy and skill-free testing done on most projects
Step 2 - View testing as a clerical process to be automated
Step 3 - Ignore the input of the skilled, competent test community about what testing actually is and their experience automating it
Step 4 - Invent TDD and Automated Unit Testing, a Real Good Thing
Step 5 - Extrapolate, by the same logic, that Acceptance Tests should be automated in the same fashion, even for GUIs
Step 6 - Try It
Step 7 - Fail
Step 8 - Repeat steps 6&7 if needed
Step 9 - Realize that some GUI-driving test automation makes sense in some cases, but it is, essentially, checking, not investigating
Step 10 - Ignore the testing community, who have been saying that for ten years
Step 11 - Declare myself an expert. Try to angle for a keynote at Agile 200X.
The steps above are actually a composite of a number of people and ideas. But just maybe enough of an approximation to post ...
Speaking in Ann Arbor - July 22nd - Evening
I just signed up to speak on Agile Testing in Ann Arbor July 22nd, 2009.
The event starts at 6:15 and includes dinner for just seven bucks.
Here's my abstract:
How then, should we test?
Traditional ("waterfall") development relies on a single test/fix/retest cycle at the end of the process. Agile and iterative development implies dozens of quick iterations per year - vastly increasing the testing burden. Matt Heusser will discuss the dynamics of software testing on agile projects, some of the more popular approaches, and finally lay out how his team does testing at Socialtext, including a brief demo of some of the dev/test toolset they have development. Matt will make some bold statements about software testing that you may -- or may not - agree with. The only thing he can promise is that you'll leave the room thinking - and you certainly will not be bored
Friday, July 17, 2009
Meaningful Metrics
Recently, a few people have pointed to me as completely opposed to metrics in software development or testing.
I wouldn't say completely opposed - for example, when the technical people gather their own metrics in order to understand what is going on and improve, really good things can happen.
No, I would say I am concerned about the use of simplistic measures that fail to measure the entire scope of work, or act in place of some harder-to-measure thing(*), or lack construct validity(**).
Most of the ardent fans of software metrics like to quote Tom DeMarco and his book Controlling Software Projects - for example, "You can't control what you can't measure." Now, for years, I've been confused by these quotes, as DeMarco spent the second half of his career writing books that refute the premise behind that quote, such as Peopleware, The Deadline, Waltzing With Bears, and Adrenaline Junkies. He even titled one of them Slack. No, I'm serious.
So I always found the copious use of that one quote a little unsettling.
Well, folks, there's good news. Twenty years after he wrote "You can't control what you can't measure", Tom DeMarco just wrote a column for IEEE Software explaining his current thoughts. Here's a quote:
"My early metrics book, Controlling Software Projects played a role in the way many budding software engineers quantified work and planned their projects. In my reflective mood, I'm wondering, was its advice correct at the time, is it still relevant, and do I still believe that metrics are a must for any successful development effort? My answers are no, no, and no."
Now, DeMarco isn't saying that Metrics and Control are /bad/, as much as they may not be the brass ring we should be striving for in software work.
So what should we be striving for? DeMarco appeals to us to shoot for the big idea - the multi-billion dollar concept, where if you blow your budget by 200% it still changes the world and enables you to retire.
Ok, fair enough; that's a decent chunk of the focus of this blog. But let's go back for a moment. What metrics do I like?
Well, for a software company, I do like revenue and expenses as metrics. They are real, hard things that have construct validity, they are not in the place of something else, and without them you go out of business. But that's not the only thing I like. What if we measured the number of heart attacks and divorces our teams experienced, and expected them to go down?
Now, you might have concerns about that for legal or PC reasons (discriminating against divorced people) - but I think it's really interesting as a thought experiment - as the idea that the whole person matters. And at least one company has done it.
Bravo, Obtiva. Bravo.
--heusser
(*) - Classic example: lines of code used to approximate productivity
(**) - Not all "test cases" are created equal
UPDATE: I just had a conversation with Markus Gaertner, a German colleague I work with. He had never experienced this desire for metrics to evaluate and control that I discussed above. We talked for some time about dysfunction and he did a blog post on it. My post certainly assumed a few bits of shared understanding about metrics - and I could be wrong about my assumptions. If you don't "get it" either, let me know in comments, and I can expand.
Monday, July 13, 2009
Beautiful Testing - II
Here's the first bit of my chapter of Beautiful Testing. I'd be interested in your thoughts ...
Peeling the Glass Onion at Socialtext
"I don't understand why we thought this was going to work in the first place" - James Mathis, 2004
It's not business ... it's personal
I’ve spent my entire adult life developing, testing, and managing software projects. In those years, I've learned a few things about our field:
(1) Software Testing, as it is practiced in the field, bears very little resemblance to how it is taught in the classroom - or even described at some industry presentations
(2) There are multiple perspectives on what good software testing is and how to do it well, which means -
(3) There are no 'best practices' - no single way to view testing or do it that will allow you to be successful in all environments - but there are rules of thumb that can guide the learner
Beyond that, in business software development, I would add a few things more. First, there is a sharp difference between checking[1], a sort of clerical, repeatable process to make sure things are fine, and investigating – which is a feedback-driven process.
Checking can be automated, or, at least, parts of it can. With small, discrete units, it is possible for a programmer to select inputs and compare them to outputs automatically. When we combine those units we begin to see complexity.
Imagine, for example, a simple calculator program that has a very small memory leak every time we press the clear button. It might behave fine if we test each operation independently, but when we try to use the calculator for half an hour it seems to break down without reason.
Checking cannot find those types of bugs. Investigation might. Or, better yet, in this example, a static inspector looking for memory leaks.
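To make the calculator example concrete, here is a minimal sketch - a hypothetical Python Calculator class, invented for illustration - showing how every check can pass while the leak quietly builds:

    # Hypothetical sketch: each per-operation check passes, yet clear() leaks a
    # little memory every time it is called.

    class Calculator:
        def __init__(self):
            self.value = 0
            self._junk = []                   # stands in for the slow memory leak

        def add(self, n):
            self.value += n
            return self.value

        def clear(self):
            self._junk.append([0] * 1_000)    # the "leak": memory never released
            self.value = 0
            return self.value

    calc = Calculator()
    assert calc.add(2) == 2                   # checking: each operation looks fine
    assert calc.clear() == 0

    for _ in range(10_000):                   # half an hour of real use is another story
        calc.add(1)
        calc.clear()
    assert len(calc._junk) > 10_000           # the junk pile grows with every clear()

The checks tell us each call returned the right number; only watching the program over time - investigating - surfaces the growth.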
And that’s the point. Software exposes us to a variety of risks. We will have to use a variety of techniques to limit those risks. Because there are no “best practices”, I can’t tell you what to do, but I can tell you what we have done, at Socialtext, and why we like it – what makes those practices beautiful to us.
This positions testing as a form of risk management. The company invests a certain amount of time and money in testing in order to get information - which will decrease the chance of a bad release. There is an entire business discipline around risk management; insurance companies practice it every day. It turns out that testing for its own sake meets the exact definition of risk management. We'll revisit risk management when we talk about testing at Socialtext, but first, let's talk about beauty.
Tester remains on-stage; enter beauty, stage right
Are you skeptical yet? If you are, I can't say I blame you. To many people, the word "testing" brings up images of drop-dead simple pointing and clicking, or following a boring script written by someone else. It's a simple job, best done by simple people who, well, at least you don't have to pay them much. I think there's something wrong with that.
Again, that isn’t a picture of critical investigation – it’s checking. And checking certainly isn’t beautiful, by any stretch of the word. And Beauty is important.
Let me explain.
During my formative years as a developer, I found that I had a conflict with my peers and superiors about the way we developed software. Sometimes I attributed this to growing up on the East Coast vs. the Midwest, and sometimes to the fact that my degree was not in Computer Science but Mathematics[2]. So, being young and insecure, I went back to school at night and earned a Master's Degree in Computer Information Systems to "catch up", but still I had these cultural arguments about how to develop software. I wanted simple projects, whereas my team-mates wanted projects done "right" or "extensible" or "complete."
Then one day I realized: they had never been taught about beauty, nor that beauty was inherently good. While I had missed a class or two in my concentration in computer science, they had missed something I had learned in Mathematics - an appreciation of aesthetics. Sometime later I read Things a Computer Scientist Rarely Talks About by Dr. Donald Knuth, and found words to articulate this idea. Knuth said that mathematicians and computer scientists need similar basic skills: they need to be able to keep many variables in their head, and they need to be able to jump up and down a chain of abstraction very quickly to solve complex problems. According to Knuth, the mathematician is searching for truth - ideas that are consistently and universally correct - while the computer scientist can simply hack a conditional[3] in and move on.
But mathematics is more than that - to solve any problem in math, you simplify it. Take the simple algebra problem:
2X - 6 = 0
So we add six to each side and get 2X = 6, and we divide by two and get X = 3. At every step in the process, we make the equation simpler. In fact, the simplest expression of any formula is the answer. There may be times when you get something like X = 2Y; you haven't solved for X or Y, but you've taken the problem down to its simplest possible form and you get full credit. And the best example of solving a problem of this nature I can think of is the proof.
I know, I know, please don't fall asleep on me here or skip down. To a mathematician, a good proof is a work of art - it's the stuff of pure logic, distilled into symbols[4]. Two of the highest-division courses I took at Salisbury University were number theory and the history of mathematics, from Dr. Homer Austin. They weren't what you would think. Number theory was basically re-creating the great proofs of history - taking a formula that seemed to make sense and proving it was true for the value one. Then you prove that if it is true for any number N, it is true for N+1 - which means the next one is true, which means ... you get it. That's called proof by induction. Number theory was trying to understand how the elements of the universe were connected - such as the Fibonacci sequence, which appears in nature on a conch shell - or how to predict what the next prime number will be, or why Pi shows up in so many places.
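A tiny worked instance of that pattern, using the 1/2^N series that shows up again in the footnotes as a favorite proof:

    Claim: 1/2 + 1/4 + ... + 1/2^N = 1 - 1/2^N for every positive integer N.
    Base case (N = 1): 1/2 = 1 - 1/2. True.
    Inductive step: assume the claim holds for N, then add the next term:
    (1 - 1/2^N) + 1/2^(N+1) = 1 - 2/2^(N+1) + 1/2^(N+1) = 1 - 1/2^(N+1).

So the claim holds for N + 1, and therefore for every positive integer - and as N grows, the sum gets as close to 1 as you like.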
And, every now and again, Dr. Homer Austin would step back from the blackboard, look at the work, and just say "Now ... there's a beautiful equation." The assertion was simple: Beauty and simplicity were inherently good.
You could tell this in your work because the simplest answer was correct. When you got the wrong answer, your professor could look at your work and show you the ugly line - the hacky line - the one line that looked more complex than the one above it. He might say "Right there Matt - that's where you went off the rails[5]."
By the end of the semester, we could see it too. For that, I am, quite honestly, in his debt[6].
Of course, you can learn to appreciate beauty from any discipline that deals in abstraction and multiple variables. You could learn it from chess, or chemistry, aerospace engineering, or music and the arts[7]. My experience was that, at least in the 1990's, it was largely missing from computer science. Instead of simplicity, we celebrated complexity. Instead of focusing on value to customers, more senior programmers were writing the complex frameworks and architectures, leaving the junior developers to be mere implementers. The goal was not to deliver value quickly but instead to develop a castle in the sky. We even invented a term, "gold plating", for when a developer found a business problem too simple and had to add his own bells and whistles to the system, or, perhaps, instead of solving one problem and solving it well, could create an extensible framework to solve a much larger number of generic business problems.
Joel Spolsky[8] would call this person an "architecture astronaut", in that they get so abstract, they actually "cut off the air supply" of the business. In the back of my mind I could hear the voice of Doctor Austin saying "right there - there - is where your project went off the rails."
Ten years later, we've learned a great deal. We have a growing body of knowledge of how to apply beauty to development - O'Reilly even has a book on the subject. But testing - testing is inherently ugly, right? Aside from developer-facing testing, like TDD, testing is no fun at best and rather - have - a - tooth - pulled - with - no - anesthetic at worst, right?
No, I don't think so. In math we have this idea of prima facie evidence - that an argument can be true on its face and not require proof. For example, there is no proof that you can add one to both sides of an equation - or double both sides - and the equation remains true. We accept this at face value - prima facie - because it's obvious. All of our efforts in math build on top of these basic prima facie (or "axiomatic") arguments[9].
So here's one for you: Boring, brain-dead, gag-me-with-a-spoon testing is /bad/ testing – it’s merely checking. And it is not beautiful. One thing we know about ugly solutions is that they are wrong; they've gone off the rails.
We can do better.
References:
[1] My colleague and friend, Michael Bolton, is the first person I am aware of to make this distinction, and I believe he deserves a fair amount of credit for it.
[2] I am a member of the context-driven school of software testing, a community of people who align around such ideas, including "there are no best practices" - www.context-driven-testing.org.
[3] Strictly speaking, I have a Bachelor's degree in Mathematics with a concentration in Computer Science. A concentration is more than a minor but less than a major, so you could argue that I’m basically a dual major - or argue that I'm not quite either one. The upshot of that was that I never took compiler construction, and, because of that, had an inferiority complex that fueled a massive amount of time and energy into learning. Overall, I'd say it could be worse.
[4] "Conditional" is a fancy word for an IF/THEN/ELSE statement block
[5] I am completely serious about the beauty of proofs. For years, I used to ask people I met with any kind of mathematics background what their favorite math proof was. Enough blank stares later and I stopped asking. As for mine, I’m stuck between two: my favorites are the proof of the limit of the sum of 1/2^N for all positive integers, or Newton's proof of integration, take your pick. (Rob Sabourin is one notable exception. I asked him his favorite, and he said he was stuck between two …)
[6] No pun on Ruby intended. I am a perl hacker.
[7] That, and Dr. Kathleen Shannon, Dr. Mohammad Mouzzam, Professor Dean Defino, and Professor Maureen Malone
[8] My co-worker and occasional writing partner, Chris McMahon has a good bit to say about testing as a performing art. You should check out … oh, wait, he left Socialtext and has his own chapter. All right, then.
[9] http://www.joelonsoftware.com/articles/fog0000000018.html
Friday, July 10, 2009
The Trick (A Rant)
If you are working within one company, doing internal development, getting user adoption is usually pretty easy. The Vice President of operations says something like:
"We wrote some software you need to process claims. Use it."
And people use it. They may not like it, but they use it.
Likewise, if you are making an application that will be /paid for/ by an executive, adoption is similarly easy. You sell the executive, he pays for it, a memo goes out that says "henceforth, all email will be done by Lotus Notes."
In both cases, you've got a monopoly.
But sometimes, you don't have a monopoly. Say you are selling software to individuals, or perhaps giving away a product or service for free in the hopes that it will be used so widely that the customer organization will want to purchase support - even if they already have some competing product.
In that case, in the words of jwz (mild obscenity warning after the link) - you've got to make software people actually want.
It turns out, that's the trick. Make software people will actually want to use.
You say, "but Matt, that's so obvious!" - if it's so obvious, why don't more people do it?
Twitter and Facebook don't have workflow policies. They have open ended ways of helping people get stuff done.
Just something to think about.
Update: I could add that Facebook might not even help you get stuff done! Yet it stuck anyway. Other updates: Beautiful Testing II coming next week. As for the scholarship, talk to the people at SoftwareTestingClub.com; they've already got the money. :-)
"We wrote some software you need to process claims. Use it."
And people use it. They may not like it, but they use it.
Likewise, if you are making an application that will be /paid for/ by an executive, adoption is similarly easy. You sell the executive, he pays for it, a memo goes out that says "henceforth, all email will be done by Lotus Notes."
In both cases, you've got a monopoly.
But sometimes, you don't have a monopoly. Say you are selling software to individuals, or perhaps giving away a product or service for free in the hopes that it will be used so wildly that customer organization will want to purchase support - even if they already have have some competing product.
In that case, the words of jwz, (mild obscentiy warning after the link) - you've got make software people actually want.
It turns out, that's the trick. Make software people will actually want to use.
You say, "but Matt, that's so obvious!" - if it's so obvious, why don't more people do it?
Twitter and Facebook don't have workflow policies. They have open ended ways of helping people get stuff done.
Just something to think about.
Update: I could add that facebook might not even help you get stuff done! Yet it stuck anyway. Other updates: Beautiful Testing II coming next week. As for the scholarship, talk to the people at SoftwareTestingClub.com; they've allready got the money. :-)
Thursday, July 09, 2009
July STPMag is out -
The people at Software Test & Performance Magazine spent a considerable amount of time and effort re-designing the magazine - and it shows. The July issue is solid, and yes, our column still appears on page 10.
More than that, check out the new ST&P Website, with more to come in the months to come.
Seriously, please, check out the column and let us know what you'd like to see in future months.
Thursday, July 02, 2009
Beautiful Testing - Part I
My chapter for the book Beautiful Testing: Leading Professionals Reveal How They Improve Software is nearly complete. In fact, you can pre-order the book from Amazon right now.
But before you buy it, wouldn't you like to know what I'm going to say?
For that matter, it's just pretty expensive to be a tester right now. Better Software Magazine just stopped complimentary print delivery, Software Testing Club is going to charge a membership fee, and now Matt wants us to buy a book. I can hear the chorus of "thanks buddy" in my head, believe me. :-)
I can understand if you're skeptical. Here's what I am doing to help:
(1) All my royalties for the Beautiful Testing book will be donated to a charity - Nothing But Nets, which purchases mosquito nets for Africans. In fact, so will every other author's royalties for the book.
(2) The Good Lord has been good to me. I'm going to purchase TWO memberships in Software Testing Club, and work with them to develop a competition to give the second one away.
(3) I've been working with O'Reilly, the publishers of Beautiful Testing. I can give away some of my chapter (for free) right now as a teaser, and more after publication.
Watch this space for my next post!