Warning: This is one of my, how shall I say it - more "opinionated" posts. It is geared toward a specific audience; I posted it as a reply on the Agile-Testing Discussion list.
Concepts like "right" and "wrong" requirements "before design can begin" assume that:
(1) The customer can speak with one voice,
(2) The customer can know what he wants without first seeing an example,
(3) The customer never changes his mind,
(4) The market never changes: The "right" requirements last week, last month, or next year are all the same.
(5) Design and Requirements are two separate and distinct processes that do not feed each other. That is to say, it is not possible, or at least not desirable, to innovate on the specification with interesting design ideas. (For example: "The spec says do A,B,C,D which might take a year. But just A,B,C we could do in two months. Can we do just A,B,C, deliver it, and see if we need D at all?")
Now let me ask: for any given project you are working on, are statements one through five actually true?
Wednesday, December 31, 2008
Monday, December 29, 2008
My Agile 2009 Proposals
Bit by bit, the Agile movement grew until it was massive. So now you have people - thousands of them - eager to learn how to develop software the 'agile' way.
And testing is part of that.
I can either choose to be a part of that discussion, or, well ... not. So I am wading in, proposing several sessions at the Agile 2009 conference in Chicago. That process is interesting because it is iterative - you propose sessions, get feedback, and have an opportunity to revise your proposals before they are accepted or rejected.
So I'd like to tell you a bit about my proposals. The abstracts below include a link to more material. To follow a link, you'll need to create a free account at the Agile2009.org website. Once you have an account and have logged in, you can follow the links to read detailed outlines for each talk, and comment on or review talks if you desire.
I would even go so far as to say that if you want a quick primer on the state of agile, you might just want to peruse the proposals.
Next Generation Test Workshop
Level: Expert
Stage: The Agile Frontier
Pre-defined acceptance tests, TDD, Mocks, BDD, ATDD … which is right? Matt Heusser argues that testing should be a strategy used to meet the needs of your business at this point in time, and that that strategy needs to evolve over time. This workshop will combine experiences to discuss which testing approaches are actually working in the field, and why, with the hope of creating advice for teams adapting to agile testing … or trying to take things to the next level.
Adding Exploratory Testing to your agile process
Level: Intermediate
Stage: Testing
What is exploratory testing, and how do I fit it into my Agile Process? Matt Heusser introduces exploratory testing as a discipline that can complement and extend other forms of testing. He’ll discuss what exploratory testing is, the problems it solves, the kinds of bugs it is good at finding, and how it might fit into a portfolio of test strategies. Students will leave with a variety of exercises and concepts they can use to explain exploratory testing to others, and to sharpen each other's skills.
How do I do this 'Agile Testing' Thing?
Level: Introductory
Stage: New to Agile
Agile methods view testing as something that can happen continuously, throughout a project, often before coding even begins. In addition, the Agile concept of testing is a much larger tent, as the benefits spill out to include better requirements, better communication on the team, and design benefits for the technologists. This means that more groups want to get different things from software testing. Matt Heusser presents one way to do it in practice, based on his experiences at Socialtext, Priority Health, Open Source, and community involvement.
Technical Debt: Beyond Cliches
Level: Practicing
Stage: Coaching
Like any addiction cycle, Technical Debt is hard to break because it provides absolute and certain benefits today in exchange for deferred and uncertain pain later. So how can we prevent, or stop, the merry-go-round of technical debt?
To answer this question, Matt Heusser organized the Workshop on Technical Debt, which was funded by the Agile Alliance and ran in the summer of 2008. This presentation will cover some lessons learned from the workshop, combined with Matt’s own personal perspectives and experiences on the issue.
So there you have it. I am vaguely considering a proposal on massively distributed agile teams, perhaps for the 'frontier' stage.
I am interested in your feedback, both at agile2009.org and here. I believe the information I have to present could complement the current agile literature well - and that we could possibly move things forward a bit with the workshop - and, yes, that I could learn a thing or two.
Hope, after all, springs eternal.
More to come.
Monday, December 22, 2008
New Testing Challenge - V
(The next installment in the New Test Challenge Series)
My next steps on the testing challenge depend on the reaction I get to the first attempt.
Let's say I get failures across the board. Massive failures, and the manager is overwhelmed. He meets with me and feels genuinely chagrined that this could happen.
Well, I'd start working on a recommendation for a training plan for management, which would cascade down to the cashiers. I'd also suggest a program of check-behinds or spot inspections, so we could find defects earlier and create a positive feedback loop. I'd recommend a re-inspection after the team had time to actually correct its mistakes.
If I had lots of defects and an obstinate, blocking store manager, well, I wouldn't try to inflict help. I would prepare a report to management, which I would deliver personally with a story, and /try/ to give them feedback that was as positive as possible. I would also try to work with the manager instead of against him.
What if there are only a few mistakes - or none? Then we could use the time allotted to do some of the complex tests that people like Jay Phillips have recommended - buying non-alcoholic beer while under age, checking for items that have no tag, maybe putting a clearly wrong tag on an item to see if I'm caught, buying four items at the "five for a dollar" price, and so on.
That is to say, once I'm sure that the basic business rules are in place, I would check the implicit ones that are rarely defined.
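For that "five for a dollar" check in particular, the interesting question is which pricing rule the register (or the cashier) applies when you buy fewer than the full multiple. The sketch below is my own illustration, not part of the original challenge; the prices, the proration rule, and the function names are all hypothetical. A prorating store would charge $0.80 for four items; a stricter store might charge the regular shelf price for anything short of the full five.

```python
# A minimal sketch of the "5 for a dollar" check, assuming the store
# prorates multi-unit sale prices (sale price / group size per unit,
# rounded to the cent). Some stores instead charge the regular shelf
# price unless you buy the full multiple -- which is exactly what the
# test is meant to reveal. All names and prices here are hypothetical.

def prorated_price(units_bought, group_size, group_price_cents,
                   regular_price_cents, prorate=True):
    """Expected charge in cents for `units_bought` of a multi-unit sale item."""
    if prorate:
        # Each unit rings at the per-unit sale price, rounded to the nearest cent.
        per_unit = round(group_price_cents / group_size)
        return units_bought * per_unit
    # Stricter rule: full groups at the sale price, leftovers at regular price.
    groups, leftover = divmod(units_bought, group_size)
    return groups * group_price_cents + leftover * regular_price_cents

if __name__ == "__main__":
    # Buying four of a "5 for $1.00" item (regular price 25 cents each):
    print(prorated_price(4, 5, 100, 25, prorate=True))   # 80  -> $0.80
    print(prorated_price(4, 5, 100, 25, prorate=False))  # 100 -> $1.00
    # Comparing the receipt against both rules tells us which policy
    # the cashier (or the store) is actually applying.
```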
But I'm never really sure those rules are in place - my first scan was only a few random times. If I had the money, we could do more sweeps, for example, off-hours (when the part-timers are on shift), or when the lead cashiers go on break and the baggers run the register.
Besides testing, I would be interested in the training and reinforcement mechanisms that were in place, and I would also be interested in headquarters' margin for error - to help determine if I believe the /process/ was capable.
Remember, this is the 1980's - back when it was the cashier that needed to know the rules, not the scanning software. It may be, for example, that the process is capable, but Weiss's simply does not pay enough for the kind of quality talent it needs to remember and enforce those rules. Oh, we can create cheat sheets and reinforcements, but a pay scale that encourages retention might go pretty far.
I think that about wraps up the challenge for me - but if you have questions, leave comments, we can keep iterating on this forever.
What's Next?
Besides the occasional request for comments, I try not to ask for too much too often from my blog readers. If you enjoy Creative Chaos and think it has value for a larger audience, please consider registering as a reviewer for the Agile2009 Conference and reviewing my proposals. (To review, you'll have to create a free account and search for Heusser.)
If you're looking for some cutting edge ideas on the cutting room floor, reviewing Agile 2009 proposals is one way to see them. You see, the Agile 2009 submission process is entirely open. In theory, submitters get feedback they can use to improve their submissions throughout the month of January - then, based on voting, the submissions are accepted or not.
Creative Chaos readers are a part of the "in theory." At least, I hope so.
Regards,
--heusser
Thursday, December 18, 2008
What has Matt been up to?
Well, a lot of things, but blogging hasn't been among them. Here's what's up:
The December issue of Software Test & Performance Magazine is out. You can download the issue here; our column, which is on test automation, begins on page 7.
I've joined the software-craftsmen Google group. I believe the Craftsperson movement has two great potentials: One, to improve the training of new developers, and two, to create social systems that could lead to better software and less technical debt over time.
I'm starting to work on proposals for Agile 2009 and have them in rough draft form. If you'd like to review them before I submit, please drop me an email: Matt.Heusser@gmail.com. I'd love to hear from you.
This week, O'Reilly approved the Beautiful Testing book project, which Adam Goucher asked me to contribute a chapter to. No, I will get no royalties; all royalties will be donated to purchase mosquito nets for rural Africa. And, no, I don't have a contract yet. Keep your fingers crossed.
Finally, and most importantly, I am working my tail off for Socialtext, who just released a new, free project to create social networks for people recently laid off.
So there you have it. A free magazine as a Christmas present, a request for peer review, a potential book project for a good cause, and lots and lots of software testing.
Want to get involved in any of it? Drop me a line. We can always use reviewers for the March Issue. And April, and May ...
Thursday, December 11, 2008
SideBar - II
The test challenge is my attempt to explain that skill, practice, and critical thinking - the things no defined process can create - matter in software development, in the same way they matter in Golf, Music, or Art.
That good testing, and good thinking, can be learned. I've only been doing this kind of "personal process improvement" for the past ten or eleven years - and talking about it publicly for even less.
When I want to talk about the joy I get from doing a project well, words fail me. Oh, I can try. This blog is filled with the attempt. And when people come along and say it better, I feel obligated to point it out.
This morning, my colleague Chris McMahon put out a twitter link to a video about skills improvement - and attitude - taken from Music, which I thought was absolutely wonderful. Here's the link.
When we talk about focusing, practicing, reflecting, and experimenting in something like Golf or Music, the ideas have common acceptance. Let's do the same with software.
More test challenge to come.
Tuesday, December 09, 2008
New Testing Challenge - IV
So let's see here. We have a simple test project on its face. Management hired us to figure out if the store is following a set of rules.
I hope you agree this is a real, critical thinking test problem. If we substitute "rules" for "requirements", we find this is very similar to a real test problem.
First of all, just because one cashier follows the rule is no /proof/ that the cashier will do so on the next transaction. Nor that the cashier next to her will do so.
For that matter, a typical grocery store has thousands of items. So finding out that ten or fifteen ring up at the correct price doesn't give us proof that the rest will be correct.
The best we can do is to say that the process is capable of success - a term I borrowed from auditors a few years ago.
So what would I do? First of all, I'd walk into the store and look around. Yes, look around. Are the floors clean? Are the items stocked correctly? Are the lowest-paid workers, the baggers, wearing clean clothes with shirts that are tucked in? When the lines are down, do they go bring in carts from the parking lot, or do they sit around?
Then I'd go get my three cousins, Joe, who's 20 and looks 23, Billy, who's 21 and looks 25, and Sally, who's 22 and looks 18. Together, we would:
(A) Have Joe ring up a modest set of snack food and "forget" his ID; see if he can buy beer. Have him use a different cashier, give his ID, and see if the cashier notices he just turned 20 last week and is 20, not 21. Try to argue with her about the math if she says 'no.' (Also, check the snack food for pricing.)
(B) Have Sally try the same trick with cigarettes.
(C) Buy a set of regular groceries, including items on sale, one from each major department. Carry a calculator with me, write down what I bought and the correct amount. Also buy two each of the items that should not be taxed (staples). Go to the checkout counter, and see if the correct amount is subtotaled, correct*0.05 is charged for tax, and sub+tax = total. (See the sketch after this list.)
(D) Have the cousins try (C) on different shifts with different cashiers.
(E) The cousins and I would each try to buy beer at 6:10 AM, noon, 9:50 PM, 10:10 PM, and several times on Sunday. We'd all try to buy non-food items on Sunday.
(F) Clarify what tax should be applied on things that cost $0.19 (I think nothing) and $0.89 (I think four cents). Buy 100 items at $0.19 each, expect to pay $0.85.
(G) I'd try to buy cigarettes and beer; back then I appeared to be 45 years old. (Yes, I got younger. Hey, it's my story.)
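To make item (C) concrete - and the tax question in item (F) - here is a minimal sketch of the expected-receipt math. It is only an illustration under assumptions I am adding myself: a flat 5% Maryland sales tax on taxable items, staples exempt, and the lowest taxable bracket modeled as "no tax under 20 cents." The actual 1982 bracket schedule would need to be confirmed before relying on any of these numbers; all names and prices are hypothetical.

```python
# A minimal sketch of the receipt check from items (C) and (F).
# Assumptions (not from the original post): flat 5% tax on taxable items,
# staples exempt, and amounts under 20 cents falling below the lowest
# taxable bracket. The real 1982 Maryland bracket schedule may differ.

TAX_RATE = 0.05
BRACKET_FLOOR_CENTS = 20  # hypothetical lowest taxable amount

def expected_tax_cents(taxable_cents):
    """Tax we expect the register (or cashier) to charge, in cents."""
    if taxable_cents < BRACKET_FLOOR_CENTS:
        return 0
    return round(taxable_cents * TAX_RATE)

def expected_receipt(items):
    """items: list of (price_cents, is_taxable). Returns (subtotal, tax, total)."""
    subtotal = sum(price for price, _ in items)
    taxable = sum(price for price, is_taxable in items if is_taxable)
    tax = expected_tax_cents(taxable)
    return subtotal, tax, subtotal + tax

if __name__ == "__main__":
    cart = [
        (89, True),    # snack food, taxable -> four cents tax on its own
        (19, True),    # small taxable item  -> no tax on its own
        (115, False),  # bread, a staple     -> exempt
    ]
    print(expected_receipt(cart))  # (223, 5, 228) under these assumptions
    # Compare against what the register actually charges; any difference
    # is a test result worth writing down, whichever side turns out wrong.
```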
I'd also try to get an appointment with the manager and see if we could have a friendly chat about the inspection, instead of doing it in a clandestine, us-vs-them way. I'd ask what systems he's put in place to educate and train the workers.
At this point, I'd have three types of feedback:
#1: I'd have looked and paid attention to the store (touring). There may be problems that senior management needs to be aware of outside the initial test. Also, a sloppy environment would tell me to spend more time looking carefully at the business rules themselves, because /some/ people at the store aren't doing their jobs.
#2: I'd have performed a first, sloppy pass at the initial requirements handed to me. If the store passed all those, I'd breathe a sigh of relief and start to wonder about the cashiers we didn't test. If some failed, I'd start looking at what failed to consider what to test on my next run.
#3: My walkthrough or inspection with the manager would give me some idea of what the risks and weaknesses of the store might be - and its strengths.
I would take the results of my initial round of testing and plan the next round. We can talk about that next time.
In the meantime
That was my sloppy first pass, and it's an honest first pass. Oh, perhaps I've given it more thought because I had to in order to /write/ the test, but it's not any kind of perfect, down-from-the-heavens answer.
Now's the part where you come in: You get to tell me what a terrible test it is, and what I've missed. Comments, anyone?
More to come.
Process over People - I
More testing challenge /today/. This sidebar will not continue to interrupt our regular broadcast. That said ...
BY THE WAY: If you've been reading Creative Chaos for long, you know that I struggle with the over-value-ification of prescriptive process. I've published articles on it and blogged on it extensively. Then, every now and again, someone will just completely nail it. My friend David Christiansen just posted "The FSOP Cycle". It's brilliant. Go read it. Now.
About the same time, Paul Graham published "Artists Ship", which covers some of the same ground in an amazing way. These are send-to-your-boss level articles, and impressive ones, at that.
This morning I was twittering with Ben Simo and he pointed out, again, that over-respect for process dis-respects people. I do think I have the quick sound bite of why that is:
"The process" says the next step is X. You have a better idea. What does "the process" tell you to do? Do X anyway.
In other words, the process doesn't trust you.
Now, a process expert will tell you that the process is insurance. You pay a cost for a check now and decrease the odds that you'll get burned later. And things like testing and inspections can indeed provide some value. Mandated steps, however, insist that whatever someone who wasn't on this project thought would be a good idea last year, and wrote down, is the best choice right now.
If process is insurance, check the cost of your premiums. It might be better to downgrade from comprehensive coverage to just liability.
Monday, December 08, 2008
Barriers to Agile Adoption
I just posted this to the agile-testing list; I thought it was worth sharing. More test challenge to come!
//Begin Post
I have seen many organizations try to embrace agile, compromise, and fail. (See "Big Agile Up Front") That's not a failure in Agile; it's something like what /I suspect/ Ron would call "We tried baseball, and it didn't work."
As I see it, several problems mash together here:
(1) It turns out the problem of customizing a methodology introduces a lot of unintended consequences. And no, doing it "by the book" won't eliminate the problem; instead you'll get someone /else's/ intended consequences. Will those match your values? Who knows?
So tailoring knowledge work requires a great deal of abstract thinking, modeling skills, experience and team buy-in - which is what I try to focus on in my work. Yet few authors/speakers address the space of customization and dealing with change in a meaningful way. Oh, you've got Diana Larsen and Dale Emery and a few others, but I think there is opportunity here for us (as the agile community) to do more.
That sounds like a great proposal for a talk or six at Agile 2009.
I'm just saying ...
(2) Many organizations live in areas that are not next to a world-class CS school, peg salaries at 50% of market average /for the area/, have work that isn't all that interesting, and won't pay for relocation. I'm not sure I have a fix for this; they've designed their own kind of special cocktail. A superb kind of technical leader can /help/ pull an organization like this out of the mire, as can superb individual contributors.
My fix for this, invariably, is to offer telecommuting; then you can hire the world for a fraction of the cost.
(3) North America is locked in the same command-and-control mindset that won us WWII and made Ford a billionaire. It's kind of hard to fault that. The only problem is that way of thinking is about fifty years out of date and can no longer compete globally.
(4) Most American employees work inside a system that rewards certain types of behaviors. When people behave in the way they are rewarded (or have been in the past for years), can we really blame them?
These are just a few of many barriers to agile adoption, just as GM and Ford and Chrysler have struggled to adopt continuous improvement and anything like the Toyota Production System on the assembly line.
Or, to quote Lee Copeland: "If you measure the wrong thing, and you reward the wrong thing, don't be surprised if you get the wrong thing."
How we deal with that is up to us.
UPDATE 1: Hey, I'll be in Detroit, Jan 14th, speaking at the Great Lakes Software Process Improvement Network on just this subject! You can't beat it with a stick!
UPDATE 2: Personally, I find it easy to trade off my personal process, and I find it pleasurable. Above, when I say "freakishly hard", I mean the cognitive process of taking something like Scrum or XP, adapting it to an organization's context, making compromises to suit the VP of Sales, the VP of operations, and the director of engineering - then trying to roll it out to a large organization. The unintended consequences of those tradeoffs tend to get you. My ideas on alternatives are what I hope to contribute to the panel discussion at GL-SPIN in January.
Friday, December 05, 2008
Sidebar -
Before finishing off the test challenge -
The December issue of Software Test & Performance Magazine is out. The theme is test automation, and our monthly column is on fundamental issues in test automation. It's on page seven; you can download a free PDF here or subscribe here to the print edition.
If you want to see more announcements of this type on this blog (word on the street is that James Christie has an article in this quarter's Testing Experience magazine), let me know through comments. If you don't, let me know that too.
Next time: Comments on your comments, and how I would approach the testing challenge.
UPDATE: I read James Christie's article. It's good - especially so if you work in a large, bureaucratic organization trying to follow the (*cough*) v-model. (at least, trying to follow it on paper) :-)
Wednesday, December 03, 2008
New Testing Challenge - III
I addressed some clarifying questions yesterday, but I thought of another one: What kind of answer are you looking for?
Most of the answers we've had so far have been comprehensive - "here are all the test cases I would run." Those tend to take a very long time to do, and several people have replied to me with something like "oh, I don't have time."
Well, that's fine. Two other kinds of answers could be short "here's a summary of the areas I'd look into" or immediate "here's what I would do /first/, and I would use feedback to guide my direction."
If you give a short answer, I'm not going to say "aha! You missed this!" That would be counter to my intentions and not to your benefit.
I'm looking for a wide variety of answers. Summary, Immediate, or Comprehensive are all fine. One more chance ...
UPDATE: Alan Page's "How We Test Software At Microsoft" is supposed to be available for pre-order from Amazon today. If you want something to buy your boss for Christmas, you could get either that or Weinberg's "Perfect Software And Other Illusions of Software Testing", which is in stock right now.
Tuesday, December 02, 2008
New Testing Challenge - II
Have you ever played Pac-Man? In Pac-Man, there is a simple, ostensible game: eat the dots before any of the ghosts get you. However, there is a hidden game - a game within the game - which is to figure out what makes the ghosts change direction. If you can do that, you can 'trick' the ghosts into going the wrong way, and, suddenly, the regular game gets much easier.
Software Development has its own meta-games: good project and product management can help us define success in more concrete terms, which makes it a lot easier to hit between the goal posts.
Does testing have a meta-game? I think it does, and I think it can be explained, described, and trained on.
Last week I posted a New Testing Challenge. On first blush, it's a simple problem: make sure that the following list of business rules is enforced at a local grocery store.
It tends to be super-tempting for the reader to dive into problem-solving the surface problem. Why, first off, you test each of the business rules individually. Then you look for interesting boundary conditions, then you look for combinations of rules ... and so on.
And there's nothing really wrong with that - it's the eat-the-dots problem. Sometimes, you'll be presented with a "just test it!" sort of statement in the field, and knowing how to generalize to explicit rules and test those rules (and combinations) is a genuine skill. I would argue that when pushed to "just test it!", a good exploratory tester can usually come up with a dozen quick, easy-to-run test case ideas, to make it look like they are testing, to buy time to figure out the meta-game.
Yes, the Meta-Game. The Meta-Game is figuring out who the customer is, what matters to him, and how much time, effort, and energy he is willing to invest in testing.
Knowing the customer helps you figure out what to test.
It helps you triage test ideas.
It helps you come up with test ideas.
It helps you know when to stop.
So, when presented with something like my testing challenge, a meta-game player will try to engage someone - preferably the customer or product owner - in a conversation, along these lines:
"Who is the customer?"
"What problems are they worried about?"
"What is your expectation of 'good enough'?"
"How much time/money do you want me to invest?"
"Tell me about the project?"
Notice I say a conversation - because the customer may have uninformed expectations about estimates and quality expectations. It'll be a good bit of give and take - probably more than one round of it. I was very pleased to see Michael Bolton as the first responder, but even more pleased that he tried to actively engage me in the meta-game. I'll try to generalize the answers for everyone's benefit below.
The Answers
In this story, you've been brought in as a consultant by Weiss's Management. They have recently had a number of complaints about just one store - the one in Frederick - and gave management a month to clean up the ship before calling you. The complaints are around the laws. One person noticed the store selling beer on a Sunday and cigarettes to minors, another complained because he could not buy beer at 9:30 (something about daylight savings time and a clock), a third complained because he was charged tax on bread. A fourth complained that the store would not honor a sale price listed in the weekly circular.
Management wants to get these complaints resolved before they escalate to the newspaper or, worse, the state attorney general's office. You have about a month to provide a written summary of known, systematic training issues along with recommendations for corrective action.
They expect you to work on this part-time, about 10 hours a week; your total budget is $1,000 US Dollars. (This is about $25/hour, which, in 1982 dollars, is a reasonable consulting rate for a grocery store.) You live about two miles up the road from Weiss's and can also expense mileage.
What to do tomorrow
OK, I've given what I believe is a good description of the problem. We could have some more give-and-take conversation, but I haven't intentionally left out any spoilers. You've let the avant-garde Michael Bolton types take a swing at this. Go back to Maryland in 1982. You just got out of your car and are headed into Weiss's Market. (Or, you can be at home planning; it's up to you.) What would you do?
Finally
If you look at the initial blog post as "requirements" to be "tested", I would say my initial description of the problem was probably at or above industry average. But it did not tell you any of the why that would guide your testing. Many such documents lack this context - and notice that you cannot have a conversation with a document.
The way many organizations do development, it's hard, if not impossible, to play the testing meta-game I've described above. That is just sad. The ghosts'll getcha every time. Or, to put it more plainly: without the meta-game, we are much more likely to find a bunch of bugs that don't matter and get called into the big meeting where the boss says "Why didn't QA find (the one that did matter)?"