I've re-titled the last Estimates III as "Models - I", as it really deserves its own series.
Pressing on ...
(AKA - Stupid Project Management Tricks; a True Story)
The code is a big stinking mess, all over the floor. The team is dependent on a vendor who hasn't delivered a good build to us yet. We have no idea when anything is going to be ready. My best bet is three months out, but nobody really knows. Senior Management is antsy. What will the project manager say?
Ok, so I wasn't there, but I know what happens: He walks into the big office and says "THE PROJECT WILL BE DONE IN FIVE MONTHS."
Where did that number come from? I have no idea. There was probably a Gantt chart somewhere that added up to five months, but nobody really knew, and nobody technical trusted the vendor.
All I do know is that senior management went home that night - and they slept well, because they had a date. A firm commitment. None of this "gee, we don't really know, how long does it take to find your lost keys?" business. A firm date.
It was bogus, but, whatever. They could sleep.
... time passes ...
Four and a half months later, it becomes obvious that the team won't deliver in two weeks. I've been pulled off to work on another, different death march(*), but I take a general, cynical tester's interest in the project.
Again, the word comes down from on high "WE'VE HAD ANOTHER SLIP. THE PROJECT WILL SHIP IN FOUR MONTHS ..."
Of course, four months goes by ...
Eventually, the project was cancelled: Over a year late.
Now, I criticize a lot of things about that project. 99% of the time, I think the project management was irresponsible, incompetent, and (possibly) unethical.
Then again, 99% of the time, senior management was able to sleep at night, because, dang nab it, they had a date. None of this "uncertainty" business.
Which reminds me of an old quote from DeMarco and Lister - all too often, people would rather be certain and wrong than uncertain. After all, it's better to be a team player than to be negative.
So, the basic technique is this: If you have no idea when you'll be done, make up a date so far in the future that you've got a good shot. When it becomes obvious that you cannot hit that date, well ... make up a new one.
Or, in other words: When uncertainty is politically impossible, do what gets you rewards.
Like I said before, 99% of the time, I think this is an incompetent and bogus way to "lead" projects. But that 1% of the time, I think: Man, if you have no idea when it's going to be done, making up a date real far away and slipping if you have to is a decent approach. (It can't be too far out - otherwise you'll get pushback.)
All of that presupposes that you actually can ship at all. The problem in the story is that we couldn't, and anyone with decent systems-thinking skills could see that, would advise management, and would, er, well ...
... "find other projects."
--heusser
(*) - Actually, to be honest, that one was a relief; it was a death march, and I worked massive overtime, and it was painful, AND IT SHIPPED.
Schedule and Events
March 26-29, 2012, Software Test Professionals Conference, New Orleans
July, 14-15, 2012 - Test Coach Camp, San Jose, California
July, 16-18, 2012 - Conference for the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At Liberty; available. Contact me by email: Matt.Heusser@gmail.com
Tuesday, July 31, 2007
Monday, July 30, 2007
Models - I
AKA – Tripping through physical models of the universe(*)
Aristotle proposed a model of the universe with the earth at the center and the heavens revolving around it. This model could not explain why days were longer in the summer, or the seasons, but it could predict the behavior of the moon – and the heavenly bodies seemed to support it.
Copernicus and Galileo changed the model, and insisted that the earth revolves around the sun, as do Mars, Venus, and the other planets.
The only problem is, well, it didn't work. Copernicus thought that the earth traveled in a circle, when it actually travels in an ellipse. Galileo's observations could confirm the theory in some aspects, but in others, the math wasn't quite right. It wasn't until Newton invented the calculus that we got the planets traveling in ellipses and equations good enough to predict behavior.
That is, er … good enough to predict behavior most of the time. Objects that were very small or very fast tended to be "off" from what Newton's equations would suggest. Still, most things here on planet earth fit well into Newton's methods; it wouldn't be until Einstein that we figured out the equations to calculate space-time distortion for objects as they approach the speed of light.
That is more than a brief outline of scientific history – it is the story of the evolution of a model. At each level, the model becomes more precise, more detailed, more formal, more "correct", and a better way to predict (or analyze) the behavior of objects(**). Notice, too, that more "correct" models tend to take more time and effort to learn, understand, and master, and that a model can be "good enough" for the person using it. Third Century British farmers didn't need Newton to raise crops, and high school students in Physics I don't need Einstein to predict how far a cannonball will travel.
Now, I say that lower-level models take more work and more variables. In a hierarchical organization, the higher levels do not have the time or the attention for those details; they are managing many, many more projects in a stack. The test project that is your life's work is only a "phase" to the project manager, and one part of one project to the PMO. To the CIO the whole project is a "tactical objective." To do the detailed model well, you need a lot of variables; the CIO doesn't know them at that level, and the detailed middle-manager probably doesn't have the authority to answer them definitively.
So we end up having these conversations where someone "just" wants a single number to put on a bullet-point on a PowerPoint slide, and we say "it depends, what about ..." and try to get the variables for the model. It can be painful.
Here’s one technique I have used with some success: Ask for the margin of error. When you’re told that it must be very precise (+ or – 5 days, for example), physically roll up your sleeves and say something like "ok, let’s get to work." Then plan a half-day session to hammer the thing out, with the key players involved.
Building an accurate predictive model is, well, relatively impossible(**); ask anyone who plays the stock market. Far too often, we expect precise, Einstein-like answers when we only have the data of a 3rd Century farmer.
Perhaps, then, we should estimate like a 3rd Century farmer: "We expect to harvest at the end of the third quarter; it depends on the weather ..."
Of course, there are other tricks - both for estimates and for models. More later.
--heusser
(*) - We use models and prediction every day. When we say that "Bob is a jerk, he won't help", we are making a stereotype, or model, of Bob. It may predict his behavior, and may even do so accurately, but it could be that Bob isn't a jerk at all. He might be over-worked and going through multiple deaths in his family. As we improve our understanding of the world, we can 'model' it better - these are informal models, and they are far more common than formal ones with mathematical rigor.
(**) – "predict" is a funny word. Netwon’s equations allow me to predict the speed that an object will hit the ground if I know the height it fell from, but I am not suggesting that I have a crystal ball and can predict the future. Any project will have risks, and if they all come true at the same time, well – forget about it. The point is that more analysis allows us to make better educated guesses.
Wednesday, July 25, 2007
Estimates - II
As usual, some of my commenters have written my article for me ...
Seriously, Ben Simo points out that it is always possible to give an estimate, but that all estimates are wrong (if they were right, they would be commitments). Shrini points out that we get lousy requirements for our estimates. For example, when asked:
"When will it be done?"
You probably have a vague and fluffy definition of "IT" and a vague or unrealistic definition of "done."
For example, let me re-translate:
"When will it be done?"
Could be:
"How long will it take you to do everything I've written down here, with scope creep that _I_ deem reasonable, without a single defect or flaw - given that anything over a month is too long?"
Yikes!
Of course, I promised some actual answers. So here goes ...
The "GAP" between what the software actually IS and how it was analyzed is a serious pain point in software development. That gap widens with time - so my first suggestion is to never run a project more than about a month. (Yes, require/dev/test/prod in thirty days.) Schedule any "big" project as a series of small ones.
At worst, you might be two weeks late. That's not the end of the world.
If you have to run longer than a month, then periodically bring your code up to production quality. Develop features end-to-end, in thin slices, and add them to the system.
That's not really an estimating hint - it's a project organization hint. Estimates of less than a month are drastically easier than longer ones. Moreover, they are small enough that an employee can feel a real personal commitment to the date, and work a little overtime to make it. (On a ten-month project, once you realize you're late, overtime will just kill you, not bring the project back on track.) So you organize the project so you never have to estimate more than a month at a time.
Plus, you can prioritize the customer's feature requests, so you build the most important thing first.
So what if that is not an option?
Ok, next idea. Do two types of estimates - realistic and pessimistic. Make the realistic estimate your goal date, and make the pessimistic a commitment back to the business. Within the team, talk about the goal; with management, talk about the commitment.
A third idea - estimates can be done in two different ways. The first is an analysis task where you break down the features into chunks and add up the chunks, or you take the spec and do some math to come up with an approximate number of test cases. The second way is to actually start doing the work - at a very high level. Go ahead and _DO_ the design, or start to decompose the test cases, or write "tracer bullet" objects to do the highest levels of disk, screen, and database interaction. Leave comments or "TODO" markers where the next step will be. When you've finished that, come back to the TODO markers, estimate them, and add them up. Finally, add a little extra for risk - the bigger the risk, the bigger the buffer.
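To make the TODO-marker idea concrete, here is a minimal sketch in Python (my own illustration - the "TODO(est: 2d)" marker convention and the file layout are assumptions, not anything from a real project). After the tracer-bullet pass, each remaining step carries a rough day count, and a tiny script adds them up and pads for risk:

    import re
    import sys
    from pathlib import Path

    # Hypothetical marker convention: "# TODO(est: 2d) wire up the reporting screen"
    TODO_PATTERN = re.compile(r"TODO\(est:\s*(\d+(?:\.\d+)?)d\)")

    def total_estimate(source_dir, risk_buffer=1.2):
        """Sum the day estimates left on TODO markers, padded for risk."""
        days = 0.0
        for path in Path(source_dir).rglob("*.py"):
            for line in path.read_text(errors="ignore").splitlines():
                match = TODO_PATTERN.search(line)
                if match:
                    days += float(match.group(1))
        return days * risk_buffer  # the bigger the risk, the bigger the buffer

    if __name__ == "__main__":
        print(f"Roughly {total_estimate(sys.argv[1]):.1f} days of work remaining")

The point isn't the script; it's that the estimate falls out of work you were going to do anyway, instead of being invented at a status meeting.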
The thing is, the further you go into the project, the more you'll know. I have one colleague who simply avoids making any commitments or estimates at all. He'll say things like "if you say so" or "that's certainly our goal" but never make a promise. I'm not advocating that - but certainly the further you get into the project, the more accurate your dates will be.
(In Rapid Development, Steve McConnell actually suggests that you communicate estimates in a range that gets smaller over time, with the larger number first. "Six to four months" sounds strange - but if you say the smaller number first, people tend to forget the larger one.)
Looking back, most of this is not how you should do the estimates - but how to communicate them so they are not abused or misunderstood. This can be very challenging; as my old friend Eric likes to say, "Sooner or later your three paragraphs of very specific if-thens and assumptions are going to turn into a PowerPoint bullet with a date on it. Tread lightly."
If that's the case, then you have a few options. One is to hit any arbitrary date by structuring the project so that you can go to production every month. Another is to make no commitments and blame someone else. A third is to continually and consistently communicate risk-adjusted dates back to your customers.
My final thought is to express the cost of every change. It's usually not the project that kills us - it's the feature changes at the end. And that's fine - it's good - it makes us provide software that has better fitness for use. When that happens, though, either make the cost explicit (by adding the feature and insisting on moving the date) or pay for it with your nights and weekends.
The choice is yours ....
Tuesday, July 24, 2007
I've been busy ...
Last weekend I went to BarCampGrandRapids2, which turned out to be a wonderful experience. Dave Brondsma and Carlus Henry put together a great little un-conference; jointly organized, show up and present, if people come to your talk, great, if not, don't worry about it.
The conference was sponsored by Gordon Food Service (free food), Calvin College (free space), and Google, which provided a speaker, a recruiter, and a few give-aways.
The most interesting talk I went to was probably Matt Michielsen's "Industrial Automation for Techies" - learning about programmable logic controllers and such. Second to that would be the open spaces, where we talked about everything from SOA to the impact of universal GPS/GIS. No Geocaching, but maybe next time.
The weirdest thing for me was interacting with the attendees. We're all a bunch of computer nerds, so a lot of people were just sort of quiet and passive. I found that I kept falling back on where do you work / what do you do / here's my card / what's the hiring situation kind of discussions. Nothing wrong with that, but it's hard to get deep.
Now BarCampGR2 was on a Saturday, and it drew a totally different crowd than usual. This was no during-the-work-week junket at a posh resort. Attendees were computer enthusiasts; people who do this stuff because we love it - or who love this stuff and wish they could do it for a living. There was a talk on robotics. Instead of asking "what do you do?" a better question for something like BarCamp is probably "What do you care about?"
In the meantime, I am also reviewing a book manuscript for Addison-Wesley on the process of evolutionary software development and learning to play golf (badly) - and I still owe a few blog posts on estimates.
More to come. Also, if you'd like to preview a couple chapters of the Scott Bain book on evolutionary development, let me know. If it's popular enough, I have permission to post (a little bit of it).
Thursday, July 19, 2007
Estimates - Laws
I'm a big fan of "Laws" of software development - rules that express complex ideas in very few words. Moore's Law, for example, holds that computing power doubles (roughly) every eighteen months.
Or Jerry Weinberg's advice to not attribute to malice what can be explained by incompetence.
Or Paul Jorgensen's rule of large numbers in software development: Big Numbers times Big Numbers equals REALLY big numbers.
Now me, I'm working on a theory for software estimates, but here's another interlude, Heusser's Rule, which explains a lot of the ... odd behavior you'll see on software projects:
Most people really like to be able to sleep at night
This explains why people would rather be certain and wrong than uncertain and right. It explains why people hold on to bad ideas that should have been discredited long ago. It explains why people can convince themselves that they will meet a deadline, despite a mountain of evidence to the contrary.
And it explains why they shoot the messenger. It's not that you're wrong; no, you are very much right. But by calling a spade a spade, you've robbed the entire room of the ability to sleep at night.
More soon.
Wednesday, July 18, 2007
Estimates - Interlude
The previous post was on estimates for technologists in general.
For software testers, things are a little more ... challenging. For example -
What quality will the code be in when it is delivered to test?
Most of the time, we don't know. Vast differences in quality could make testing take a longer - or shorter - time.
What is the management expectation for the software when it is released?
Most of the time, these expectations aren't quantified - if they are articulated at all.
Once identified, how quickly will the developers fix the bugs?
Unless you've had the same team for several projects, and the projects are similar, you probably won't know this. If the answer is "not that fast", then it's probably not a testing phase - it's a fixing phase - and it's not bound by the testers.
For every one hundred bugs the developers fix, how many new bugs will be injected?
I've seen teams where this number is fifty, and I've seen teams where this number is much less - but I can't think of a team where this number is zero. Obviously, if this number is bigger, testing will take longer.
I could go on - the point is that we've got a whole lot of variables that are beyond the control of the testers. In fact, it's probably impossible for the testers to even know what those variables are. So how can you provide an estimate?
Offhand, I know of only one career field that has to deal with a similar problem. That is -- your friendly 401(k) investment advisor. I always thought that "advisor" was a bit of a misnomer, because providing advice is the one thing that guy doesn't do. Instead, he carefully explains that he doesn't know where Social Security will be in fifty years, and that he doesn't know the value of your house, how it will appreciate, and whether you will pay it off early or take home equity loans. He doesn't know what inflation will be, or how your investments will perform.
So he gives you a piece of paper with a big, complex formula, and tells you to figure it out for yourself. After all, he will say "It's your money."
Testing is an investment of time by a business owner. It is senior management's money, yet we couldn't get away with this, now could we?
Don't worry. I have a few ideas and solutions for you.
More tomorrow.
Tuesday, July 17, 2007
Estimates - I
If I had to think of one subject that was not taught in school, and only covered in industry certifications in the most naïve way – it would be estimation.
It seems so simple. You figure out what needs to be done, figure out how to do it, break the how down into tasks, and add them up. This is pure functional decomposition, right? Easy as pie.
Sadly, in the real world, not so much. Do any of these sound familiar?
Forced Due Dates. In this case, nobody really does estimates. Or perhaps the boss does something quick, sloppy, and “aggressive.” In any event, the boss needs the project done in two weeks. Period.
'Someone else' does the estimates. Estimation is a skill, and it's a skill that few junior staffers have. So having a senior staffer (or manager) do the estimates for a junior person might be helpful. The problem is that it separates the estimate from the person responsible. In other words, working to someone else's estimate is like spending someone else's gift money – it is just detached enough from reality that you won't be careful enough.
"Four weeks? Really? I could do it in one …" This is a manipulation technique used at the big status meeting, and I hate it. In some cases, the bully really can do it in one week, and it’s a great win for the company to switch. More often, all it takes is an offer to switch to shut the bully down. Junior people tend to buckle, promise the impossible, 'fail', and then look bad.
Bring me a rock estimates. "Hmm … three months? Can you shave a little time off that? … two months? We were looking for something a little more aggressive … One Month? OK, that sounds good. Remember, that is YOUR NUMBER, and we’ll hold you to that estimate!"
Slippery-Slope Requirements. Perhaps, by some miracle, you are allowed to estimate the project with your own date. Halfway through, the customer realizes that the requirements won't meet his needs. This leaves you with two options – (A) Add the features and slip the date, making the project "late" or (B) Deny the features and deliver a product that doesn't fit its purpose.
Why Estimates Are Important
In many cases, the "success" or "failure" of the project depends on whether or not we hit the date. In most of the scenarios above, that's not a judgment of the technical contributors – it's an evaluation of the person who made the original SWAG compared to reality.
In my next post in the series, I will discuss what we can do about it.
Monday, July 16, 2007
Something old ...
I just got back from the wedding of a colleague.
In that spirit, I am reminded of the talk I gave last year at the Indiana Quality Conference - something James Bach recently referred to as a "Kick-Ass Podcast." (No really, his words, not mine.)
It's at the very bottom of the stack for Creative Chaos, so I thought I would let it bubble up -
The title is "So You're Doomed" - Here's a link to the PowerPoint (5MB)
And the Audio (45MB).
The audio is forty-five-ish minutes. I look forward to your feedback!
Saturday, July 14, 2007
Test Automation - IV
Readers of Creative Chaos have left some amazing comments on the previous post; if you haven't read them, please take a gander.
First off, I agree with Shrini that "regression" has too many definitions, and we get confused by its use. I think that most of the time, today, when people say "regression tests", they mean what Shrini calls type II regression - "Make sure stuff that worked yesterday still works."
Yet, sadly, even with automated regression tests hooked up to a CI server, you still don't get that! All you get is "The tests that passed yesterday still passed today." Or, as I prefer to say it "Automated tests give you *some confidence* that no *big* defects were injected in the code since the last build."
For the most part, I've been writing about scripted test automation. For example, if we had a GetNextDate() function, we could pick two dozen different dates and run them again and again. Of course, if something breaks on a date that is not one of those twenty-four, the automated tests won't catch it.
That's where model-driven tests can help. For example, instead of twenty-four pre-recorded tests, the software could pick a random date between 100BC and 2500AD, then call GetNextDate(), then pop up Microsoft Excel and ask for the date plus one - then compare the results. This can work as long as you have something like Excel to trust. (Cem Kaner calls this "High Volume Test Automation.")
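Here is a rough sketch of that idea in Python - my own illustration, with the standard library's date arithmetic standing in for the Excel oracle and the range narrowed to dates it can represent. The get_next_date import is the hypothetical function under test, not code from the post:

    import random
    from datetime import date, timedelta

    from date_utils import get_next_date  # hypothetical module and function under test

    def test_get_next_date_high_volume(trials=10_000):
        """Pick random dates, ask the code under test for 'tomorrow',
        and compare against a trusted oracle (datetime arithmetic here)."""
        start, end = date(1900, 1, 1), date(2500, 12, 31)
        span = (end - start).days
        for _ in range(trials):
            d = start + timedelta(days=random.randrange(span))
            expected = d + timedelta(days=1)
            actual = get_next_date(d.year, d.month, d.day)  # assumed to return (year, month, day)
            assert actual == (expected.year, expected.month, expected.day), \
                "GetNextDate failed for " + d.isoformat()

If the two disagree on any of the ten thousand dates, the assertion tells you exactly which one - the kind of quirky, off-the-script bug the twenty-four canned dates would never find.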
Another way to do it is to have a separate programmer write his or her own GetNextDate(), then pick random numbers and compare them. A few challenges with this -
1) In this case, you're literally coding it twice. It will take roughly twice as long to develop using this approach.
2) If the two developers make the same mistake (which is likely - think about leap years) the two programs will work "just fine."
3) If the requirements are vague or wrong (how often does that happen?) the software could do what the developers expect but not what the customers want.
So here's my conclusions ...
A) Automate unit tests for simple bounds and equivalence classes
B) If you produce a single output, then a simple automated regression test is possible. ("Is yesterday's output the same as today's?") This will enable refactoring, and diffing the two outputs can highlight new functionality (see the sketch after this list).
C) Documented acceptance tests prevent the "gee, I didn't mean that, I meant this" phenomenon and get the customers involved as part of the team. Automating those can help with communication and be a formal specification - but it might not add a lot of value in terms of finding bugs.
D) Model-Driven tests have a lot of promise for finding those quirky odd bugs, especially in the GUI. But ...
E) When it comes to the "This just doesn't look right" kind of bugs, you'll probably want exploratory testing.
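For (B), a golden-master comparison is about as simple as regression automation gets. This is just a sketch of the pattern in Python, with made-up file names - not anyone's production harness:

    import difflib
    from pathlib import Path

    def golden_master_diff(todays_output, golden_path="golden/report.txt"):
        """Compare today's output to a saved known-good copy.
        An empty diff means 'nothing changed' - not 'nothing is broken';
        a non-empty diff needs a human to decide: bug, or new feature?"""
        golden_file = Path(golden_path)
        if not golden_file.exists():
            golden_file.parent.mkdir(parents=True, exist_ok=True)
            golden_file.write_text(todays_output)  # first run records the baseline
            return []
        return list(difflib.unified_diff(
            golden_file.read_text().splitlines(),
            todays_output.splitlines(),
            fromfile="yesterday", tofile="today", lineterm=""))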
But that's just me talkin'. What do you think?
Thursday, July 12, 2007
Test Automation - III
Charlie Audritsh asked:
"I take you to mean what I'd refer to as a regression test. A test of mostly the old functionality that maybe did not change much.
So yeah. I have to admit there's a low likelihood of finding bugs with this. What nags at me though about this idea is that I still feel like regression tests are pretty important. We want them *not* to find bugs after all. We want to know we did not break anything inadvertently, indirectly, by what we did change."
I believe our emphasis on regression testing is mostly historical accident, and here's why:
Big Projects (and LANs) became popular about ten years before version control became popular. So, at the beginning of my career, when dinosaurs still roamed the earth, while I was working on a bug fix, another developer might be working on new functionality. If I saved my fix before she saved her changes, then she would "step" on my changes and they would be lost.
Thus the software would "regress" - fall back to an earlier state where a bug was re-injected into the code. The longer you delayed integration, the more likely that was to happen.
Today we have version control with automatic merge, automated unit tests, and Continuous Integration Servers. So this huge tar pit of regression just isn't as bad as it used to be. In fact, I distinctly remember reading a paper that showed that about five percent of the bugs introduced in the wild today are "true" regression bugs - bugs re-introduced by a mistake.
Of course, there's the second type of regression, which is to make sure that everything that worked yesterday works today. I'm all for using every tool at our disposal to ensure that, but I find that automated customer acceptance tests (FitNesse with fixtures) are very expensive in terms of set-up time, yet don't offer much value. Sure, the customer can walk by at any time and press a button and see a green light. Cool. But in terms of finding and fixing bugs?
If there is never enough time to do all the testing we would like to on projects, then I believe we are obligated to do the testing that has the most value for the effort involved.
And when it comes to bugs, I believe this is critical, investigative work done by a human. Assuming the devs have good unit tests in place, re-running tests from last week (or creating a framework so you can) probably has a lot less value than critical investigation right now, in the moment.
... but don't get me wrong. For example, for performance testing, you need to use a tool, and type some stuff in, and then evaluate the results. I'm saying that writing code on top of that - to have a single button that evaluates the results for you and shows a green or red bar - might not be the best use of your time.
I am very much in favor of model driven testing, but with every release, the model needs to change to test the new functionality.
"Set it and Forget it" customer acceptance testing?
Not So Much ...
"I take you to mean what I'd refer to as a regression test. A test of mostly the old functionality that maybe did not change much.
So yeah. I have to admit there's a low likelihood of finding bugs with this. What nags at me though about this idea is that I still feel like regression tests are pretty important. We want them *not* to find bugs after all. We want to know we did not break anything inadvertently, indirectly, by what we did change."
I believe our emphasis on regression testing is mostly historical accident, and here's why:
Big Projects (and LANs) became popular about ten years before version control became popular. So, at the beginning of my career, when dinosaurs still roamed the earth, while I was working on a bug fix, another developer might be working on new functionality. If I saved my fix before she saves her changes, then she would "step" on my changes and they would be lost.
Thus the software would "regress" - fall back to an earlier state where a bug was re-injected into the code. The longer you delayed integration, the more likely that was to happen.
Today we have version control with automatic merge, automated unit tests, and Continuous Integration Servers. So this huge tar pit of regression just isn't as bad as it used to be. In fact, I distinctly remember reading a paper that showed that about five percent of the bugs introduced in the wild today are "true" regression bugs - bugs re-introduced by a mistake.
Of course, there's the second type of regression, which is to make sure that everything that worked yesterday works today. I'm all for using every tool at our disposal to ensure that, but I find that automated customer acceptance tests (FITnesse with fixtures) are very expensive in terms of set-up time, yet don't offer much value. Sure, the customer can walk by at any time and press a button and see a green light. Cool. But in terms of finding and fixing bugs?
If there is never enough time to do all the testing we would like to on projects, then I believe we are obligated to do the testing that has the most value for the effort involved.
And when it comes to bugs, I believe this is critical, investigative work done by a human. Assuming the devs have good unit tests in place, re-running tests from last week (or creating a framework so you can) probably has a lot less value than critical investigation right now, in the moment.
... but don't get me wrong. For example, for performance testing, you need to use a tool, and type some stuff in, and then evaluate the results. I'm saying that writing code on top of that, to have a single button, then evaluate the results for you and green or redbar - might not be the best use of your time.
I am very much in favor of model driven testing, but with every release, the model needs to change to test the new functionality.
"Set it and Forget it" customer acceptance testing?
Not So Much ...
Tuesday, July 10, 2007
Test Automation - II
I got some great comments yesterday - Charlie and Scott made some solid points, and they are points that I will address. However, before I get there, I would like to fill in a bit more of the back story from the Software-Testing List.
Here's my reply, the next day, after a small challenge by a guy named Mike Tierney:
Mike Tierney wrote:
"I would tend to agree with everything you have written or quoted Matt. But what about automated tests that are benefits for the tester ? I am not working in a TDD or extreme programming shop ... "
Two Replies:
1) I have seen test automation work well - heck, I write perl scripts that do model driven testing on a daily basis.
When I have seen system test automation work well, the tester started out with the context of the problem and built automation on top of that. Bret Pettichord's "Homebrew Test Automation", for example, struck quite a chord(*) with me. (You'll notice, for example, that Ward Cunningham's story follows that pattern.)
When it comes to context-free advice for a specific tool "Just use MacOSRunner and you'll be fine", I'm much more leery.
2) I like the Michael Schwern terminology for testing - I break tests into "those the customers care about" (FIT, FitNesse, Excel spreadsheets, etc.) and "those the developers care about."
Many shops (including mine) have very technical testers, who do system testing that is more complex than what the customers care to track. They'll care enough to understand the work, maybe, but not to define or do it. I suspect that the work Mike Tierney does, if I looked at it scrupulously, would look like "Developer Tests", even though they are behavior (black-box) oriented.
But that's just me talking.
--heusser
(*) - Must ... Avoid ... Silly ... Puns ...
Test Automation - I
(Taken from a recent post to the software-testing email list)
It always amazes me when strong people come out and say publicly something that I have been mulling on for a few months.
James Bach's comments on sapient processes do that for me. Let me give you the back story ...
I do a lot of developer-facing test automation. That's things like asserting the return values of an F-to-C temperature conversion function.
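A minimal sketch of what I mean, in Python - the conversion function here is a stand-in of my own, not code from any real project:

    import unittest

    def fahrenheit_to_celsius(f):
        """Stand-in for the function under test."""
        return (f - 32) * 5.0 / 9.0

    class TestFahrenheitToCelsius(unittest.TestCase):
        def test_freezing_point(self):
            self.assertAlmostEqual(fahrenheit_to_celsius(32), 0.0)

        def test_boiling_point(self):
            self.assertAlmostEqual(fahrenheit_to_celsius(212), 100.0)

        def test_minus_forty_is_the_same_on_both_scales(self):
            self.assertAlmostEqual(fahrenheit_to_celsius(-40), -40.0)

    if __name__ == "__main__":
        unittest.main()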
This has a lot of value to developers. It provides a functional spec to the maintenance programmer. It allows us (I am a tester/dev) to change the internals of the code and have some confidence that we didn't break anything. It allows the developer to experience the API and change it to be easier to use.
Just about all of those are benefits to the developer - not the end customer. For the most part, they are design, maintenance, and software engineering benefits. Increasingly, I am wary of test automation that promises to "prove that the software works." I find that on every release, the things that I need to check are different - they are the things that have changed. Thus, the old test suite has less and less value as we add new functionality - and providing "hooks" and automation can be very expensive.
I think that Brian Marick's Recent Comments echo this thread. Customer-Facing test automation isn't all that great at, well, finding bugs, or proving that the software works.
Next month, I'll be out in New York, trying to make this (and similar) points at the Google Test Automation Conference. Wish me luck ...
Friday, July 06, 2007
Why Creative Chaos? - II
The main title of this blog is "creative chaos." What exactly does that mean?
First of all, it is what I came up with after about fifteen seconds of thinking, so it is hard to claim that it's the result of deep thought about the nature of software development and our need to change.
And yet, that is exactly what I am going to claim. Well, er ... sorta. "Creative Chaos" is not just a silly name; no, it symbolizes what I believe we are missing from the world of software development, and what we need more of. It is Gestalt.
Here's why:
In North American culture (and many other cultures) we tend to associate Order with Creation and Chaos with Destruction. The Middle Ages, for example, happened when Rome fell, and that was destructive and chaotic. Not only did technology stop advancing, it went backwards, and the world "forgot" about little things like engineering and indoor plumbing.
Again, in this worldview, "Order" is good. The modern military, with its focus on command and control, is what won us two World Wars.
The problem is that those are only half the picture. It is possible to have destructive order - consider the gardener who pulls up carrots not because they are malformed, but only to make his garden line up in perfect rows.
In business, destructive order is the Micro-Manager who has to direct every decision, while sapping the life-blood out of the workers.
Destructive Order is bad.
The problem is that this logical fallacy causes blindness - we fail to recognize destructive order or creative chaos. This means rooting out destructive order (for example, time-wasting and mind-numbing templates and process) can be really hard - and encouraging creative chaos can also be hard.
Yet creative chaos is important for software development!
Here are a few of my favorite quotes:
"Tom Peters's, Thriving on Chaos amounts to a manifesto for Silicon Valley. It
places innovation, non-linearity, ongoing revolution at the center of its world view. Here in the Valley, innovation reigns supreme ..." - James Bach, 1994
"Suppose you have a software development project to do. For each traditional phase, you can draw from a pool of experienced people. Rather than have several designers do the design phase and have several coders do the construction phase, etc, you form a team by carefully selecting on person from each pool. During a team meeting, you will tell them that they have each been carefully chosen to do a project that is very important to the company, country, organization, or whatever. This unsettles them somewhat. You then give them a description of the problem to be solved, the figures for how much it cost in time and money to do similar projects, and what the performance figures for those systems are. Then, after you have gotten them used to the idea that they are special, having been specifically chosen to do an important job, you further unsettle the team by saying that their job is to produce the system in, say, half the time and money and it must have twice the performance of other systems. Next, you say that how they do it is their business. Your business is to support them in getting resources. Then, you leave them alone.
You stand by to give them advice if you are asked. You get their reports, which come regularly but not as often nor as voluminously as the waterfall model. But, mostly you wait. In something like the appointed time, out pops the system with the performance and cost figures you want.
Sounds like a fairy tale, doesn’t it?" - Wicked Problems, Righteous Solutions, Pg. 155
"I think probably there are a lot of workaday programmers working on upgrades to Enterprise Java (now I've insulted all the Java programmers) who never achieve flow. To them, it's just kind of engineering step by step; it's never the magic of creation." - Joel Spolsky
When I've seen the "Magic of Creation" on projects, it didn't come from order - it came from spontaneous innovation. It happened when two engineers were chatting while playing foosball. It happened when some guy who reads my blog chatted me up at a conference - it happened at the coffee machine. I can't define it, can't describe it, can't write it down - but I can pursue it - and that can make all the difference.
And now, for something completely different
An April Fool's joke I'd been mulling over, one that turns serious at the end ...
Lean-Agile Six-Sigma Pragmatic Enterprise Testing
By Matthew Heusser
Agile Testing can streamline operational efficiencies – but is it really enterprise ready?
“Lean-Agile Six-Sigma Pragmatic CMMI Enterprise Testing” takes Agile Testing to a whole new level! Come learn the secrets of testing greats as we –
-> Learn how to streamline operational efficiencies without giving up on innovation and collaborative synergy!
-> Take advantage of process maturity and organizational process focus with CMMI!
-> Leverage Six Sigma for Pareto Perfection!
-> Sound impressive by dropping the words “pragmatic” and "enterprise" all over the place!
-> ... and I got nothin’.
Without a secret sauce or magic potion, I have to do something radically different: actually help you find more important bugs faster and assess the general state of the software more rapidly. I do this through explanation, demonstration, and exercises.
Mostly, it’s about constantly sharpening your mind, and promising and delivering things that are actually within your capability.
If that’s what you want, take training from me. If it’s not enough … you can always try buzzword bingo. I hear CMMI+Scrum is a magic potion. (No, really, google it, or, hey, just Click Here.)
UPDATE:
Again, this was a joke that I posted to a private forum a few days ago. Today, I got the brochure for SD Best Practice 2007 in the mail, and it has, no joke, a talk entitled "Lean-Agile Test-Driven Scrum". So I felt morally obliged to post this publicly ...
Thursday, July 05, 2007
GreenBar is the New Black
(Or: My Code Odyssey)
So, I had a big, nasty codebase. No current documented requirements; a few scattered historical documents. The new requirements were: "Make it do what it did before, only add features (A), (B), and (C)."
Now, keep in mind, this is an interface that produces a text file that is used to produce invoices, bills of material, and actual, you know, CHECKS.
The immediate recommendation was a total rewrite - but I did not have documented requirements, and I had a tight deadline. A rewrite would mean reverse engineering the code, figuring out what it was doing, and starting over.
To paraphrase Joel Spolsky, when you do the ground-up rewrite, it takes you a year just to get where you started, AND you've got a whole new set of bugs.
So, instead, here is my approach:
(1) Create a consistent, repeatable test environment, where the code would produce the same output every time. (Believe it or not, this did not happen in the past, because some of the database code was missing an ORDER BY clause. The output contained the same data, but sometimes in a different order.)
(2) Save a sample test run to disk.
(3) Make changes to the code to make it more readable, understandable, easier to change, etc. (an entire month's worth of blog posts here). Extract duplicate code into its own function.
(4) Run the interface and diff the output with the file saved in step (2). If the files are identical, green light. If not, look at the changes since the previous commit to version control, figure out the problem, and repeat the test run. I used a piece of software called oUnit to do the run and compare. (There's a rough sketch of this kind of check just below.)
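Here is roughly what that check looks like in spirit - a sketch in Python with made-up file and script names, since the real run-and-compare happened inside oUnit:

    import filecmp
    import subprocess
    import unittest

    EXPECTED_FILE = "golden_master.txt"   # the sample run saved to disk in step (2)
    ACTUAL_FILE = "actual_output.txt"     # what the current code produces

    class TestInterfaceOutputUnchanged(unittest.TestCase):
        def test_output_matches_saved_run(self):
            # Re-run the interface against the fixed, repeatable test environment.
            # ("run_interface.sh" is a stand-in for however the real job gets kicked off.)
            subprocess.run(["./run_interface.sh", ACTUAL_FILE], check=True)

            # Identical files mean a green bar; any difference means some change
            # altered behavior, so diff the files and review the last commit.
            self.assertTrue(filecmp.cmp(EXPECTED_FILE, ACTUAL_FILE, shallow=False))

    if __name__ == "__main__":
        unittest.main()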
-----> The codebase is now a few thousand lines of code shorter, I understand it, a maintenance programmer can take over with less of a learning curve, and changes to the code are less scary.
I've also taken the code and split BuildStagingTables from FileGeneration, so that FileGen can run in ten minutes instead of thirty. This shortens my feedback cycle for test runs, which means that if I get a RedBar, I'll have fewer changes to comb through to find the bug.
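In code terms the split is nothing fancy; it just means the slow table build and the fast file generation are separate, independently runnable steps, so a refactoring test run can skip the half that hasn't changed. A hypothetical Python sketch (the names echo mine, but the bodies and timings are stand-ins):

    def build_staging_tables():
        # Slow step (roughly twenty minutes): pull the source data into staging tables.
        # Only needs to run when the source data or the staging logic changes.
        pass

    def generate_file(output_path):
        # Fast step (roughly ten minutes): read the staging tables and write the interface file.
        pass

    def full_run(output_path):
        build_staging_tables()
        generate_file(output_path)

    # While refactoring the file-generation code, most test runs only need the
    # fast half, followed by a diff against the golden master:
    #   generate_file("actual_output.txt")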
I got all this finished about the 3rd of July - just in time to enjoy the holiday.
The stress seems to be mostly over, and I'll be blogging again. All for the need of a greenbar ...
(Coming Soon: Details on the code odyssey, and, also, rambling discussions of the value of customer-facing test automation. Don't Miss It!)