Sunday, December 30, 2007
After a brief hiatus, I would like to return to the subject of technical debt.
First off, I don't think technical debt is a question of "Yes or No", but instead a question of "How Much." After all, everyone loves the company where the boss gives them all the time they need to implement the "right" solution without any pressure ... right until it goes out of business.
In that way, technical debt is not like real debt. In the real world, debt carries interest, and the best financial choice is to eliminate it completely. After all, ten bucks in the bank with no debt beats $10,000 in savings and $9,995 in loans - the interest on the loan accumulates faster than the interest on the savings. (Or, to quote Steve Poling: "Debt is bad. Investments are good.")
But I believe the metaphor is generally solid. It is a concept that people can understand. It involves better and worse choices - the worse choice gives you instant gratification (like buying that shiny new iPod on credit) and yet hurts you in the long term. Unless you work really hard to check your financial statements and build some sort of family budget model, the cost of eating out a bit too much, concert tickets, and recurring monthly payments is really hard to measure. ("It's only $20 a month" = $240 a year. And exactly how many of those do we have, anyway? Land line, two cell phones, internet, cable TV ...)
Just like real debt, technical debt is an addiction cycle. Feeding the addiction - buying something, or making a quick hack - can provide gratification in the short term, but it makes the long term worse. So you feel bad, and to feel better, you go to the mall. (Or, to hit the next impossible deadline, you make another quick hack ...)
I had a friend in college who paid tuition with a credit card. When she got a statement asking for her monthly payment - I kid you not - she went to the ATM and withdrew money from the same card, deposited it in the bank, and sent a check to the credit card company.
Ho. Ly. Cow. Now, if we build a quick spreadsheet model of that (sketched below), we see ... wow. All she needed to change her behavior was the spreadsheet and a better idea.
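For fun, here's a minimal sketch of that spreadsheet model in Perl. Every number in it is made up for illustration - the balance, the APR, the cash-advance fee, and the minimum payment - so adjust to taste:

#!/usr/bin/perl
use strict;
use warnings;

my $balance      = 5000;        # tuition charged to the card (made up)
my $monthly_rate = 0.20 / 12;   # assume a 20% APR
my $advance_fee  = 0.03;        # assume a 3% cash-advance fee
my $min_payment  = 0.02;        # assume the minimum is 2% of the balance

for my $month (1 .. 12) {
    my $payment = $balance * $min_payment;
    $balance += $payment * (1 + $advance_fee);  # withdraw the payment, plus fee
    $balance -= $payment;                       # mail in the check
    $balance *= 1 + $monthly_rate;              # interest accrues on the rest
    printf "Month %2d: balance is \$%.2f\n", $month, $balance;
}

Run it and the balance climbs every single month; she never pays down a dime, and the fees and interest compound. That's the addiction cycle in a few lines of code.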
Of course, financial debt is not the only form of addiction. Another common compulsion is overeating - which I just discussed in a recent post. Jerry Weinberg covers a systems analysis of overeating in his book Quality Software Management, Volume I.
How do you get out of the weight-gain trap? Surprise, surprise: you make the hidden trade-offs visible, create an environment where successes are celebrated and set-backs are supported - and provide options. For example, at Weight Watchers, they have a weekly weigh-in and provide healthy alternatives; carrots and celery instead of chocolate frosted sugar bombs and Mountain Dew. Now, pounds lost ain't a great metric; muscle weighs more than fat, and it is possible to shed inches off your waist but maintain your weight. As I understand it, Weight Watchers does take a holistic approach, and does measure multiple things - including calories.
Weight Loss. Finance. Human motivations. Metrics. We've got a lot more to cover in this area.
Is it time to start thinking about a peer workshop?
Friday, December 28, 2007
Lightning Talks Web Page Up -
I am now officially in recruiting mode for lightning talks for ST&PCon, and have added a web page about lightning talks - here.
Thursday, December 13, 2007
Metrics
I'm overweight.
That means: 210 Lbs, 5'11". Lots of driving, airplanes, lots of coffee, lots of typing (coding, testing, writing), and raising kids will do that to you.
Yes, I coach soccer, which is an hour of light exercise a few times a month in-season.
I suppose I could look at someone else more overweight and say "hey, I'm not that bad off."
I could make a New Year's commitment to exercise and eat better; but I gave that one up this year in May for STAREast and never made it back.
The problems? First, over-eating isn't *visible* to me - or to anyone else - except in a gradual form that is hard to notice. I don't have much energy and my clothes don't fit ... but I haven't had much energy in a while, and I can always buy larger clothes. So I am trapped in an addiction cycle; I feel bad, so I eat, and feel better for a short while, but worse in the long term. So, the next day, I feel bad, so ...
Second, exercise is not convenient. Especially in the winter.
Now, if I could just make diet and exercise visible to my friends, family, and colleagues - something I could bask in glory for when I succeed, and something they could hound and decry me for when I fail - then I might have a chance to break this addiction cycle. I think the key is to make it public.
So, here are a few things I'm going to do:
1) I purchased an elliptical trainer, so "It's too cold" is not a good excuse.
2) I created an account on Traineo.com.
Traineo is a metric maniac's dream website.
You enter your weight as often as you check it, and it creates pretty graphs.
You enter your amount of exercise, and it creates pretty graphs - even calculating the calories you burned based on the type of exercise, the time you spent, and the intensity.
You can create a goal, and it will show how far you are from that goal, and how much time you have remaining.
And you can create custom metrics. Here's my site.
Here's my basic strategy:
If I work out three to four times a week on the elliptical, that should be enough to maintain, but not lose, weight.
So I want to do something else to lose the weight. How about giving up Mountain Dew during the work week? A 20-oz bottle twice a day, five days a week - that's 200 oz less Mountain Dew a week. (Quick math below.)
I also often eat junk for breakfast. So I created a score for my breakfast eating: 0 is Cheerios or fruit; 5 is super-sized McDonald's.
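For the record, here's the back-of-the-envelope math on the Mountain Dew plan in Perl. The calorie count is from memory - roughly 290 calories per 20-oz bottle - and 3,500 calories per pound is the usual rule of thumb, so treat both numbers as assumptions:

#!/usr/bin/perl
use strict;
use warnings;

my $bottles_per_week = 2 * 5;                   # twice a day, five work days
my $oz_per_week      = 20 * $bottles_per_week;  # 200 oz of Dew
my $cals_per_week    = 290 * $bottles_per_week; # about 2,900 calories
my $weeks_per_pound  = 3500 / $cals_per_week;   # roughly 1.2 weeks

printf "Skipping %d oz (about %d calories) of Dew per week\n",
    $oz_per_week, $cals_per_week;
printf "That's about a pound every %.1f weeks from the soda alone\n",
    $weeks_per_pound;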
What this has to do with software development
I've been using traineo for four days now, and just four days into the system, I found that I was eating candy bars and other snacks.
That doesn't show up in the metrics!
So the metrics have some value, but are imperfect. Once you realize how to game them, it's pretty easy to abuse them and remove the value, if not make them downright misleading and harmful.
Does that sound familiar?
Over the holidays, I may not have time to blog, but I'll try to keep the traineo site up. I believe there will be additional insights into software that we can mine from it.
And, if you'd like to encourage me, please feel free to check out the site and see how I'm doin'. There is even a role called a traineo "motivator" where traineo emails you my stats weekly. If we've actually met in real life, and you're interested in being a motivator, let me know.
--heusser
PS - If you can't see the tie-in between technical debt, metrics, and weight yet - don't worry, it's coming ...
Greased Lightning (Talks)
Lightning Talks are ten five-minute talks in a sixty-minute slot. As a speaker, I enjoy them because they sharpen my mind. There's no time for an introductory joke, no long agenda, no 15 minutes of "body" with 15 minutes of Q&A at the end. Nothing to hide behind. You get to make one point, make it well, and get off the stage. If you're full of it, the audience knows pretty quickly. Then again, if you do a bad job, it's over quickly.
As a learner, I enjoy lightning talks for, well ... pretty much the same reasons. Plus, there's the chance to learn five or six things in a one-hour slot. Things that must be easily defensible and easy to implement, because they have to be explained in five minutes.
I will be moderating lightning talks in San Mateo, California in April for the ST&P Conference. That means I need speakers - and that means (maybe) you! I'm afraid I cannot offer you a conference discount or freebies; you'll have to speak for its own benefit. But if you want to try speaking by dipping your toe in the pool, or you are speaking at the conference but have the one odd thing in the corner that's itching you - please, drop me a line.
Note: I am using the term with written permission from Mark Jason Dominus, who introduced lightning talks at YAPC in 2001. If you are looking for more information on lightning talks or ideas, here are a few web pages:
About Lightning Talks - Dominus
Giving Lightning Talks - Mark Fowler
Here's an interesting lightning talk on YouTube: Selenium vs. WebDriver
Again, I'd love to hear from you. If you're interested in giving a lightning talk at ST&P, please, send me an email.
Tuesday, December 11, 2007
Testing Philosophy II -
About every four months, Shrini Kulkarni convinces me to drop the term "Test Automation" from my vocabulary. After all, testing is a creative, thinking process. We can automate some steps of what we are doing, speeding up repetitive work - but those don't turn out to be the investigative, critical thinking steps(*).
Still, I want to use a word to describe when I use automation to go faster. I use awkward terms like "automating the repetitive steps" or "build verification tests" for a few months, until I end up calling it test automation. Then Shrini emails me, and the cycle repeats.
Since I am on the down-stroke of one of those cycles, I thought it would be appropriate to link to an old paper of Brian Marick's:
I want to automate as many tests as I can. I’m not comfortable running a test only once. What if a programmer then changes the code and introduces a bug? What if I don’t catch that bug because I didn’t rerun the test after the change? Wouldn’t I feel horrible?
Well, yes, but I’m not paid to feel comfortable rather than horrible. I’m paid to be cost effective. It took me a long time, but I finally realized that I was over-automating, that only some of the tests I created should be automated. Some of the tests I was automating not only did not find bugs when they were rerun, they had no significant prospect of doing so. Automating them was not a rational decision.
The question, then, is how to make a rational decision. When I take a job as a contract tester, I typically design a series of tests for some product feature. For each of them, I need to decide whether that particular test should be automated. This paper describes how I think about the trade offs.
The paper is "When Should A Test Be Automated?" - and it is available on the web.
Now, that isn't exactly my philosophy, but I think it's good readin'.
Speaking of good readin': if you are doing test automation by writing code in a true programming language to hit a GUI, you might enjoy this classic by Michael Hunter. (If you are not, it's still an interesting read, but everything after about page 12 is very specific about writing code libraries for application 'driving'/scripting.)
And now, Shrini, I'm back on the up cycle. Really. I promise.
--heusser
(*) - It doesn't help that a lot of "test automation" products, especially in the 1990's, were snake oil - designed by marketeers to show how testing could be "sped up" with massive ROI numbers. I can't tell you how many boxes of GUI test automation software I have seen sitting, unused, on shelves. Lots - and even more than that are the stories I've heard from colleagues.
When I talk about test automation, that's not what I mean - see philosophy I for a better description.
Monday, December 10, 2007
Ruby Vs. Python for Testing
There is an interesting discussion going on on the Agile-Testing Yahoo Group right now, about the differences between Python and Ruby.
It is oddly reminiscent of a conversation I've had a dozen times recently - about the advantages of Ruby vs. Perl. All the old-school computer scientists are saying that there isn't much difference, while the newbies who don't know Python (or Perl) believe it's the Cat's Meow and the New New Thing.
I just put out a post with my two cents, which I will repeat here:
I was talking with a colleague about this, and he pointed out that in Ruby you don't need semicolons at the end of lines or parentheses around function calls.
So, instead of this kind of thing (this example is in Perl):
CookMeal("Eggs", "Ham", "Bacon");
ServitUp();
You can write this:
CookMeal Eggs, Ham, Bacon
ServitUp
---> To a computer scientist, this looks like BASIC - junk made up for 6th graders to learn programming, not real computer science. We cry out "Give me your strong indentation requirements and clear end-of-line parsers! Give me Python or Perl or C++!"
Yet - to a customer who is defining acceptance tests, the second example looks a lot more like the English language and less like a magical incantation.
And that, I honestly believe, is the major reason that DSLs for customer acceptance testing are more popular in Ruby than in Python.
--heusser
Note - To get rid of the quotes, you'd have to define an enum type, which you 'could' do in any language ...
Wednesday, December 05, 2007
My theory of software testing - I
What's the right mix of exploratory testing, "planned" manual testing, and test automation?
My short answer is "it depends." Now, you don't need to point out that "it depends" is non-helpful - I realize that - and I am going to try to go beyond it.
The reason I say "it depends" is that it depends on the type of problem you have. So one thing I can do to go farther is to list a bunch of possible problems, along with kinds of testing I have seen that fit.
1) The build verification test, or BVT. These are tests you run immediately after every build to make sure the software isn't completely hosed. In a typical Windows app, this is: Install / File New / Add some stuff / File Save / Quit / Re-launch / File Open / The screen should look like THIS. You add more tests over time, but the point is: if these tests fail, it was probably one big, massive regression error, and any additional testing information is suspect. The test team will work on other projects until the devs can get the BVT to pass.
BVTs are incredibly boring, and often worth automating.
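To make that concrete, here's a minimal BVT sketch using Test::More. My::Editor is a made-up module standing in for whatever drives your application - the point is the shape of the test, not the API:

#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 3;
use My::Editor;    # hypothetical driver for the app under test

my $app = My::Editor->new;                        # launch
$app->new_file;                                   # File / New
$app->type("hello, world");                       # add some stuff
ok( $app->save("smoke.txt"), "File / Save works" );
$app->quit;                                       # quit

$app = My::Editor->new;                           # re-launch
ok( $app->open("smoke.txt"), "File / Open works" );
is( $app->text, "hello, world", "the screen looks like it should" );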
2) The Fahrenheit-to-Celsius conversion test. Sure, you could have a human being test a hundred examples manually, but if you have a spreadsheet that you can use as a source of truth, why not write a program that loops through all values from -10,000 to 10,000 by increments of 0.001, calls the function, does the math in the spreadsheet, and compares the two? Note that this does not provide information about boundaries, which may be best explored - but it can help you build some confidence about the happy middle.
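Here's a sketch of that brute-force check in Perl. The convert_f_to_c() stub stands in for the real function under test, and the loop counts in integer thousandths so the loop variable doesn't accumulate floating-point drift:

#!/usr/bin/perl
use strict;
use warnings;

# Stand-in for the real function under test.
sub convert_f_to_c { my $f = shift; return ( $f - 32 ) * 5 / 9; }

my $failures = 0;
for ( my $i = -10_000_000; $i <= 10_000_000; $i++ ) {
    my $f        = $i / 1000;               # -10,000 to 10,000 by 0.001
    my $expected = ( $f - 32 ) * 5 / 9;     # the spreadsheet's math
    my $actual   = convert_f_to_c($f);
    if ( abs( $actual - $expected ) > 0.0001 ) {
        print "FAIL at $f: got $actual, expected $expected\n";
        $failures++;
    }
}
print $failures ? "$failures failures\n" : "the happy middle looks happy\n";

(Twenty million iterations is slow in an interpreted language; in real life you might step by 0.1, or sample randomly.)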
3) Model-Based Testing. Similar to #2, if the application is simple enough you can create a model of how the software should behave, then write tests to take "random walks" through the software. (Ben Simo has several good blog posts on this topic, including here.)
Despite the appeal of model-based testing, it requires someone with true, deep testing expertise, true development expertise, modeling skill, and, usually, a GUI-poking tool. So, despite all its promise, the actual adoption of model-based testing has been ... less than stellar.
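Still, the core idea fits in a page. Here's a toy sketch of a random walk over a three-state login model; everything here is invented for illustration, and check_state() is a made-up hook that would poke the real GUI and compare it to the model:

#!/usr/bin/perl
use strict;
use warnings;

my %moves = (    # legal actions from each state
    LoggedOut   => [ 'login_ok', 'login_bad' ],
    LoginFailed => [ 'login_ok', 'login_bad' ],
    LoggedIn    => [ 'logout' ],
);
my %lands_in = (    # where each action should leave the app
    login_ok  => 'LoggedIn',
    login_bad => 'LoginFailed',
    logout    => 'LoggedOut',
);

my $state = 'LoggedOut';
for ( 1 .. 100 ) {
    my @legal  = @{ $moves{$state} };
    my $action = $legal[ int rand @legal ];    # take a random legal step
    $state     = $lands_in{$action};
    print "$action -> $state\n";
    # In real life: drive the app here, then
    # die "model and app disagree" unless check_state($state);
}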
4) Unit Testing. This is as simple as "low-level code to test low-level code" - often much further down in the bowels than a manual tester would want to go. It provides the kind of "safety net" test automation suite that makes a developer comfortable refactoring code. And devs need to do this; otherwise, maintenance changes are just sort of hacked onto the end, and, five years later, you've got a big ball of mud.
5) Isolation Problems. If you have a system that requires a stub or simulator to test (an embedded system, say), you may want - or need - to write automated tests for it.
6) Macros. Any time you want to do something multiple times in a row to save yourself typing, you may want to do automation. Let us say, for example, you have a maintenance fix to a simple data extract. The new change will stop pulling employees of kind X from the database, and start pulling kind Y.
You could run the programs before and after, and manually compare the data.
OR you could use diff.
And, for that matter, you could write a SELECT COUNT(*) query or two from the database and see that:
The total number of new rows = (old rows + Y members - X members)
This is test automation. Notice that you don't have to have a CS degree to do #6 - and most of the others can be done by a tester pairing up with a developer.
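Here's a sketch of that count check in Perl with DBI. The connection string, table names, and column names are all made up; the shape of the check is the point:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Connection string, tables, and columns are hypothetical.
my $dbh = DBI->connect( "dbi:Oracle:hr", "user", "pass", { RaiseError => 1 } );

my ($old_rows)  = $dbh->selectrow_array("SELECT COUNT(*) FROM extract_old");
my ($new_rows)  = $dbh->selectrow_array("SELECT COUNT(*) FROM extract_new");
my ($x_members) = $dbh->selectrow_array(
    "SELECT COUNT(*) FROM employees WHERE emp_type = 'X'");
my ($y_members) = $dbh->selectrow_array(
    "SELECT COUNT(*) FROM employees WHERE emp_type = 'Y'");

my $expected = $old_rows + $y_members - $x_members;
print $new_rows == $expected
    ? "PASS: $new_rows rows, as expected\n"
    : "FAIL: got $new_rows rows, expected $expected\n";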
So when should I not do automated testing?
1) To find boundaries. As I hinted at above, automated system-level tests are usually pretty bad at finding bounds. So if you've got a lot of boundary errors, by all means, test manually.
2) As Exploratory Testing. Sometimes when we test, our goal is investigation; we play "20 questions" with the software. We learn about the software as we go. This kind of "exploratory testing" can be extremely effective - much more effective than mine-field automated testing.
3) When we don't need to repeat the tests. Some things only need to be tested once - tab order, for example. Others may be best tested by a human, because they require some kind of complex analysis.
Sapient Testing
Human beings can be sapient; they can think. Humans can look at a screen and say "Something's not right ..." where a bitmap compare would simply fail or pass. Humans can look at a screen and intuit the most likely parts to fail. So the key is to use the technology to help automate the slow, boring, dumb tasks.
For example, at GTAC this year, one group said that it had 1,000 manual tests that ran overnight - recording a screen capture at the end of each. The next morning, a test engineer sits down with his coffee and compares those screen captures to the previous night's - using his brain to figure out which differences matter, and which don't. After he finishes those build verification tests (in about a half hour), he can then get down to pushing the software to its knees with more interactive, exploratory testing.
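You could even script the first pass of that triage - not the judgment, just the narrowing. Here's a sketch that flags which of last night's captures differ at all from the night before's, so the human eyes go straight to the changes (the directory layout is invented):

#!/usr/bin/perl
use strict;
use warnings;
use Digest::MD5;

sub md5_of {
    my $file = shift;
    open my $fh, '<:raw', $file or die "Can't open $file: $!";
    return Digest::MD5->new->addfile($fh)->hexdigest;
}

for my $new_shot ( glob "captures/last_night/*.png" ) {
    ( my $old_shot = $new_shot ) =~ s{last_night}{night_before};
    if ( !-e $old_shot ) {
        print "NEW: $new_shot\n";
    }
    elsif ( md5_of($new_shot) ne md5_of($old_shot) ) {
        print "CHANGED - human, take a look: $new_shot\n";
    }
}

Note what the script can't do: tell you whether a difference matters. That part stays sapient.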
Now, that morning coffee ... is that automated testing, or is it manual? My short answer is: I don't really care.
These are just ideas, and they focus almost exclusively on the relationship between Dev and Test. I haven't talked about the relationship between Analyst and Test, for example. Don't take this as Gospel. More to come.
Tuesday, December 04, 2007
You keep using that word ...
Last week I wrote that:
There's something going on here with the way we use terms like "Test" and "Requirement" that causes confusion and misunderstanding. Fundamentally, various groups, like the traditional test community, the "Agile" Developer Community, the Scrum People, and so on, are looking for different benefits from testing. Thus, in conversations, we "miss" each other or leave unsatisfied ... Perhaps that's a post for another day.
Then I spent the next week in a private email discussion on this subject, started by Ben Simo. During that discussion, we identified two very different kinds of tests:
A) Examples to help with the construction process. For example, in a simple Yards-To-Meters conversion program, examples can show the bounds for input, can demonstrate how many decimal places to take the rounding and what to do with garbage input, and can provide samples to check the formula. You could argue that tests as examples can act as requirements. (You Would Be Wrong, but you could argue it.) Personally, I hold that these kinds of "construction/example" tests can augment requirements, and that's a real good thing. (There's a sketch of what I mean below.)
B) Tests to learn things about the software. These tests are investigative in nature, and make sure we are not "fooled" into thinking things like "The Software Works" without reason. These investigative tests generally require much more aggressive and advanced critical thinking skills, often in real time.
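To make (A) concrete, here's a minimal Test::More sketch of tests-as-examples for that Yards-To-Meters program. The yards_to_meters() function is hypothetical - the stub exists only so the sketch runs:

#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 4;

# Stand-in for the real conversion function.
sub yards_to_meters {
    my $yards = shift;
    die "input must be a non-negative number\n"
        unless defined $yards && $yards =~ /^\d+(\.\d+)?$/;
    return sprintf "%.2f", $yards * 0.9144;
}

is( yards_to_meters(1),   "0.91",  "one yard, rounded to two places" );
is( yards_to_meters(100), "91.44", "the happy path" );
is( yards_to_meters(0),   "0.00",  "the lower bound" );
ok( !eval { yards_to_meters("junk"); 1 }, "garbage input is rejected" );

Examples like these pin down the rounding, the bounds, and the garbage handling - they augment the requirements. What they don't do is investigate.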
---> One problem that I've seen is that we talk about these things, "tests", but don't expressly talk about which of the two classes we mean - and, of course, there are more than two classes. On the discussion list, I think Ben summed it up best:
I think we have people thinking that automated "acceptance tests" can replace traditional critical testing. We now have some testers thinking that developers are going to be more involved in the critical testing. People coming from the two viewpoints don't seem to realize that they aren't all talking about the same thing. ... Although there are some testing ideas that are not compatible, I do believe that things like TDD and automated acceptance tests are good as long as people don't think that automated execution replaces restless thinkers. If I had to have one or the other, I want the restless thinkers. However, I hope we can have both.
- Simo
---> Now, for my thoughts. I've spent most of my career with a business card that said "Developer." When I started doing programmer-testing, I did actual critical thinking and "does it work" testing.
Then I ran into various Agile dev/testers that, well, weren't. They were using tests as examples and not finding bugs - or writing blogs, articles, or books about "testing" that didn't really talk about critical thinking.
My initial response to this was:
"What the hell is wrong with you people?"
After some time, I realized that a different philosophy about software testing leads to different results.
If those are the results you actually want, well ... I guess that's ok.
In the free market of ideas, these ideas will compete - and sometimes, complement each other.
May the system that delivers the best result win.
Saturday, December 01, 2007
GASP? ... or not.
Doctors have a rule for sterilization: "Always Wash Your Hands." Simple things like that can be done to improve the outcome of every single project.
At the recent GLSEC keynote, Bob Martin asked us: if we were all doctors, and a hospital administrator called a meeting telling us that we were spending too much time washing our hands, would we stop doing that?
Of course not. Medical professionals know the risks involved in skipping that step, and simply won't do it.
Likewise, accountants have a concept of "Generally Accepted Accounting Principles," or GAAP.
I started writing this blog entry intending to find some "Generally Acceptable Software Principles" (GASP). After all, if accountants and doctors can do it, why not us software guys?
And, in fact, I have a few. My colleague and friend Andy Lester does some speaking and consulting, and his immediate refrain is "Bug Tracking, Version Control, and Daily Backups." That is to say, if your software organization doesn't have these three things, trying to do anything else for "process improvement" is a waste of your time. Get Bug Tracking, Version Control, and Daily backups first.
I recall that Steve McConnell had a few talks on this subject, so I googled around, and found Software Development's Low Hanging Fruit and the Ten Most Important Ideas in Software Engineering.
Now, I like Steve McConnell. I've read his books, we have corresponded a bit, and I've quoted him quite a bit in my writing. For the most part, I like what he has to say. But his low-hanging fruit is nothing like what I would recommend, and his "ten most important ideas" reiterates the classic cost-of-bug-fix curve.
That might be true for some organizations, but it isn't true for all.
I got to thinking about GASP because, on one email thread this week, we discussed the possibility of Test-Driven Development going mainstream, and I wrote this:
I don't know if TDD will ever go mainstream. My prediction is that we will continue to have a large group of people who are doing a 'good enough' job of development that don't read books or go to conferences. Those people will become managers, and if they hire a new grad who knows TDD, > 80% of those new grads will just follow the prescribed, crappy process instead of looking for a unit test framework for Ada.
One of the great things about software development is that if you have an idea, you can go into your garage and build something cool and sell it. Then you grow, have to maintain legacy code, and process becomes important. All over the world we have companies on a great continuum, between life-critical software and video games for a PDA.
It would seem to me that requiring software engineers to be licensed - requiring, perhaps, an advanced degree (think law or medicine) and a board of examiners - might improve our processes, but it would create barriers to entry that would stop a lot of really smart 16-year-olds.
I was once a really smart 16-year-old without so much as a high school diploma.
It seems to me that the best thing we can do as an industry is to not outsource our discernment about what practices are "best" or "right" or "professional," but instead keep that responsibility for ourselves - and live with the consequences of carrying it. Then we can judge our practitioners by the output they produce.
What do you think?
Thursday, November 29, 2007
Working with a recruiter?
I seem to hear the same kinds of questions asked over and over again. I recently replied to a thread on JoelOnSoftware about how/when to work with recruiters. I figured it was worth repeating here.
DISCLAIMER: What follows is my opinion. My experience is in the continental United States, and I assume you are pursuing a full-time, on-site employee role. I suggest working with a recruiter if you can find one of high enough caliber, but that should not be your entire job search. Larger companies tend to work with candidates directly, as will small ones; it's the medium-sized companies that tend to use recruiters - in my experience.
That said ...
My advice is that if you are in one specific geographic area, very carefully pick a single recruiter and work with them. The reason: if you are working with four recruiters, and each sends your resume to BigCo, BigCo may decline to hire you for legal reasons. (They want to avoid paying four recruiters' fees, and they want to avoid a lawsuit - or, more likely, strained relations and a lot of wasted time.)
Recruiters from large, publicly-traded companies (K-Force, for example) tend to be more respectable. Recruiters who show up to users' group meetings and don't get laughed at or "called out" can be especially good. Ask whom they have placed from that users' group, and use those people as references.
Groups that do permanent placement are usually better than contracting houses.
If you are thinking of moving, sure, work with one recruiter from NYC and one from California and one from Florida.
Conclusion: Get references, shop around, and pick ONE.
Monday, November 26, 2007
Agile Alliance Test Tools Workshop
Elisabeth Hendrickson recently hosted the Agile Alliance Functional Test Tools Workshop in Portland, Oregon. I am impressed by both the idea and the attendees - Ben Simo, Brian Marick, Gerard Meszaros, Jim Shore ... it's an impressive list.
While the stars did not align and I could not attend, the good news is that the conversation is continuing over the internet.
First, there is the AA-FTT Yahoo discussion group, which is open to the public.
Also, the lightning talks were video recorded, and they are available on Google Video right here.
If you'd like to pick one to start with, Brian Marick's "Let Them Eat Cake" is especially interesting.
There's something going on here with the way we use terms like "Test" and "Requirement" that causes confusion and misunderstanding. Fundamentally, various groups, like the traditional test community, the "Agile" Developer Community, the Scrum People, and so on, are looking for different benefits from testing. Thus, in conversations, we "miss" each other or leave unsatisfied. Brian even did a second lightning talk that touches on this.
Perhaps that's a post for another day.
Monday, November 12, 2007
Technical Debt - V
Overall, I haven't been happy with the Technical Debt Series.
For one thing, it reads like old cliches. "Just don't do it" is just like advice to lose weight: eat less, exercise more.
People who are overweight are drawn by an entire system of forces to eat more and exercise less. The less-healthy life gives an immediate, short-term benefit, with erosion in the long term. (Sound familiar?)
Telling someone who is overweight to "eat less, exercise more" doesn't help them. And, I am afraid, that's a lot of what went into the series. (That, and brow-beat your manager.)
The sad reality is that when you cut corners, the "good job" and firm handshake of the boss is immediate, certain, and positive. The pain from cutting that corner is negative, delayed, and uncertain. Heck, if you're lucky, next time it might just be Somebody Else's Problem (SEP)!
This system of forces is very strong, yet completely invisible to many managers and executives. Thus, what option are technical folks left with but to browbeat management?
There has got to be a better way.
I gave a lightning talk at GLSEC on technical debt, and discussed it at some length with Steve Poling, who moderated lightning talks. Steve pointed out that the technical debt analogy is one that can resonate with managers and executives - people who understand money. His idea is that we study it further, creating a better explanation of the behavior, perhaps some measurements around it, some prescriptions to fix it, and then try those prescriptions, see if they work, and generate case studies.
That, my friends, is a lot of work. I believe it is worth it, so Steve and I are considering creating a non-profit, non-commercial Workshop On Technical Debt (WOTD). The workshop would be free to attendees, one to two days in length, and probably located at a West Michigan college, probably around August.
If you are interested, leave a comment or shoot me an email.
More to come.
Thursday, November 08, 2007
Overheard at GLSEC ...
Anytime I hear that "Failure is not an option", I think to myself that it's true. Failure is probably not an option ... it is assured.
-- Michael Bolton
Tuesday, November 06, 2007
See you in San Francisco ...
Well, almost. I will be in San Mateo, California, April 15-17, for the Software Test & Performance Conference.
My conference talks will be on "Evolution, Revolution and Test Automation", "Reinventing Software Testing(*)", "So you're doomed", and, of course, lightning talks.
If you haven't worked with the folks at BZMedia before (the people behind STPCon and EclipseWorld), you might want to consider it. They are currently seeking authors and speakers on software performance testing topics. This will be my first STPCon, so I will certainly let you know how it goes.
In other news, it's just about time for GLSEC. So, if you'll excuse me, I gotta go prep ...
--heusser
(*) - Thanks to Sheri Valenti for that third title. I can only hope that my talk will be worthy of it.
Friday, November 02, 2007
Software Architecture
Long-time readers of Creative Chaos know that I am a bit frazzled by the term "Software Architect." I don't think it means anything.
Or, perhaps, to put it another way: Perhaps it means everything?
The confusion over the word reminds me of the confusion over the term "testing," which reminds me of Bret Pettichord's Four Schools of Software Testing.
It occurs to me that there are at least five distinct schools of computer architecture:
CPU Architecture: Highly specialized and different; rarely if ever confused with the items below
A CPU Architect looks a lot like: An electrical engineer
Exemplar: Multi-Core CPU’s
Systems Architecture: Interested in the technology stack used by the business – for example, HP/UX servers running Oracle as DB servers, Linux web servers, and desktop PCs running Windows
A systems architect looks a lot like: A director of IT services
Exemplar: Service Level Agreements, Redundancy, Failover, Backups
Software Architecture: Interested in implementing various strategies to solve problems, such as Session, State, Domain Logic, Polymorphism, MVC, and so on
A Software Architect looks like: A highly-abstracted programmer
Exemplar: UML Diagrams
Organization Architecture: Interested in how to seamlessly integrate people, processes, and tools while speaking a common business language
An Organization Architect looks a lot like: Some guy asking a bunch of questions and making your job harder
Exemplar: The Zachman Framework, “Enterprise” Architecture, The City Planning Analogy
Consulting Architecture: Interested in helping the customer and technical staff reach a shared understanding of the work, breaking the work up into manageable chunks, helping the customer understand the solution, and sticking around to see the solution implemented.
A Consulting Architect looks a lot like: What we used to call a 'systems analyst' in the 1980's
Exemplar: Story-Cards and a release schedule
This is really just a conceptual framework. Thus, when I get into arguments about the meanings of the word "Architecture", I can say "Oh, wait, you're coming from the consulting school" and do translation.
What do you think? Does this warrant further writing?
Thursday, November 01, 2007
Technical Debt - IV
Last time, I talked about how to avoid technical debt as a contributor. I am convinced that by un-training management you can both restore a sane life and decrease the debt.
So your manager, Timmy, doesn't come to you anymore. The problem is, he's got two other people who are (very) willing to take unprofessional shortcuts, so he goes to them instead.
How can you influence that behavior?
So, you overheard the conversation from yesterday – Timmy is hinting to a dev that he needs to take some shortcuts. The dev, John, hasn't given up yet, but you know he will.
Let's play this conversation:
You: Hey, John, how's it going?
John: Not good. Tim needs this function by Friday.
You: Really? Or what?
John: Oh. Wow. You know – I don't know. He just said we need it.
You: Will we go out of business?
John: Well, no, but we promised it to the customer.
You: Oh, you promised it to the customer? Yeah, I understand. You've got to meet that commitment.
John: Uh, no. I never promised anything. I think the sales guy promised it or something.
You: Still, if Tim gave a qualified estimate to sales ...
John: No, dude. Tim didn't promise anything. They threw a date at us.
You: So ... what's the problem?
John: I've got to work every evening and do a crappy job to meet this unreasonable deadline, that's what!
You: Why?
John: Because TIM SAID SO!
You: Oh. Okay. What if you just don't?
(Silence
...
Crickets Chirp)
John: Well, Tim asked me to.
You: Okay. But the important thing is that it is your choice. You are talking like it is not your choice – like you are the victim of fate or something.
John: Well, I don't have a choice.
You: Sure you do. Go home at five. Do it right. That's what I did about the foo feature last month.
John: Yeah, I heard about that. Tim was pi-iised.
You: Yes, but I still have a job, don't I?
John: True, true. But you're not the go-to guy anymore, are you?
You: You mean the clever hack guy that gets more stress injected into his life in trade for being a 'team player'? No, that's not me anymore. Now it's you. Congratulations.
(Silence
...
More Crickets)
John: Well, I get a certain pride in being the go-to guy. Besides, I want to get promoted.
You: Oh. Oh. Okay. Promotions. Right. Does being the go-to guy get you promoted? Who was the last go-to guy that got promoted for that?
Silence.
John: Hey, nobody.
You: Riiight. So being the hero gets you more work, more stress, and keeps you pigeonholed in one specific sweet spot. So ...
John: But I LIKE being the go-to guy!
You: Ok. That's fine. The important thing is to recognize that you are making a choice, and it's yours to make. I can respect that - just don't complain about it later.
(Later that day ... )
John: So, I'm going to tell Tim it'll take two weeks.
You: Make sure you give him options. He could assign me or Bob to help.
John: No way. You're working on that multi-million dollar bank thing. And Bob is working on that memory corruption thing.
You: So ... everyone else on the team is working on something more important and can't be spared?
John: Yes, I asked.
You: In that case, this project is the LEAST important project in the department. That's not worth losing any sleep over.
Summary: There are three keys here.
A) You have to be successful personally in eliminating technical debt. You must have street cred; you've said no, kept your job, and restored sanity to your life.
B) You have to make it clear to the other person that adding technical debt, like working overtime, is a conscious choice. Don't let them play the victim "I have no choice" card. That is abdicating responsibility. Making compromises happens every day - and it is a conscious choice we need to take responsibility for.
C) Don't let it turn into an Us vs. Them conflict. The key is to have the contributor present an entire series of options to management. Perhaps one of them is to cut quality – in which case the manager is assuming debt, and, likely, risk. In that case, just ask for a statement of assent by email.
Your colleague might just get that email, and you might too – but overall, if you just follow my quick plan, you can vastly decrease the cruft and drag on your codebase, thus increasing your overall velocity.
That's not stealing from the company – it's investing in the company. And it's certainly not rude or insulting to management. Instead, it honors management by asking them to, well, you know ... manage.
In the past posts, I've been tough on management. I'll close by turning the tip of the spear the other way – what enlightened management can do to prevent and pay off technical debt. (Those darn contributors with their clever hacks! That's right folks – no one is coming out of this unscathed ...)
Wednesday, October 31, 2007
Do you document your test cases?
We are debating the value of software testing standards right now on the context-driven testing list.
Here's my latest post ...
>and by the same token writing test cases doesn't make
>your testing worth any more than if you wrote
>NOTHING AT ALL.
For the broad, general case, James, I agree with you.
However, (to borrow a phrase) can you imagine a situation where this is not the case?
For example - instead of two pages of MS Word documents per test case, imagine one row in a spreadsheet, with five columns -
What you should do
What we expect to happen
What actually happened
Pass/Fail
Notes
Your program is a Fahrenheit to Celsius conversion. The requirements talk about the formula and give cases for 0, 32, 212 and 100 - but don't cover bounds or rounding.
The test cases cover bounds and rounding, and the customer views them and agrees.
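To make that concrete, a single row might read like this (an invented example):

Enter 212, press Convert | Displays 100 | Displays 100 | Pass | Also a rounding check: 75 converts to 23.89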
In this case, test cases are a form of documentation. Heck, I wrote an article on it!
http://www.ddj.com/architect/199600262
My main problem with this is that using this side-effect logic, you aren't adding value to the forensic and investigative process of figuring out if the software works.
In other words, your "test documentation" may help with something, but, at this point, it is not helping you test. So why call it test documentation?
Luckily, I can think of other examples. Say you have an API that does the conversion, and a test suite that looks like this:
use Test::More tests => 4;

my ($blnOk, $msg, $convert) = FahrToCel(5001);
ok(!$blnOk, 'Limit of function is 5000');
ok($msg eq 'FahrToCel Limit Exceeded', 'And error message makes sense');
($blnOk, $msg, $convert) = FahrToCel(5000);
ok($blnOk, '5000 and under work fine');
is($convert, 2760, '5000 Fahrenheit is 2760 Celsius');
----> These examples not only provide basic regression, they provide examples of the basic API for the maintenance programmer, and they get the easy, simple bugs out of the way so that we can focus on finding the real hard ones.
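(For the curious: the suite above implies an API shaped something like the sketch below. The three-value return and the 5000-degree limit come straight from the tests; the body is just my guess, not the real code.)

sub FahrToCel {
    my ($fahr) = @_;

    # The tests above imply a hard upper limit of 5000 degrees
    return (0, 'FahrToCel Limit Exceeded', undef) if $fahr > 5000;

    # The standard conversion formula: C = (F - 32) * 5/9
    my $cel = ($fahr - 32) * 5 / 9;
    return (1, '', $cel);
}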
Sadly, I have to agree with James and Cem's comments. In the years that I have heard the mantra of "You must document your test cases", the few examples I saw had much more complexity and needless detail than the examples above, and we never automated at the API level with simple, straightforward code - meaning that the Return On Investment for the practice went way down.
Again, sadly, I suspect that's because the gurus had never actually, well ... done much of the stuff in the field.
And that, in a nutshell, is why I am involved in the context-driven community. :-)
Monday, October 29, 2007
Microsoft Tester Center
Microsoft announced its new 'community' for software testers last week at STARWest.
The 'community' (which is a fancy name for a combination webzine, blog, and forums) is located here.
Cynical people will point out that this is Microsoft's attempt to win the hearts and minds of software testers, thus, in five years, increasing the number of QA Managers who purchase Microsoft test products and hire and train people who use those products. This creates a 'gravity well' for Microsoft products.
To which I say: Bah.
Testers are pretty independent thinkers by nature. At least Microsoft is trying. It could be worse: They could be ignoring us.
While I may be skeptical, it can't hurt to check out the content on the site; I notice that they have Scott Barber listed on the syndicated blogger page.
And for those of you who are really cynical: No, nobody from Microsoft paid me any money, gave any favoritism, or any special favors for this link. In fact, Alan Page asked me to take a look, and, if it was valuable, pass it on.
But enough about what I think - check it out, and let me know in the comments if this link is valuable to you.
Still to come: More Technical Debt ...
Thursday, October 25, 2007
Technical Debt - III
I promised to offer three ways to limit technical debt – personally, how to impact members on the team you contribute on, and how to impact members on the team you manage. I would like to start with the first. Please be aware: This is not a guide to managing your manager, but how to avoid sticking yourself in the victim role. This is about how to do good work that you are proud of, so you can take responsibility for it.
As Steve Poling pointed out in the previous comments - sometimes you have to take on debt to stay afloat. You need to ship the product today to bring in the revenue; some of that revenue can be used to pay off the technical debt. I don't mean to stick you in a binary thought pattern - the question of technical debt is not yes or no but "how much."
These posts are just about techniques to use to keep you in control of your process, instead of becoming its victim.
Let's start by analyzing a conversation that is happening, right now, all over the world:
Manager: "We need the foo feature by Friday."
Do-er: "I need two weeks."
Manager: "We need it Friday."
Do-er: "To do it right would take two weeks."
Manager: "Look, it doesn't have to be great. Just Done."
Do-er: "Well, I guess I could just hack in a conditional ..."
Manager: "Can you have that done by Friday?"
Do-er: "I guess. If I work in the evenings a little."
Manager: "GREAT! Do It."
Before we re-write this conversation, a little applied psychology here: By backing down and "finding a way", the contributor is training his manager to ask for unreasonable things. Think about it. The manager asks for more than what is possible, the contributor puts up a faux fight ... then yields. (Worse, the contributor is saying that his first estimate should not be trusted.)
Next time the manager needs something done "real quick now", who on the team is he going to go to? And what is he going to expect?
Also, notice that the quality shortcut is implicit. The contributor never says "In order to hit the date, I will short quality. I will add in a hard-to-understand, hard-to-test feature that will be undocumented and hard to maintain. I will increase the cost of all future development on this module by about two percent."
Instead, he said "Ok, boss", then maybe went home and kicked his dog. We don't know, but the cycle of miscommunication that starts with the boss (what does "gotta have it" mean, anyway?) has been continued.
As an industry, we have to stop doing this. We have to grow up.
Let's try this again:
Manager: "We need the foo feature by Friday."
Do-er: "I can have it done the following Friday – Oct 14"
Manager: "We need it Friday."
Do-er: "As it stands, given our departmental quality standards, I don’t know how to do what you are asking. I can, however, present a few options."
Manager: "Go on."
Do-er: "We could assign Sarah to write the frobble sub-feature. Joe could do the system and integration tests, so I could write the code. We could skip our quality standards, or we could ship on October 14"
Manager: "Sarah is busy doing XXY. Joe is busy doing ABC. We need that code on Friday!"
Do-er: "Ok. Send me an email approving the quality slip, and I'll do it."
Manager: "um ... what?"
Do-er: "I can hit the date if I take certain risks. If you approve and take responsibility for the risk, I can do it; I just need an email."
Manager: "What happens if it doesn't work?"
Do-er: "Well, you took responsibility for it and approved the risk. I'd say it's about 10% that we have a minor bug, 20% that it’s a show-stopper."
Manager: "There can be no show-stoppers!"
Do-er: "Ok. We could assign Joe to do the system and integration tests ..."
You see where this is going. The contributor is offering real options to management to make explicit tradeoffs. The manager, of course, doesn’t WANT to make those tradeoffs, and doesn't want to take responsibility for those tradeoffs – but we're not letting him off the hook. (Yes, I'm tough on managers here. Don't worry, I will be tough on contributors later.)
Is this confrontational? Yes ... a little. The key is to present three to five very valid options for the manager. That's not a confrontation; it's asking the boss to be the boss, to make the big decision.
Now, consider: If any other project were less important than this one, you could pull someone off it to help out. Thus, if everyone else in the department is working on something else and can't be spared, this functionality needed on Friday is actually the least important thing the team is working on! Also consider: If we back down at this point, we sacrifice our life, make a compromise that violates our professional standards, and incent management to continue the practice.
Finally, consider: Assuming you are competent and this is work on a legacy system, there really is no one else who can learn this. If the boss fires you tomorrow (he won't) and assigns someone else, it will take them two weeks to even figure out what the software is doing. He would be a fool to remove us. Unless annual review season is this very week, we have nothing to fear, and even then, we're talking about losing a percent or two off our raise; three at most.
In order to vastly decrease technical debt at the personal level, all we have to do is grow a spine. Of course, that is simple, not easy. And when it is a hack that will keep the cash flowing and prevent layoffs, it may just need to happen. (Hint: Those don't really happen that often.)
The example above got into some pretty ridiculous circles at the end. Sometimes, a manager or executive will not 'play fair', and will try to get the circles to continue until you give up, frustrated and tired. They may use coercion, power, or the threat of power.
By framing our shortcuts as choices with explicit compromises, we can show the true cost of cutting quality, and, six times out of ten, the result will be a choice with less technical debt. Maybe not none, but less.
The example above was contrived, so I will end with a real conversation from last week:
Project Manager: "Will we have the bar function today?"
Me: "No."
PM: "But ... we need it today!"
Me: "It doesn’t even have customer signoff. I couldn’t release it if I wanted to."
PM: "We promised bar TODAY!"
Me: "Was Johanna, our customer acceptance tester, in the room for that commitment?"
PM: "No."
Me: "Was I?" (I honestly did not remember; you could tell by my tone of voice)
PM: "No."
Me: "Was the engineering team lead?"
PM: "No."
Me: "Well, then, think of it this way. When the full-time analyst and the full-time developer for the bar function left, Johanna and I took over, in addition to our other full-time responsibilities, and we are going to deliver bar two or three days late. I don’t think that is a problem. I think it is something to celebrate!"
And I walked away quickly.
Sometimes, the way to be most effective in your job is to act as if you don't care about it.
Monday, October 22, 2007
Technical Debt II.5 -
I've been struggling for the past two weeks to put up a post on technical debt that doesn't sound like passive-aggressive "how to manage your boss" or "how to trick your boss."
That is not my intent.
So, while I may post one of my working copies later, I'd like to make this clear now.
Imagine a carpenter who is a professional craftsman. The carpenter is brought into a job and told to "hurry it up - we have to have the framing complete by November 1st."
Now, carpentry is more fungible than technical work; if you have a standard pattern, it is much easier to throw a half-dozen extra bodies on it to hit the date. The carpenter brings this up to the general contractor, who says "No. Work unpaid overtime if you have to, but HIT THE DATE!!"
What is the carpenter going to do?
Well, he could slack off a _bit_. He might save five or ten percent of his time by doing a poor job that would decrease the lifespan of the house substantially.
But as a homeowner, you don't want him to do that, do you?
Moreover, the carpenter probably went through an apprenticeship program for three or more years, where he saw reasonable standards, and, possibly, heard a journeyman say "no" a couple of times.
If he loses the job, so be it. In most cases, carpentry is contract work, and there are plenty of people looking to build houses. The point is, he is responsible for doing high quality work to an ethical standard. To paraphrase Richard Bach, you can either own your process ... or be its victim.
We lack an understanding of craft in technical work. The job training programs we have are largely academic, not on-the-job apprenticeships - so we don't know how to respond.
So we blame management, take shortcuts we should not, and act like victims.
Shame on us.
Shame on us!
Please keep that spirit in mind when I post the next article in the series. I am still not 100% confident in it. If you know me personally, please feel free to email me and ask for a preview before I post it; I am interested in your opinions.
--heusser
Tuesday, October 09, 2007
Technical Debt - Interlude
Something to read while I am on vacation ...
Tom DeMarco's Classic, Excellent, Famous Essay "Why Does Software Cost So Much?"
Update: I am creating a new label (or "tag") for this post, titled "Software Engineering Classics." As I add links to these classic articles and books, you will be able to search for "Software Engineering Classics" on the blog and get the whole list.
My goal is to provide a quick link of everything you should read in a good master's program in Commercial (not-government) Software Engineering.
--heusser
Tuesday, October 02, 2007
Technical Debt - II
Before I jump to solutions, I’d like to take a moment and talk about entropy.
(From the site: "If you assert that nature tends to take things from order to disorder and give an example or two, then you will get almost universal recognition and assent. It is a part of our common experience. Spend hours cleaning your desk, your basement, your attic, and it seems to spontaneously revert back to disorder and chaos before your eyes. So if you say that entropy is a measure of disorder, and that nature tends toward maximum entropy for any isolated system, then you do have some insight into the ideas of the second law of thermodynamics." It's a good read.)
If you think about it, every human accomplishment is a temporary battle against entropy.
Great Buildings are built, last hundreds of years ... and fall.
Cities become kingdoms that become empires, reign thousands of years ... and fall.
Even the Great Sphinx of Egypt, built in a warm, dry climate with little weather, is feeling the effects of age.
So it is with software. The typical software system might last ten years, fifteen if it is wonderful, but, eventually, the center will not hold. The code will not work on the new operating system, the company will fold, or the company will merge and turn off the legacy system in favor of the new.
In some ways, developing software is a battle against entropy. In some cases, just getting the code into production is a losing battle.
Once the code is in production, the battle starts all over again. Each maintenance change is an opportunity to write a clever hack or make an improvement. In each case, we can trade a little time now for a system that is slightly worse. With each change, the overall impact is unnoticeable – but add them up, and we've got a mess.
At the same time, no one wants to develop the perfect, new new thing, with beautiful code and perfect documentation – only to be six months late, miss a market window, and have the company go out of business.
The question isn't "to accumulate technical debt, yes or no?", but instead "Just how great can we make this system given our constraints (little things like time and money)?" and "How can we trade off or relax those constraints in order to have less debt?"
In other words "How much can we invest in this system to save time and money later?"
That is a hard question. I assert that it must be asked and decided consciously. Socrates said the unexamined life is not worth living, and I would add that the unexamined process results in ... crappy products.
We all know the "right thing to do." It's just hard to do it.
So I would like to propose three ways to lead conversations towards less technical debt, focused on –
A) What *you* can do as a technical contributor
B) What you can do as a technical contributor to influence other members of the team
C) Options for management
See you next time ...
Sunday, September 30, 2007
On Technical Debt - I
Andy Lester was in town last week, and he did a wonderful talk on technical debt.
Andy's main point seemed to be that technical debt (like real debt) is a drag on the project. By taking shortcuts today (in documentation, or coding, or skipping tests - or cutting and pasting when we should be generalizing) we create the appearance of progress, but slow down future progress.
Eventually, even small maintenance tasks take a tremendous amount of effort, not because the work is complex, but simply to pay off the interest. The customer "just" asks for one change, but it has to be implemented in fifteen places, and the code is hard to understand and wacky, and no documentation exists.
Andy even presents a "five step plan" to pay off the debt, much like any "real" debt reduction plan.
But I'm disappointed in just one way: I didn't see any talk of the root cause of technical debt. There must be one; pressure to meet deadlines (and the corner-cutting it tends to entail) seems to be universal.
Until you address the root cause, I suspect that any "technical debt reduction" plan will fail.
To understand how "technical compromises" happen, let's take one example.
I (Matt) am under pressure to hit a deadline.
If I cut a corner, and do a "bad job", I will still hit the deadline - a Positive, Immediate, Certain result. If there is any NEGATIVE result, it is uncertain, out in the future, if ever.
If I do it "right", I will miss the deadline. I could get a lecture from my boss, the customer, or both. I may be written up for not being a "team player" on my annual eval. That is negative, certain ... and immediate.
Behavioral Psychology tells us that positive rewards are more powerful than negative. It also tells us that immediate rewards are more powerful than delayed. Finally, it just makes sense that certain rewards are more powerful than uncertain.
Which may explain my (slight) weight problem. A Mountain Dew will TASTE really good *right now*. It's positive, certain, and immediate. Not only that, one single drink won't make me fat. Yet the combination of those choices, over time, will certainly make me fat - and habitually fat, to boot.
In software, the bad choice is the clever hack, done without improving the design. The extra if () { } block thrown around the code. Cruft. Files hanging around that should have been deleted last month. One or two of these won't kill you - in fact, they may well be a short-term gain. But a dozen? A hundred? A thousand?
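If you want a concrete picture of that extra if block, here is a made-up example of the species:

# The Friday-deadline special: one customer's rule wedged into shared code
# instead of extending the pricing logic where it belongs.
if ( $customer_id == 8872 ) {
    # BigCo needs the old discount for the demo ... remove after Friday?
    # (It never gets removed.)
    $total = $total * 0.95;
}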
It doesn't take a genius to figure out why shortcuts happen in code: The incentives are misaligned, just like weight gain. Unless you do something to change those incentives, exhorting the team to "Do The Right Thing" will be just more cheerleading, like "Zero Defects" was in the 1990's and "TQM" in the 1980's.
How can we change those incentives?
Let's talk about that next time.
Thursday, September 27, 2007
Extreme Programming - In One Page
Now, folks, don't get me wrong. I am a big fan of Extreme Programming, but I do not think that XP is the "one true way" or the "one right way" to do software development. I do think that it pushed back against the "traditional" school of software development in the right direction, at the right time.
For its time, XP was the contrarian consultant, when, where and how it was badly needed.
If you want the elevator speech to explain Extreme Programming, one place to start is the XP In One Page Poster(*).
The poster tries to cram a lot of ideas in a little space. If I had to recommend one single thing that offers the most value - that I would recommend any commercial or business software team take a long hard look at - it would be the "design philosophy" section at the bottom left.
Seriously. When I saw this today, I printed it out, ran over that section with a yellow highlighter, wrote "READ THIS FIRST!" with an arrow at the top, and put it on my wall o' attention grabbing stuff. (Mostly cartoons with an occasional big, visible chart ...)
Regards,
--heusser
(*) - The poster comes to you thanks to Ron Jeffries, George Dinwiddie, and a few other folks, and dates back to 2002.
Wednesday, September 26, 2007
Software Fiction
Jon Bruce continues to improve as a writer. He's got an excellent series going on now called "Startup" - click here, scroll to the bottom of Startup, and read upwards.
In other news ...
I am also in the middle of a back-channel discussion with Ben Simo and Shrini Kulkarni about test frameworks.
This feeds off the idea in my earlier blog post that if your framework makes it hard to test, people won't use it.
Two of the common elements I see in test frameworks are:
(A) Lots of XML
(B) Tough-to-type Syntax
I've explained (A) before, but let me talk for a moment about B.
Often, people have a web application. To test it, they may use a framework that drives the browser. The tester then writes test 'code' in a number of possible languages, often Java (WebDriver), Ruby (Watir), or Visual Basic (QuickTest Pro or WinRunner).
The problem is when these frameworks see everything as an object. Yes, that's a problem. Because instead of writing:
SendValueToTag('QTY','100');
PressButton('Submit');
my $val = GetValueAtTheTag('Grand Total');
ASSERT('Grand Total should be fifty bucks',$val, 50.00);
You have to write this:
my $bdy;
$bdy = Object("Browser").Tab(1).page('www.foo.com').html.body;
$bdy.form('form1').object('tag').label('QTY').setval(100);
$bdy.object('button').label('submit').push;
my $new_bdy;
$new_bdy = Object("Browser").Tab(1).page('www.foo.com').html.body;
my $val;
$val = $new_bdy.form('form1').object('tag').label('Grand Total').getval();
ASSERT('Grand Total should be fifty bucks',$val, 50.00);
Ah!! Ahhh! Ahhh! My eyes! My eyes!
The sad thing is, I am not exaggerating by much.
So at any given shop, one of a few things happen:
(1) Someone tries to use the framework, and goes through so much pain that they give up.
(2) Someone puts an extreme amount of effort into learning the framework and is actually successful. We'll call him Joe. After that, Joe becomes the in-house expert on the tool. If Joe is assigned to test the software, the software will be tested with that framework. Otherwise, it will probably be tested manually.
(3) You write a custom piece of code that sits on top of the framework that eliminates all the fiddly-bit DOM mappings (there's a sketch after this list), so you can just call:
browser.getthetag('qty');
browser.setthetag('foo', 5);
(4) (Hopefully) You find a better framework.
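Here is what option three can look like - a thin wrapper package that buries the object-soup navigation from the example above in exactly one place. (This is only a sketch; the method chain and names are stand-ins for whatever your tool actually provides.)

package FriendlyBrowser;

# Wrap whatever top-level object the vendor tool hands you
sub new {
    my ($class, $tool) = @_;
    return bless { tool => $tool }, $class;
}

# All the fiddly-bit DOM navigation lives here, and only here
sub setthetag {
    my ($self, $label, $value) = @_;
    $self->{tool}->Tab(1)->page('www.foo.com')->html->body
        ->form('form1')->object('tag')->label($label)->setval($value);
}

sub getthetag {
    my ($self, $label) = @_;
    return $self->{tool}->Tab(1)->page('www.foo.com')->html->body
        ->form('form1')->object('tag')->label($label)->getval();
}

1;

# And the test itself shrinks back down to something readable:
# my $browser = FriendlyBrowser->new( Object("Browser") );
# $browser->setthetag('QTY', 100);
# my $val = $browser->getthetag('Grand Total');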
I have quite a few colleagues who have had success with option three. Ruby/Watir, however, looks a lot like #4 - a non-goofy framework that allows you to express complex test cases relatively easily, in a language that looks a lot more like English than other options, so it is self-documenting.
My prediction is that the big, slow, dumb tools will continue to dominate the mediocre "no one got fired by buying IBM" space, and smart people will continue to patch them to make them work.
However, if you want to try something completely different - consider learning Ruby, or at least watching the Ruby "Switch" Video.
Wednesday, September 19, 2007
Strategy Letter VI -
"Writing applications that work in all different browsers is a friggin’ nightmare. There is simply no alternative but to test exhaustively on Firefox, IE6, IE7, Safari, and Opera, and guess what? I don’t have time to test on Opera. Sucks to be Opera. Startup web browsers don’t stand a chance.
What’s going to happen? Well, you can try begging Microsoft and Firefox to be more compatible. Good luck with that. You can follow the p-code/Java model and build a little sandbox on top of the underlying system. But sandboxes are penalty boxes; they’re slow and they suck, which is why Java Applets are dead, dead, dead ..."
Taken from Joel Spolsky's Strategy Letter VI.
It'll take five minutes of your time and it will be time well spent.
Tuesday, September 18, 2007
What's a "Test Framework"?
Shrini Kulkarni has been after me to define my terms; after all, I keep writing about "Test Frameworks" but I've never defined the term.
Wikipedia defines a framework as "a basic conceptual structure used to solve a complex issue." It also warns that "this very broad definition has allowed the term to be used as a buzzword."
When I use the term, I mean any support, infrastructure, tool or "scaffolding" designed to make testing easier, and (often) automated.
For example: let's say you have a simple program that converts distance from miles to kilometers. The application is a Windows application. Every time we make a change, we have a bunch of tests we want to run, yet we can only enter one value at a time, manually. Bummer.
Yet we could think of the software as two systems: the GUI, which "just" passes the data from the keyboard into a low-level function and "just" prints the answer, and the conversion formula function.
If we could somehow separate these two, we could get at the formula function programmatically. Imagine that the formula function is stuck in a code library, which can be shared with many different programs.
Then we could write a "tester" program, which takes an input file full of input values and expected results. The "tester" program simply calls the library, compares the actual result to the expected, and prints out a success or failure message.
This is basically how I test a lot of my code, using a Perl module called Test::More. You could call Test::More and its friends (Test::Harness, and so on) a "framework."
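For instance, a bare-bones version of that "tester" program might look like this - a sketch only, with a hypothetical miles_to_km() function standing in for the shared library:
use strict;
use warnings;
use Test::More;
# Hypothetical function under test; in real life this would come
# from the shared code library, e.g. use Converter qw(miles_to_km);
sub miles_to_km { my ($miles) = @_; return $miles * 1.609344; }
# Input values and expected results; these could just as easily be
# read from an input file, one "miles,expected_km" pair per line.
my @cases = ( [ 0, 0 ], [ 1, 1.609344 ], [ 100, 160.9344 ] );
plan tests => scalar @cases;
for my $case (@cases) {
    my ($miles, $expected) = @$case;
    # sprintf guards against floating-point representation noise
    is( sprintf('%.4f', miles_to_km($miles)),
        sprintf('%.4f', $expected),
        "$miles miles converts to $expected km" );
}
Each case prints "ok" or "not ok," and Test::Harness can roll a whole directory of scripts like this into one summary.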
We can dig into the details and test at the level of the developer's understanding, or bubble up and test only things that have meaning to the customer. One popular framework for these higher-level, business-logic tests is FIT/FitNesse.
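In FIT, those business-logic tests are tables, not code. A column-fixture table for the miles-to-kilometers example might look roughly like this (fixture name invented; the column ending in "?" is the expected output):
|MilesToKilometers|
|miles|km?|
|1|1.6093|
|5|8.0467|
|100|160.9344|
A customer can read - and even extend - a table like that without knowing anything about the code underneath.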
Of course, there is more to the application than just the business logic. The GUI could accept the wrong characters (like letters), format the decimals incorrectly, fail to report errors, handle resizing badly, or do a half dozen other things wrong. Even with one "framework," we still have the problem of testing the GUI (not to be forgotten) and testing the two pieces put back together again - "acceptance" testing or, perhaps, "system" testing.
This "outer shell" testing can also be slow, painful, and expensive, so there are dozens of free/open or commercial testing frameworks that allow you to 'drive' the user interface of windows or a web browser. With the big commercial tools, people often find that they are writing the same repetitive code, over and over again, so they write libraries on top of the tool like Sifl.
The big dog web browser automation systems are Selenium and Watir.
Years ago (back when he was at Microsoft), Harry Robinson once told me that MS typically had two types of testers: manual testers, and developers who like to write frameworks. The problem was that the developers would write frameworks that no one was interested in using. His assertion (and mine as well) is that people who straddle the middle - who like to test, and like to write software to help them test faster - can be much more effective than people entrenched on either side.
Thus, you don't set out to write a framework - instead, you write a little code to help make testing easier, then you write it again, then you generalize. Over time, slowly, the framework emerges, like Gold refined through fire.
But that's just me talking. What do you think? (And, Shrini - did I answer your questions?)
Monday, September 17, 2007
In the mean time ...
Here's a comic that made me laugh out loud; I thought you might enjoy it.
The image is clean and appropriate for work(*). It's a little story about the difference between making a real difference, and, well, hyper-awesomeness marketing.
The cartoon is from BasicInstructions.com, which is a cartoon-a-day site.
--heusser
(*) - I do not and will not link to things that are inappropriate for work.
Friday, September 14, 2007
Be Careful what you wish for ...
I haven't had much time for blogging lately. A few great, interesting, cool things that are swamping me right now -
- Jim Brosseau's book, "Taking Ownership," is completely through the greenlight stage and moving to publication. Yaaay!
- I'm speaking at the Grand Rapids Java Users Group on Tuesday, the 18th of September. They literally sent out a call for speakers last week, and I responded that I had a couple of lightning talks that I thought might string together well for ten or fifteen minutes. So they asked me for an abstract for a full talk ... it should be interesting.
- I'm coaching a team of 4- and 5-year-old children from Allegan AYSO Soccer. GO ALLEGAN TEAM FIVE SILLY FROGS!
- I'm serving as a volunteer and a speaker for GLSEC this year.
More about GLSEC next post, but for the time being, let's just say I've been busy ...
Thursday, August 30, 2007
Interesting People at GTAC - I
Or: Test Eye for the Framework Guy
Douglas Sellers, The Inkster Guy, gave a talk on writing a domain-specific language for testing. His idea is to write the test cases first, in a language that looks like English, then figure out how to write an interpreter for those test cases.
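The kind of English-like test case he was describing might look something like this (my sketch for illustration, not Douglas's actual syntax):
go to the page 'www.foo.com/order'
set 'QTY' to '100'
press 'Submit'
then 'Grand Total' should read '$50.00'
The interpreter's whole job is to map each of those lines onto calls into the driver underneath.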
(Stereotype warning / warning / you’ve been warned ...)
In my mind, this is a huge part of the problem with test frameworks. Developers write test frameworks so that someone else ("those QA people") can write test cases.
The problem is, writing the test case in XML or Java is just no fun. Really, it is no fun. It's so not fun that the devs don't want to do it. In my mind, this is a problem with process design – when you design a process for somebody else to do, the resulting process isn't very good, because you trade all the good and fun things away to get management-y things.
Here's my conclusion: If you want to write a test framework, first write some test cases, and keep tweaking them until they are fun to write – or at least easy. Then write the framework.
Then implement a real, non-trivial test project in your framework. Yourself. Yes, you. The developer. If you can build a test suite and actually use it, then open-source it and talk about your tool.
Another neat person I met at GTAC was Will Roden, author of the Software Inquisition blog. At lunch on the second day of the conference, Will demoed SiFL for me. SiFL is a "foundation library" that Will wrote on top of Quick Test Pro, in order to make writing test cases in QTP quick, easy, and, well, fun.
Basically, it's just a series of reusable functions that make driving a browser really, really easy. The thing is, they are designed to be easy for the tester, and optimized for the tester. This is a real library that, when combined with QTP, looks just as useful to me as (indeed, more useful than) some of the frameworks that we see projected on the big screen.
Why? Because it was developed of the tester, by the tester, for the tester. No XML test cases for you ...
UPDATE: I found a great little quote that explains the whole framework-authors-vs-testers issue:
Strathman's Law of Program Management: Nothing is so easy as the job you imagine someone else doing.
Wednesday, August 29, 2007
GTAC - Bonus Section #3:
Another speaker claimed that "the only exhaustive testing is when the tester is exhausted" - we agree. By Exhaustive Frameworks, I mean frameworks that can do everything. The problem is that by being _able_ to do everything, we use abstractions that make it hard to do any specific thing.
Sometimes, a quick-and-dirty, only-works-for-this-app, only-works-for-this-screen automation provides the highest ROI.
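For example, a quick-and-dirty check might be nothing more than a few lines of Perl that fetch one page and look for one string - throwaway by design. (The URL and string here are invented for illustration.)
use strict;
use warnings;
use LWP::Simple qw(get);
# Only works for this app, only works for this screen - and that's fine.
my $page = get('http://www.foo.com/order/confirm')
    or die "not ok - page did not load\n";
print $page =~ /Grand Total/
    ? "ok - Grand Total is on the page\n"
    : "not ok - Grand Total is missing\n";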
My approach is rapid test automation - but do what works for you.
(The concept is similar to James Bach's Rapid Software Testing - Used with permission)
Monday, August 27, 2007
GTAC - Bonus Section #2:
We have seen a wonderfully isolated, encapsulated, poly-morphed, design-patterned, auto-tested, mocked app ...
- That could have been written procedurally in 500 Source Lines of Code
- But now consists of 10 classes and 20 files spread over 4000 SLOC
Using mock tools results in software with more code (Pettichord, "Homebrew Test Automation", 2003)
If you can keep everything in your head, you don't need radical separation. Radical separation is a tool to use when your components get big and unmanageable; it results in a *lot* of components, with each individual component being smaller than what you started with.
Friday, August 24, 2007
GTAC - Bonus Section #1
Our GTAC Talk evolved over an extended period, and had a lot more material than the time allowed. So, just for you Creative Chaos readers, I'm going to blog our bonus section.
Comments on the Slide Above:
If you want to mock out ('trust') every domain - including the database on DB-intensive apps, the filesystem on file-intensive apps, and the internet on web-service apps - tread lightly.
Steve Freeman, co-author of Mock Roles, not Objects, points out that his main use of mocks is to help him discover roles and relationships between objects in his domain code, rather than speeding up external dependencies. He tends to write little integration tests to check things like that the database is hooked up correctly. (Editor's note: Steve actually wrote that paragraph.)
So he's saying use mocks for design - de-emphasizing the use for testing. Mocking out the boundaries doesn't help you with that design decision, because you are not designing the hard boundary.
So don't do this unless you're testing some otherwise untestable requirement - like handling exceptions when the filesystem is full, when it is unrealistic to fill up the real filesystem. (We have a slide on this in the YouTube talk, called "Mocks As Simulators.")
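To make "Mocks As Simulators" concrete, here is a tiny hand-rolled sketch in Perl - invented names, not code from our talk. The mock handle pretends the disk is full, so we can test the error-handling path without filling a real filesystem:
use strict;
use warnings;
use Test::More tests => 1;
# The mock: a stand-in object whose every write fails,
# simulating a full filesystem.
package FullDiskHandle;
sub new   { return bless {}, shift }
sub print { return 0 }   # 0 = the write failed
package main;
# Hypothetical code under test: it should notice the failed
# write and report an error rather than swallowing it.
sub save_report {
    my ($handle, $text) = @_;
    return $handle->print($text) ? 'saved' : 'error: disk full';
}
is( save_report( FullDiskHandle->new, 'quarterly numbers' ),
    'error: disk full',
    'a failed write is reported, not swallowed' );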
If you must mock out the core system, have a balanced breakfast, cover your tests somewhere else, or run the "real" tests overnight.
Thursday, August 23, 2007
Why GTAC is different
As I write this, it’s 3:36PM on August 23rd, and I am sitting at the New York Google Office, just after co-presenting a talk on interaction-based testing.
I am sick. Exhausted. Drained. Barely able to give the follow-up speakers the attention they deserve – but I’m trying, they did it for me.
And, to borrow a phrase, this is also "The Awesome."
GTAC is fundamentally different. There is one track (no "concurrent sessions"), so everyone has an identical frame of reference. Attendance is capped at 150, so you have a real chance to meet everyone. The conference is run at a financial loss by Google as a gift to the community – so instead of attracting paying customers, Google can be selective about attendees. Because it is capped at 150, they can be very selective.
Moreover, this is no junket in Nantucket. From the moment we arrived until it was time to sleep, Google had events. The "business day" for the conference runs 7:30 AM to 7:30 PM – followed by mingling, followed by a reception, which doesn't end until 10:00 PM. If you came to New York to see the Statue of Liberty (or to appear on the Today Show) you're probably out of luck – but if you want to talk about software testing, c'mon, sit down, here's a salt shaker, let's test.
Finally, the conference ends on a Friday, which means if people want to fly home, they have to do it on Saturday. Again, people who don't really care but want a junket are not very likely to give up "their" Saturday for travel.
Bottom line: GTAC is the most impressive test automation conference I know of, period. It's been an honor to speak, it was fun, and I'm glad that my speaking portion is done and I can enjoy the rest of the conference.
By the way, if you are into speaking, GTAC also has one of the best audiences that I have ever presented for. Forgiving, interested, actively listening, thinking critically, and consisting of true peers. I have to say, this is just about everything I could ask for in an audience – and that makes a huge difference. (Come to think of it, the only conference where I've had a better audience was the Indianapolis QA Association, who have all that and a Midwest sense of humor ...)
UPDATE: If you couldn't be in New York city RIGHT NOW, you can watch the talk on Interaction Based Testing by Heusser and McMillan.
Monday, August 20, 2007
... and Steve Poling Replies
This went out over SW-IMPROVE, my software discussion list. I will post my reply tomorrow ...
I heartily agree that the first two items are anathema to effective software development.
However, I see (with significant disclaimers) some benefit in the last two. When you say "hand off to the next person in the chain," that premises a serial relationship between workers who, assembly-line fashion, perform tasks that incrementally transform raw materials into some manufactured good. That might work if software were a manufactured good; as it is, it's a remarkably stupid way to approach software development. Rivets and holes and the like in a manufactured good are standardized, subject to no uncertainty or renegotiation at the point of production. But if you and I work together on two parts of a software system, we may start with a rough interface between our stuff, and then work out the details between us.
I assert the value of the "artifacts that are intermediate work products" to record the AS-BUILT connections and theory of operation of each piece. A few reasons come to mind: some poor slob is going to have to maintain this mess. It's nice to have a written starting point to help that guy get into the "Zen" of the code. Especially, when that poor slob is either of us six months after our initial development push.
Second, suppose one of us gets hit by a truck or leaves the company. The only true reality of a software solution is the object code. But when you look at the source code, you easily lose track of the forest for the trees.
After a software system exists for a few years, someone gets it in his head to "rewrite this mess, and get it right this time." I have fingerprints on three generations of software that solves similar/overlapping problems. Would you believe I've seen similar bug reports issued on each? Each time, we ran into the same issues that are inherent to the problem being solved. What often looks like cruft in the code are bug-fixes. Those crufty-looking bug-fixes reflect difficult parts of the problem being solved, or limitations of the initial design. The fact that they're bug-fixes indicates that the requirements and design were inadequate at these points. I have often heard "build the new one just like the old one, but ..." If I can meaningfully study the old one's requirements and design while aware of the bugs of that implementation, I can write more complete requirements and design.
It's my opinion that the most enduring aspect of any software development system is the requirements discovery. I have picked up the requirements for a DOS system written in the 1980s and used them to directly identify test cases for a three-generations-later rewrite. Thus, when we discovered that we had no decent requirements statement for a project (written in the early 1990s), I told the woman tasked with this: "Managers come and go. (I looked at my boss.) Programmers come and go. (I pointed to myself.) But the single most important thing that you will do in your entire tenure with this company will be this requirements statement."
This happened in the context of our having an existing code base that has been making the company money for over 15 years. Making the new one exactly like the old one would not work, because in so doing, we found differences that reflected undeniable errors I made in the early '90s. Back then, a fellow who has since left the company and I sweat bullets on the requirements, coming up with "a nod and a handshake" agreements at each point. We didn't write it down; I just coded it that way. Thus, I had nothing to go by, except code known to have bugs and fading memories, when I did the rewrite this spring. My fading memories are a sucky basis for establishing requirements. Even less would be available to the company had I died of cancer five years ago. Conversely, if we had documented what we agreed to back then, we'd have saved the company a LOT of rework when the new implementation introduced variations where unstated business rules were ignored.
With this in mind, I happily assert, "if you didn't */record/* it, it didn't happen." I know you like to satisfy documentation requirements "creatively" so I'll gladly accept VHS (or Betamax) tapes of conversations between principals where requirements are discovered and concomitant decisions are made. Similarly, I'm cool with the meeting minutes of design meetings and reviews.
Friday, August 17, 2007
Interlude
This looked so interesting I had to post it:
"Myths in Software Engineering" -
A few of my favorite myths:
- Software development consists of discrete, separate activities that can be organized into phases
- The best way to make the *overall* process effective is to have efficient specialists for each phase
- These specialists should produce artifacts that are "intermediate work products", to be handed off to the next person in the chain
... and my personal favorite:
- If you didn't write it down, it didn't happen.
If you enjoy this kind of thing, there is a great little book called "Facts and Fallacies of Software Engineering" that goes into much more detail. If I recall correctly, fallacy #1 is "Without metrics, you can't manage."
What are some of your favorite myths in software engineering?
"Myths in Software Engineering" -
A few of my favorite myths:
- Software development consists of discrete, separate activities that can be organized into phases
- The best way to make the *overall* process effective is to have efficient specialists for each phase
- These specialists should produce artifacts that are "intermediate work products", to be handed off to the next person in the chain
... and my personal favorite:
- If you didn't write it down, it didn't happen.
If you enjoy this kind of thing, there is a great little book called "Facts and Fallacies of Software Engineering" that goes into much more detail. If I recall correctly, fallacy #1 is "Without metrics, you can't manage."
What are some of your favorite myths in software engineering?
Thursday, August 16, 2007
Testing the Stapler
To answer the Stapler question, Mike Kelly referred me to this post where he had 144 tests. Yes, 144 tests, and, offhand, they look to be real, valid tests. To get there, Mike starts by looking up the manufacturer of the stapler, getting the specification, and seeing if the physical object conforms to the spec.
David Drake took a different tack: he started with the egg, and asked what purpose I wanted to use the egg for.
A different way to say this would be "What are the requirements?", which is a solid way to start the discussion. For example, Mike could have tested the stapler to the Stanley Bostitch specification, but our requirements could be for an industrial-strength, heavy-duty stapler. In other words, we bought something Commercial, Off the Shelf that wasn't fit for purpose.
That might not be quite fair to Mike, as I am sure he asked about fitness for purpose before starting his list - but I would be remiss if I did not mention it.
To do this at a conference, I would probably have a thin, pointed knife sitting next to a stack of envelopes, and say something like "Take your pick" between the egg, stapler, salt shaker, and knife. My assumption is that most people would lick the envelopes shut and try to open them with the knife - and few would ask me what the requirements are.
At which point, I reveal that I'm looking for a butter knife.
You may say "No fair! That's misleading!!"
To which I say, hey, it's a simulation.
If someone says that it's an unrealistic simulation, I would ask:
'Have you ever actually tried to build software in corporate America?'
:-)
More importantly, I think the response to the challenge is what matters. Both Mike and David Drake impressed me. I've seen a spectrum of answers and have put them in general categories. More about that tomorrow.
PS: If you've got a conference and would like to see more of this thing, let me know. Otherwise, I'm thinking of setting up shop in the hallway at a few places next year ...
Wednesday, August 15, 2007
... And Carry a Big Stick
Yesterday I copied over my iTunes Library as well as some key conference CD notes. Big Mistake; I'm up to 2GB and haven't started yet. :-)
Both James Bach and Jon Kohl recommended Portable Apps, a complete 'lite' software suite designed for a USB key. The programs are small and can run by double-clicking - without installing, without needing write access to C:\, without having to be an administrator.
Portable Apps includes Mozilla Firefox, PuTTY (a telnet and SSH client), WinSCP3 (a secure file copy utility), and Open Office. I'm not 100% happy with these options - just about every system already has a web browser (even if it's a bad one), and Microsoft Office is just about ubiquitous. So saying "Look, I bought a USB key so I can use (1) my favorite apps in case (2) equivalent apps aren't available" doesn't sound so great when (1) the differences ain't much and (2) equivalent apps are pretty much everywhere.
I'll keep looking. For the time being I am considering taking those conference materials off the keyfob. In the meantime, it's nice to have a home for my passwords, writings/presentations, and a backup of my website. My next move is probably to move to an encrypted password manager, and surprise, there is one available for Portable Apps. (Can anyone recommend a good, free, Windows automation tool besides Tasker?) I may also download the encryption/backup software that comes with Portable Apps.
As for the stapler, egg, and salt shaker - more tomorrow ...
UPDATE: The 'standard install' of Portable Apps seems to have a lot of applications I am not really interested in, like games and Open Office. Also, I'm not quite sure how to uninstall the things that are installed. I'll keep playing around; more reviews to come.
Monday, August 13, 2007
Look Ma, the works of Shakespeare on a stick!
At STAREast this year, James Bach had a "portable testing training kit." It was roughly the size of a small purse (or, er ... a "man's bag"), and I'm sure it had various test challenges in it.
I didn't get the chance to see it, but if pressed to make my own, it would hold things like a stapler, a salt shaker, and a hard-boiled egg. I would hand these to my student and say something like:
"Test This"
(This also, by the way, is my general litmus test for a tester. When it comes to actually talking about the real work of testing a thing - if the person is utterly confused, or smiles genially and changes the subject, or asks me what _I_ think - well, that tells me something. If they roll up their sleeves and dive in, that tells me something else ...)
But I've been thinking about a different testing toolkit lately.
I keep my source materials, some tools, and a copy of my website on a USB drive. The drive has recently been pushing its limits, so I just bought a replacement - a 4 GB drive, for thirty bucks from Best Buy.
4 GB is a lot of space. A lot.
This got me thinking about what I do when I come into a new company. The first thing I do is download a bunch of free tools - PuTTY, WinSCP3, Tasker, SnagIt, TextPad, ActivePerl, Dia, GVim, Audacity. I also have a bunch of PDF and Word documents that I read and re-read every year. With 4GB of space, I could put all of those on a memory stick, and more.
These are testing tools, but there are also security testing tools - and tutorials - that would fit easily on the stick. Snort, Crack, intrusion detection, SQL injection, and other tools come to mind.
If the tools were good enough, I could sell the stick as a value-added product or, more likely, just publish an interesting how-to list on a website. Yes, getting a booth at DefCon and selling security tools has occurred to me, but for the time being, I'll keep my hat white, thank you very much.
So here's my two questions, take your pick:
1) If you developed a testing tool on a stick, what free (or cheap-ware) tools would you include? What is missing from my list? What entire categories are missing? Is there a different kind of stick to develop? Yes, I could do a developer stick with Apache, PHP, and MySQL, but most of those come with Linux anyway.
2) If you don't like that, here's another one: The stapler, the hard-boiled egg, or the salt shaker. I've asked you to test it. What do you do?