Schedule and Events



March 26-29, 2012 - Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference of the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At Liberty; available. Contact me by email: Matt.Heusser@gmail.com

Sunday, December 30, 2007

Technical Debt - VI

After a brief hiatus, I would like to return to the subject of technical debt.

First off, I don't think technical debt is a question of "Yes or No", but instead a question of "How Much." After all, everyone loves the company where the boss gives them all the time they need to implement the "right" solution without any pressure ... right until it goes out of business.

In that way, technical debt is not like real debt. In the real world, debt carries interest, and the best financial choice is to completely eliminate it. After all, ten bucks in the bank with no debt beats $10,000 in savings and $9,995 in loans - the interest on the loan will accumulate faster than cash. (Or, to quote Steve Poling "Debt is bad. Investments are good.")

But I believe the metaphor is generally solid. It is a concept that people can understand. It involves better and worse choices - the worse choice gives you instant gratification (like buying that shiny new iPod on credit) and yet hurts you in the long term. Unless you work really hard to check your financial statements and build some sort of family budget model, the cost of eating out a bit too much, concert tickets, and recurring monthly payments is really hard to measure. ("It's only $20 a month" = $240 a year. And exactly how many of those do we have, anyway? Land phone, two cell phones, internet, cable TV ...)

Just like real debt, technical debt is an addiction cycle. Feeding the addiction - buying something, or making a quick hack - can provide gratification in the short term, but makes the long term worse. So you feel bad, and to feel better, you go to the mall. (Or, to hit the next impossible deadline, make another quick hack ...)

I had a friend in college who paid tuition with a credit card. When she got a statement asking for her monthly payment - I kid you not - she went to the ATM and withdrew money from the same card, deposited it in the bank, and sent a check to the credit card company.

Ho. Ly. Cow. Now, if we do a quick spreadsheet model of that we see ... wow. All she needed to change her behavior was the spreadsheet and a better idea.
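In the spirit of making hidden trade-offs visible, here's a minimal sketch of that spreadsheet model in Python. The interest rate, cash-advance fee, and minimum payment are all assumptions for illustration, not her actual terms:

# A toy model of paying a credit card bill with a cash advance from
# the same card. All rates here are assumed, for illustration only.
balance = 10_000.0          # tuition charged to the card
monthly_rate = 0.18 / 12    # assume an 18% APR
advance_fee = 0.03          # assume a 3% cash-advance fee

for month in range(1, 13):
    payment = balance * 0.02                # assume a 2% minimum payment
    balance += payment * (1 + advance_fee)  # withdraw the payment as an advance
    balance -= payment                      # mail the check
    balance *= 1 + monthly_rate             # interest accrues
    print(f"Month {month:2}: balance = ${balance:,.2f}")

Run it and the balance never goes down; every "payment" adds the advance fee, and the interest compounds on top. That's the whole "wow" in three variables.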

Of course, Financial Debt is not the only form of addiction. Another common compulsion is overeating - which I just discussed in a recent post. Jerry Weinberg covers a systems analysis of overeating in his book - Quality Software Management, Volume I.

How do you get out of the weight gain trap? Surprise, surprise: you make the hidden trade-offs visible, create an environment where successes are celebrated and set-backs are supported - and provide options. For example, at Weight Watchers, they have a weekly weigh-in and provide healthy alternatives; carrots and celery instead of chocolate frosted sugar bombs and Mountain Dew. Now, pounds lost ain't a great metric; muscle weighs more than fat, and it is possible to shed inches off your waist but maintain your weight. As I understand it, Weight Watchers does take a holistic approach, and does measure multiple things - including calories.

Weight Loss. Finance. Human motivations. Metrics. We've got a lot more to cover in this area.

Is it time to start thinking about a peer workshop?

Friday, December 28, 2007

Lightning Talks Web Page Up -

I am now officially in recruiting mode for lightning talks at ST&PCon, and have added a web page about lightning talks - here.

Thursday, December 13, 2007

Metrics

I'm overweight.

That means: 210 Lbs, 5'11". Lots of driving, airplanes, lots of coffee, lots of typing (coding, testing, writing), and raising kids will do that to you.

Yes, I coach soccer, which is an hour of light exercise a few times a month in-season.

I suppose I could look at someone else more overweight and say "hey, I'm not that bad off."

I could make a New Year's commitment to exercise and eat better; but I gave that one up this year in May for STAREast and never made it back.

The problems? First, over-eating isn't *visible* to me - or to anyone else - except in a gradual form that is hard to notice. I don't have much energy and my clothes don't fit ... but I haven't had much energy in a while, and I can always buy larger clothes. So I am trapped in an addiction cycle; I feel bad, so I eat, and feel better for a short while, but worse in the long term. So, the next day, I feel bad, so ...

Second, exercise is not convenient. Especially in the winter.

Now, if I could just make diet and exercise something visible to my friends, family and colleagues - something I could bask in glory over for success, and something they could hound and decry me over for failure - then I might have a chance to break this addiction cycle. I think the key is to make it public.

So, here are a few things I'm going to do:

1) I purchased an elliptical trainer, so "It's too cold" is not a good excuse.

2) I created an account on Traineo.com.

Traineo is a metric maniac's dream website.

You enter your weight as often as you check it, and it creates pretty graphs.

You enter your amount of exercise, and it creates pretty graphs - even calculating the calories you burned based on the type of exercise, time you spend, and intensity.

You can create a goal, and it will show how far you are from that goal, and how much time you have remaining.

And you can create custom metrics. Here's my site.

Here's my basic strategy:

If I work out three to four times a week on the elliptical, that should be enough to maintain, but not lose, weight.

So I want to do something else to actually lose the weight. How about giving up Mountain Dew during the work week? A 20 oz bottle, twice a day, five days a week - that's 200 oz less Mountain Dew each week.
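For the record, here's the back-of-the-envelope math as a script. The calorie count per bottle is an assumption - check the label - and 3,500 calories per pound is the usual rule of thumb:

# Rough arithmetic on the Mountain Dew plan. Calorie figures are
# assumptions for illustration, not nutrition advice.
oz_per_bottle = 20
bottles_per_day = 2
days_per_week = 5
calories_per_bottle = 290   # assumed, for a 20 oz regular Mountain Dew

oz_per_week = oz_per_bottle * bottles_per_day * days_per_week
calories_per_week = calories_per_bottle * bottles_per_day * days_per_week

print(oz_per_week, "oz skipped per week")              # 200
print(calories_per_week, "calories skipped per week")  # 2,900
print(round(calories_per_week / 3500.0, 2), "lbs/week, all else equal")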

I also often eat junk for breakfast. So I created a score for my breakfast eating; 0 is Cheerios or fruit; 5 is a super-sized McDonald's breakfast.


What this has to do with software development

I've been using traineo for four days now. Just four days in, I found that I was eating candy bars and other snacks.

That doesn't show up in the metrics!

So the metrics have some value, but are imperfect. Once you realize how to game them, it's pretty easy to abuse them and remove the value, if not make them downright misleading and harmful.

Does that sound familiar?

Over the holidays, I may not have time to blog, but I'll try to keep the traineo site up. I believe there will be additional insights into software that we can mine from it.

And, if you'd like to encourage me, please feel free to check out the site and see how I'm doin'. There is even a role called a traineo "motivator" where traineo emails you my stats weekly. If we've actually met in real life, and you're interested in being a motivator, let me know.

--heusser
PS - If you can't see the tie-in between technical debt, metrics, and weight yet - don't worry, it's coming ...

Greased Lightning (Talks)


Lightning Talks are ten five-minute talks in a sixty-minute slot. As a speaker, I enjoy them because they sharpen my mind. There's no time for an introductory joke, no long agenda, no 15 minutes of "body" with 15 minutes of Q&A at the end. Nothing to hide behind. You get to make one point, make it well, and get off the stage. If you're full of it, the audience knows pretty quick. Then again, if you do a bad job, it's over quickly.

As a learner, I enjoy lightning talks for, well ... pretty much the same reasons. Plus, there's the chance to learn five or six things in a one-hour slot. Things that must be easily defensible and easy to implement, because they have to be explained in five minutes.

I will be moderating lightning talks in San Mateo, California in April for the ST&P Conference. That means I need speakers - and that means (maybe) you! I'm afraid I cannot offer you a conference discount or freebies; you'll have to speak for its own sake. But, if you want to try speaking by dipping your toe in the pool, or you are speaking at the conference but have the one odd thing in the corner that's itching you - please, drop me a line.

Note: I am using the term with written permission from Mark Jason Dominus, who introduced lightning talks at YAPC in 2001. If you are looking for more information on lightning talks or ideas, here are a few web pages:

About Lightning Talks - Dominus
Giving Lightning Talks - Mark Fowler

Here's an interesting lightning talk on YouTube: Selenium vs. WebDriver

Again, I'd love to hear from you. If you're interested in giving a lightning talk at ST&P, please, send me an email.

Tuesday, December 11, 2007

Testing Philosophy II -

About every four months, Shrini Kulkarni convinces me to drop the term "Test Automation" from my vocabulary. After all, testing is a creative, thinking process. We can automate some steps of what we are doing, speeding up repetitive work - but those don't turn out to be the investigative, critical thinking steps(*).

Still, I want to use a word to describe when I use automation to go faster. I use awkward terms like "automating the repetitive steps" or "build verification tests" for a few months, until I end up calling it test automation. Then Shrini emails me, and the cycle repeats.

Since I am on the down-stroke of one of those cycles, I thought it would be appropriate to link to an old paper of Brian Marick's:

I want to automate as many tests as I can. I’m not comfortable running a test only once. What if a programmer then changes the code and introduces a bug? What if I don’t catch that bug because I didn’t rerun the test after the change? Wouldn’t I feel horrible?

Well, yes, but I’m not paid to feel comfortable rather than horrible. I’m paid to be cost effective. It took me a long time, but I finally realized that I was over-automating, that only some of the tests I created should be automated. Some of the tests I was automating not only did not find bugs when they were rerun, they had no significant prospect of doing so. Automating them was not a rational decision.

The question, then, is how to make a rational decision. When I take a job as a contract tester, I typically design a series of tests for some product feature. For each of them, I need to decide whether that particular test should be automated. This paper describes how I think about the trade offs.


The paper is "When Should A Test Be Automated?" - and it is available on the web.

Now, that isn't exactly my philosophy, but I think it's good readin'.

Speaking of good readin'; if you are doing test automation by writing code in a true programming language to hit a GUI, you might enjoy this classic by Michael Hunter. (If you are not, it's still an interesting read, but everything after about page 12 is very specific to writing code libraries for application 'driving'/scripting.)

And now, Shrini, I'm back on the up cycle. Really. I promise.


--heusser
(*) - It doesn't help that a lot of "test automation", especially in the 1990's, was snake oil - products designed by marketeers to show how testing could be "sped up" with massive ROI numbers. I can't tell you how many boxes of GUI test automation software I have seen sitting, unused, on shelves. Lots - and I've heard even more such stories from colleagues.

When I talk about test automation, that's not what I mean - see philosophy I for a better description.

Monday, December 10, 2007

Ruby Vs. Python for Testing

There is an interesting discussion going on right now on the Agile-Testing Yahoo Group about the differences between Python and Ruby.

It is oddly reminiscent of a conversation I've had a dozen times recently - about the advantages of Ruby vs. Perl. All of the old-school computer scientists say that there isn't much difference, while the newbies who don't know Python (or Perl) believe Ruby is the Cat's Meow and the New New Thing.

I just put out a post with my two cents, which I will repeat here:


I was talking with a colleague about this, and he pointed out that in Ruby you don't need semicolons at the end of lines or parentheses around function calls.

So, instead of this kind of thing (this example is in Perl):

CookMeal("Eggs", "Ham", "Bacon");
ServitUp();

You can write this:

CookMeal Eggs, Ham, Bacon
ServitUp

---> To a computer scientist, this looks like BASIC - junk made up for 6th graders to learn programming, not real computer science. We cry out "Give me your strong indentation requirements and clear end of line parsers! Give me Python or Perl or C++!"

Yet - to a customer who is defining acceptance tests, the second example looks a lot more like the English language and less like a magical incantation.

And that, I honestly believe, is the major reason that DSLs for customer acceptance testing are more popular in Ruby than in Python.



--heusser
Note - To get rid of the quotes, you'd have to define an enum type, which you 'could' do in any language ...
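For what it's worth, here is the same trick sketched in Python - constants get rid of the quotes, but the parentheses stay, which is exactly the point of the comparison. All the names are made up:

# Constants let callers skip the quotes; Python still demands parens.
Eggs, Ham, Bacon = "Eggs", "Ham", "Bacon"

def cook_meal(*ingredients):
    print("Cooking:", ", ".join(ingredients))

def serve_it_up():
    print("Order up!")

cook_meal(Eggs, Ham, Bacon)   # closer to English, but the parens remain
serve_it_up()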

Wednesday, December 05, 2007

My theory of software testing - I

What's the right mix of exploratory testing, "planned" manual testing, and test automation?

My short answer is "it depends." Now, you don't need to point out that "it depends" is non-helpful - I realize that - and I am going to try to go beyond it.

The reason I say "it depends" is that it depends on the type of problem you have. So one thing I can do to go further is to list a bunch of possible problems, along with the kinds of testing I have seen that fit.

1) The build verification test, or BVT.
These are tests you run immediately after every build to make sure the software isn't completely hosed. In a typical Windows app, this is: Install / File New / Add some stuff / File Save / Quit / Re-Launch / File Open / The screen should look like THIS. You add more tests over time, but the point is: if these tests fail, there was probably one big, massive regression, and any additional testing information is suspect. The test team will work on other projects until the devs can get the BVT to pass.

BVT tests are incredibly boring, and often worth automating.
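To make that concrete, here is a sketch of a BVT in Python. The App class is a trivial in-memory fake standing in for whatever GUI-driving tool you actually use; the shape of the test is the point, not the driver:

class App:
    # Trivial in-memory fake; in real life this drives the actual GUI.
    def launch(self):
        self.text = getattr(self, "saved", "")
    def file_new(self):
        self.text = ""
    def type_text(self, s):
        self.text += s
    def file_save(self):
        self.saved = self.text
    def quit(self):
        self.text = None
    def file_open(self):
        self.text = self.saved

app = App()
app.launch()
app.file_new()
app.type_text("some stuff")
app.file_save()
app.quit()
app.launch()
app.file_open()
assert app.text == "some stuff"   # the screen should look like THIS
print("BVT passed")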

2) The Fahrenheit-To-Celsius Conversion test.
Sure, you could have a human being test a hundred examples manually, but if you have a spreadsheet that you can use as a source of Truth, why not write a program that loops through all values from -10,000 to 10,000, by increments of 0.001, calls the function, does the math in the spreadsheet, and compares the two? Note that this does not provide information about Boundaries, which may be best explored - but it can help you build some confidence about the happy middle.
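A sketch of what that sweep might look like in Python. Here the "spreadsheet" oracle is just the same math derived independently, and f_to_c is a stand-in for the production function you are actually testing:

def f_to_c(f):
    # Stand-in for the real conversion function under test.
    return (f - 32) * 5.0 / 9.0

failures = 0
for i in range(-10_000_000, 10_000_001):   # -10,000 to 10,000 by 0.001
    f = i / 1000.0
    oracle = (i - 32_000) / 1800.0         # same formula, derived separately
    if abs(f_to_c(f) - oracle) > 1e-9:
        failures += 1

# 20,000,001 checks takes a while in plain Python; coarsen the step
# if you're impatient.
print(failures, "mismatches in 20,000,001 checks")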

3) Model-Based Testing.
Similar to #2, if the application is simple enough you can create a model of how the software should behave, then write tests to take "random walks" through the software. (Ben Simo has several good blog posts on this topic, including here.)

Despite its appeal, Model-Based Testing requires someone with true, deep testing expertise, true development expertise, modeling skill, and, usually, a GUI-poking tool. So, despite all its promise, the actual adoption of model-based testing has been ... less than stellar.
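Here is a toy random walk in Python, to show the mechanics. The model is a four-state "document editor"; FakeApp stands in for the real system under test (in real life, app.do() would drive the actual software, and the assert is the test):

import random

ACTIONS = {            # state -> {action: next_state}
    "closed": {"new": "empty", "open": "saved"},
    "empty":  {"type": "dirty", "close": "closed"},
    "dirty":  {"save": "saved", "close": "closed"},
    "saved":  {"type": "dirty", "close": "closed"},
}

class FakeApp:
    # Stands in for the real software; here it just obeys the model.
    def __init__(self):
        self._state = "closed"
    def do(self, action):
        self._state = ACTIONS[self._state][action]
    def state(self):
        return self._state

def random_walk(app, steps=100):
    state = "closed"
    for _ in range(steps):
        action, expected = random.choice(list(ACTIONS[state].items()))
        app.do(action)                      # drive the software
        assert app.state() == expected, (state, action)
        state = expected

random_walk(FakeApp())
print("100 random steps; model and app agree")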

4) Unit Testing.
This is as simple as "low-level code to test low-level code" - often much further down in the bowels than a manual tester would want to go. It provides the kind of "safety net" test automation suite that makes a developer comfortable refactoring code. And devs need to do this; otherwise, maintenance changes are just sort of hacked onto the end, and, five years later, you've got a big ball of mud.
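A unit test, in that "low-level code to test low-level code" sense, might be as small as this (reusing the temperature conversion from #2):

import unittest

def f_to_c(f):
    return (f - 32) * 5.0 / 9.0

class TestConversion(unittest.TestCase):
    def test_freezing_point(self):
        self.assertAlmostEqual(f_to_c(32), 0.0)

    def test_boiling_point(self):
        self.assertAlmostEqual(f_to_c(212), 100.0)

if __name__ == "__main__":
    unittest.main()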

5) Isolation Problems. If you have a system that requires a stub or simulator to test (an embedded system, for example), you may want or need to write automated tests for it.
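A tiny sketch of what that isolation looks like - a fake thermometer stands in for hardware the build machine doesn't have. The names are invented for illustration:

class FakeThermometer:
    # Stub for a hardware sensor we can't attach to the build machine.
    def __init__(self, readings):
        self._readings = iter(readings)
    def read_f(self):
        return next(self._readings)

def overheating(sensor, limit_f=150):
    return sensor.read_f() > limit_f

assert overheating(FakeThermometer([200]))
assert not overheating(FakeThermometer([75]))
print("stubbed sensor tests pass")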

6) Macros. Any time you want to do something multiple times in a row to save yourself typing, you may want to do automation. Let us say, for example, you have a maintenance fix to a simple data extract. The new change will stop pulling X kind of employees from the database, and start pulling Y.

You could run the programs before and after, and manually compare the data.

OR you could use diff.

And, for that matter, you could run a SELECT COUNT(*) query or two against the database and see that:

The total number of new rows = (old rows + Y members - X members);

This is test automation. Notice that you don't have to have a CS degree to do #6 - and most of the others can be done by a tester pairing up with a developer.
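Here's a toy, self-contained version of that count check, using Python and an in-memory database. The table, the columns, and the two "extracts" are all made up for illustration:

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employees (name TEXT, kind TEXT)")
db.executemany("INSERT INTO employees VALUES (?, ?)",
               [("al", "X"), ("bo", "Y"), ("cy", "Y"), ("di", "Z")])

def count(where):
    return db.execute("SELECT COUNT(*) FROM employees WHERE " + where).fetchone()[0]

old_rows = count("kind IN ('X', 'Z')")   # the old extract: X in, Y out
new_rows = count("kind IN ('Y', 'Z')")   # the new extract: Y in, X out
x_members = count("kind = 'X'")
y_members = count("kind = 'Y'")

assert new_rows == old_rows - x_members + y_members
print("row counts reconcile")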


So when should I not do automated testing?

1) To find boundaries. As I hinted at above, automated system-level tests are usually pretty bad at finding bounds. So if you've got a lot of boundary errors, by all means, test manually.

2) As Exploratory Testing. Sometimes when we test our goal is investigation; we play "20 questions" with the software. We learn about the software as we go. This kind of "exploratory testing" can be extremely effective; much more effective than mine-field automated testing.

3) When we don't need to repeat the tests. Some things only need to be tested once - tab order, for example. Others may be best tested by a human, because they require some kind of complex analysis.


Sapient Testing
Human beings can be sapient; they can think. Humans can look at a screen and say "Something's not right ..." where a bitmap compare would simply fail or pass. Humans can look at a screen and intuit the most likely parts to fail. So the key is to use the technology to help automate the slow, boring, dumb tasks.

For example, at GTAC this year, one group said that it had 1,000 tests that ran overnight - recording a screen capture at the end of each. The next morning, a test engineer sits down with his coffee and compares those screen captures to the previous night's - using his brain to figure out which differences matter, and which don't. After he finishes those build verification tests (in about a half hour), he can then get down to pushing the software to its knees with more interactive, exploratory testing.

Now, that morning coffee ... is that automated testing, or is it manual? My short answer is: I don't really care.

These are just ideas, and they focus almost exclusively on the relationship between Dev and Test. I haven't talked about the relationship between Analyst and Test, for example. Don't take this as Gospel. More to come.

Tuesday, December 04, 2007

You keep using that word ...

Last week I wrote that:

There's something going on here with the way we use terms like "Test" and "Requirement" that causes confusion and misunderstanding. Fundamentally, various groups, like the traditional test community, the "Agile" Developer Community, the Scrum People, and so on, are looking for different benefits from testing. Thus, in conversations, we "miss" each other or leave unsatisfied ... Perhaps that's a post for another day.

Then I spent the next week in a private email discussion on this subject, started by Ben Simo. During that discussion, we identified two major kinds of tests:

A) Examples to help with the construction process. For example, in a simple Yards-To-Meters conversion program, examples can show the bounds for input, can demonstrate how many decimal places to take the rounding, what to do with garbage input, and provide samples to check the formula. You could argue that tests as examples can act as requirements. (You Would Be Wrong, but you could argue it.) Personally, I hold that these kinds of "construction/example" tests can augment requirements, and that's a real good thing. (There's a sketch of what such examples might look like just below.)

B) Tests to learn things about the software. These tests are investigative in nature, and make sure we are not "fooled" into thinking things like "The Software Works" without reason. These investigative tests generally require much more aggressive and advanced critical thinking skills, often in real time.
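To make kind A concrete, here's a sketch of construction examples for that Yards-To-Meters program, written as checks. The function is a stand-in and the rounding rule is an assumption; 0.9144 meters per yard is exact:

def yards_to_meters(yards):
    # Stand-in implementation; assumes rounding to two decimal places.
    if not isinstance(yards, (int, float)) or yards < 0:
        raise ValueError("yards must be a non-negative number")
    return round(yards * 0.9144, 2)

assert yards_to_meters(1) == 0.91      # a sample to check the formula
assert yards_to_meters(100) == 91.44   # how many decimal places to round
assert yards_to_meters(0) == 0.0       # the lower bound for input

try:
    yards_to_meters("garbage")         # what to do with garbage input
except ValueError:
    pass
print("examples hold")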


---> One problem that I've seen is that we talk about these things, "tests", but don't expressly talk about which of the two classes we mean - and, of course, there are more than two classes. On the discussion list, I think Ben summed it up best:

I think we have people thinking that automated "acceptance tests" can replace traditional critical testing. We now have some testers thinking that developers are going to be more involved in the critical testing. People coming from the two viewpoints don't seem to realize that they aren't all talking about the same thing. ... Although there are some testing ideas that are not compatible, I do believe that things like TDD and automated acceptance tests are good as long as people don't think that automated execution replaces restless thinkers. If I had to have one or the other, I want the restless thinkers. However, I hope we can have both.
- Simo

---> Now, for my thoughts. I've spent most of my career with a business card that said "Developer." When I started doing programmer-testing, I did actual critical thinking and "does it work" testing.

Then I ran into various Agile dev/testers that, well, weren't. They were using tests as examples and not finding bugs - or writing blogs, articles, or books about "testing" that didn't really talk about critical thinking.

My initial response to this was:

"What the hell is wrong with you people?"

After some time, I realized that a different philosophy about software testing leads to different results.

If those are the results you actually want, well ... I guess that's ok.

In the free market of ideas, these ideas will compete - and sometimes, complement each other.

May the system that delivers the best result win.

Saturday, December 01, 2007

GASP? ... or not.

Doctors have rules for sterilization, like "Always Wash Your Hands" - simple things that can be done to improve the outcome of every single case.

At the recent GLSEC keynote, Bob Martin asked us: if we were all doctors, and a hospital administrator called a meeting telling us that we were spending too much time washing our hands - would we stop doing that?

Of course not. Medical professionals know the risks involved in skipping that step, and simply won't take them.

Likewise, accountants have the concept of "Generally Accepted Accounting Principles", or GAAP.

I started writing this blog entry intending to find some "Generally Accepted Software Principles" (GASP). After all, if accountants and doctors can do it, why not us software guys?

And, in fact, I have a few. My colleague and friend Andy Lester does some speaking and consulting, and his immediate refrain is "Bug Tracking, Version Control, and Daily Backups." That is to say, if your software organization doesn't have these three things, trying to do anything else for "process improvement" is a waste of your time. Get Bug Tracking, Version Control, and Daily Backups first.

I recall that Steve McConnell had a few talks on this subject, so I googled around, and found Software Development's Low Hanging Fruit and the Ten Most Important Ideas in Software Engineering.

Now, I like Steve McConnell. I've read his books, we have corresponded a bit, and I've quoted him quite a bit in my writing. For the most part, I like what he has to say. But his low-hanging fruit is nothing like what I would recommend, and his "ten most important ideas" reiterates the classic cost-of-bug-fix curve.

That might be true for some organizations, but it isn't true for all.

I got to thinking about GASP because, in one email thread this week, we discussed the possibility of having Test Driven Development go mainstream, and I wrote this:


I don't know if TDD will ever go mainstream. My prediction is that we will continue to have a large group of people who are doing a 'good enough' job of development that don't read books or go to conferences. Those people will become managers, and if they hire a new grad who knows TDD, > 80% of those new grads will just follow the prescribed, crappy process instead of looking for a unit test framework for Ada.


One of the great things about software development is that if you have an idea, you can go into your garage and build something cool and sell it. Then you grow, have to maintain legacy code, and process becomes important. All over the world we have companies on a great continuum, between life-critical software and video games for a PDA.

It would seem to me that requiring software engineers to be licensed - requiring, perhaps, an advanced degree (think law or medicine) and a board of examiners - might increase our process rigor, but it would create barriers to entry that would stop a lot of really smart 16-year-olds.

I was once a really smart 16-year-old without so much as a high school diploma.

It seems to me that the best thing we can do as an industry, is to not outsource our discernment about what practices are "best" or "right" or "professional", but instead keep that responsibility to ourselves - and live with the consequences of carrying that responsibility. Then we can judge our practitioners by the output they produce.

What do you think?