Schedule and Events



March 26-29, 2012 - Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference of the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At liberty; available. Contact me by email: Matt.Heusser@gmail.com

Sunday, January 27, 2008

Technical Debt - VII

The technical debt series took me to a different place than I expected.

I have come to believe that, when it comes to Technical Debt, as an industry ... we have more questions than answers.

Sure, you can use the Nancy Reagan approach and "Just Say No", but the reality is that system factors impact behavior. The motivations to take the quick hack are immediate, positive, and certain, while the negative consequences are delayed and uncertain.

Imagine that you are a technical contributor, weighing your options, considering taking on technical debt. The negative factor is pain later for maintenance or bug fixes. But imagine what goes through your mind -

1) This code might never have to be touched again.
2) If we do have to touch it, I might not work here anymore.
3) If I do work here, we might be able to pass it off to the new guy!

That's a pretty weak negative incentive.

So saying "Just Don't Do It" is a little bit like telling the obese person to diet and exercise. It's technically correct, and yet it doesn't help much. The system factors are hard to beat, but not impossible. Weight Watchers does some amazing things.

How do they do it? Why, by finding a way to measure weight, providing certain positive outcomes for success, and offering support for setbacks.

So we need to find a way to quantify technical debt - a way to measure it. We need a way to communicate it to decision makers.
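
To be clear about what "measure" might even mean, here is one crude, hypothetical proxy - not a method from this series, just an illustration: count debt markers (TODO, FIXME, HACK) in the source tree and trend the totals over time. A minimal Python sketch, assuming your team actually leaves such markers behind:

    import os
    import re

    # Markers developers commonly leave behind at the site of a quick hack.
    DEBT_MARKERS = re.compile(r"\b(TODO|FIXME|HACK|XXX)\b")

    def debt_report(root, extensions=(".py", ".rb", ".java")):
        """Return a {path: marker_count} map for source files under root."""
        report = {}
        for dirpath, _dirs, filenames in os.walk(root):
            for name in filenames:
                if not name.endswith(extensions):
                    continue
                path = os.path.join(dirpath, name)
                with open(path, errors="ignore") as handle:
                    count = sum(len(DEBT_MARKERS.findall(line)) for line in handle)
                if count:
                    report[path] = count
        return report

    if __name__ == "__main__":
        # Print the worst offenders first; track the total week over week.
        for path, count in sorted(debt_report(".").items(), key=lambda kv: -kv[1]):
            print(count, path)

It is a blunt instrument - it only counts the debt people admit to - but a number that moves week over week is something a decision maker can see.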

Personally, I believe that half the reason management is so hot to trot about taking shortcuts is that the consequences are invisible. By failing to measure the consequences of technical debt, technical contributors are doing management a disservice. (And whose choice should it be, anyway? If an administrator told a doctor that he was washing his hands too much and wasn't billable enough, would he stop what he believed to be good sanitation habits?)

Like I said, more questions than answers.

So I have decided to create a completely free, non-profit peer workshop to discuss technical debt. It will probably be two work days long, held in West Michigan. Right now I am securing facilities for a mid-August time frame. My co-organizer is Steve Poling; expect a call for participation around the middle of February.

This is not a presentation-style conference. Instead of coming to hear a half-dozen gurus tell you what to do using PowerPoint slides, we will start with a problem (and a bunch of questions) and collaboratively invent some proposed solutions. Then we'll try them and see how they work. The workshop will be by invitation or application only, and will be limited to 15 (at most 20) people.

If you have interest or ideas about the workshop, please feel free to leave a comment or drop me a line.

More to come.

Test-First Development vs. Test-Driven

There's been an interesting amount of discussion on the Software-Testing Yahoo Group recently.

Personally, I have used and enjoy Test-Driven Development, which is a software-developer process of writing unit tests before you write the code. At the same time, the 'Agile' community has seen a lot of buzz about ... something else, involving getting tests-before-code as examples. I am perfectly fine with this as an augment to requirements, but I don't think it is a replacement - and having all the tests that were written before the code pass does not mean you can turn your brain off.
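
For readers who have not seen it in practice, here is a minimal, hypothetical illustration of the unit-level flavor - Python's unittest standing in for whatever framework your shop uses, with an invented leap-year function. In TDD the test class comes first; you run it, watch it fail, then write just enough code to make it pass:

    import unittest

    # Written *second*, with just enough logic to satisfy the tests below.
    def leap_year(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    # Written *first*, before any implementation existed.
    class TestLeapYear(unittest.TestCase):
        def test_ordinary_leap_year(self):
            self.assertTrue(leap_year(2008))

        def test_century_is_not_a_leap_year(self):
            self.assertFalse(leap_year(1900))

        def test_fourth_century_is_a_leap_year(self):
            self.assertTrue(leap_year(2000))

    if __name__ == "__main__":
        unittest.main()

The point of the ordering is feedback: the failing test tells you what "done" means before you start.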

As always, I am impressed by James Bach's ability to take the issue I'm struggling with, put it on the table, and take a stand. Here's an excerpt from one of his more recent posts:

Thinking about how to test a product is a fine thing to do at any time, but I find that developing specific and detailed test procedures from a spec usually leads to poor testing. This is because testing and development is a learning process, not just a doing process. A big part of the learning comes from building the thing and playing with it as it is being built. Even a spec that is very good (and very few are much good at all) does not teach you all that you need to know to conceive of good tests, and even if it did, you can't know that all the good testing ideas have occurred to you early in the process.

Anyone, not just you, who says that "test-first" is a viable strategy needs to do more than just claim experience. You might as well be claiming to have seen Bigfoot. Okay, maybe you have seen him, but what we need is a set of detailed experience reports.

The XP people *have* supported their claims with detailed experience reports, demonstrations, etc. But the part of the testing problem they address is a small fraction of the whole.

The people I've heard who advocate test-first on a system scale (rather than unit-scale), including people who push that interpretation of the V-model, have not, to my knowledge, showed that their method is workable. I'm not even sure what they think they mean by the word "test", but it doesn't seem to match how I think of testing.

Thursday, January 24, 2008

Extreme QA?

Back in 2002, James Canter and Liz Derr wrote a paper on "Extreme QA."

While I don't agree with everything in the article, and I think their use of the term is ... questionable, I do like when people are thinking, trying new things, and writing about the process. Overall, it's an interesting article; you can read it here.

There are also numerous examples of what I would call, well, "bad writing" in the document. That is to say, the authors might have something interesting to say, or they might just be filling up words ... I can't be sure. If you find such examples, or you think I'm completely wrong and judgmental, please, let me know in the comments section. :-)

UPDATE: I'm re-reading the article and continuing to think critically. I don't want to bias you (too much), but let's just say - take it with a grain of salt.

Friday, January 18, 2008

Fail Fast

I just made this post to the SW-IMPROVE discussion list -

Tom Walton Wrote:
I have had people tell me that it is impossible to produce any design documentation until after the code is developed for just that reason. They were developing by trial and error. If one thing didn't work quite as expected, then they tried again.

Tom has just described the engineering strategy used by the Wright brothers to create the first powered, sustained, controlled, heavier-than-air flight. Yes, they had wind tunnels, but the key is that the Wright brothers figured out what would work through empirical experiment instead of speculation and dogma.

It is also the engineering strategy used to build the Gossamer Condor - the aircraft behind the first human-powered heavier-than-air flight.

The biggest issue with the Gossamer Condor was weight, so the strategy was basically this: weaken every component until it breaks, then strengthen that component just enough that something else breaks first. When the Smithsonian called and asked for the Condor and the blueprints, someone had to make up the blueprints. At least one component was simply bent into place.

So, here's my thought: if you are doing groundbreaking work - creating a new Web 2.0 site that is a wicked problem under tight constraints - then feedback is critically important, and "fail fast" might be the right approach. If you are cranking out yet another Create, Read, Update, Delete web form for the Department of Defense, then you just might be able to specify the system up front. The problem is that in that world, the developer is a commodity who doesn't add a lot of value.

So, my friends, I'm not really excited about chasing complete, consistent, and correct requirements with a complete spec.(*) In that case, I can't add much value. No, I want the vague and ambiguous problems - the problems that management has a hard time articulating. Then I use craft to ask a lot of questions and collaboratively *invent* a solution.


Regards,


--
Matthew Heusser

(*) - Just as I'm not interested in chasing Bigfoot or the Loch Ness Monster.

Tuesday, January 15, 2008

Post Agile Scrum

I've been following the LeanAgileScrum discussion on the Agile Project Management list with some interest. Here's the reply I sent recently:

I would like to relate a story of a personal experience from a few years ago.

I worked with a group that had a rather heavyweight requirements process. By which I mean a big, nasty template with signoffs. I once saw, literally, a five-page requirements document that equated to one line of code.

So, in comes the agile coach, and he says, "this requirements doc is junk. We're going to discuss the requirements and write things on these little index cards ..."

The requirements people first admitted the requirements docs were bad, then fought tooth and nail to keep them.

What's going ON here?

The best explanation I found was that we were taking away the one thing they knew to cling to. Oh, it wasn't very good, but it was a safety blanket. Without that, *now* how do we do our jobs?

I think moving from a heavyweight, Big-Up-Front-Everything shop to a scrum shop is much like that. Many people want prescriptive processes. They want to be told how to do it. They want to be able to follow the process instead of inventing it.

Or, at least, they _think_ they want something like that. If they *have it*, they'll feel constrained by it and hate it and complain about it, but gosh, having a template sure is a lot easier than having a blank sheet of paper. Even if it's a crappy template.

So you see these good ideas like Agile or Scrum institutionalized, procedure-ized, process-ized, turned into certifications ... and someone has to come and invent a new buzzword to say "no, stop being stupid" only more politely.

Today it's called lean, or maybe post-agile. And, if it achieves some good, I'm fine with it.

Philosophy - I

I am giving a talk in April on "Evolution, Revolution, and Test Automation."

Here's the abstract:

How do we know what works in software testing? And how do we prove it?

In this class, you'll hear a brief discussion of the evolution of scientific knowledge, which leads into the evolution of software testing and test automation. We'll discuss the different ways to evaluate statements about software testing, and then apply those to common testing challenges. Starting with the "test triangle analogy," Matt will discuss how the concept of testing has changed over the years, moving quickly from system testing to unit, acceptance, performance, and even mock-based testing, the pros and cons of each, and how to identify them.

Finally, Matt will make some predictions about where testing is going. Not magical, visionary predictions, but instead practical suggestions to take your organization to the next level.

You may not agree with what Matt has to say, but he offers three guarantees:
• You will leave the room thinking
• You will be armed with tangible techniques to evaluate the myriad of "best practices"
• You will not be bored


That's right folks - I'm going to cover the history of scientific thought and apply it to software testing, all in one hour!

... Or, then again, maybe not. It would probably be more accurate to say that I will "Try to hit the high notes."

Which brings me to an interesting problem.

The talk involves a good amount of discussion of the nature of knowledge. To do that, I've got to cover a little bit of philosophy.

After the last time I gave the talk, someone actually came up to me afterward and said "Matt, I really appreciate your point about the Hegelian synthesis of thesis and antithesis, but if you are going to have academics in your audience, you've got to use the correct terminology."

I have no idea what that means.

So I went home to my wife, who has a degree in Philosophy, and asked her about it. She replied something like this:

"Matt,there are two kinds of people in your audience. Academics who care about terminology, and do-ers who care about getting things done. You cannot please both. Which group is more common among your attendees?"

When I told her the crowd would be do-ers, she replied, "Well, that's easy. You don't have to sound smart to impress do-ers - you just have to be smart and get things done."

Come to think of it, that's just good advice in general.



If you've got a snowball's chance of making it out to ST&PCon, drop me a line. If not, but you've attended in the past, there is a little website with forums and stuff where you can participate anyway ...

Thursday, January 10, 2008

What I *really* think of *most* software architecture



Read it here.

-- Taken from BugBash.net, my new favorite site. :-)

Wednesday, January 09, 2008

Is it a DSL or an API?

One of the "new new" things in developer-centric testing is using Ruby to create customer acceptance tests in a unique, domain-specific testing language.

Chromatic takes a humorous look at "DSL"s in Ruby here.
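
To make the question concrete, here is a hypothetical Python sketch (not Chromatic's Ruby, and not from any real library): the same assertion expressed as a plain API and as a chained, "DSL-ish" fluent interface. The second reads almost like a sentence, yet underneath it is still just method calls on an object - which is rather the joke:

    # Plain API style: two ordinary function calls.
    def assert_status(response, code):
        assert response["status"] == code

    def assert_body_contains(response, text):
        assert text in response["body"]

    # "DSL" style: the same checks behind a fluent wrapper.
    class Check:
        def __init__(self, response):
            self.response = response

        def has_status(self, code):
            assert self.response["status"] == code
            return self  # returning self is the whole trick behind chaining

        def body_contains(self, text):
            assert text in self.response["body"]
            return self

    response = {"status": 200, "body": "Welcome back"}
    assert_status(response, 200)
    assert_body_contains(response, "Welcome")
    Check(response).has_status(200).body_contains("Welcome")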

The Process Process

Special thanks to Ben Simo, who just emailed me a link to this comic strip.

The sad thing is, I'm pretty sure that I actually know the people in the strip! :-)