Schedule and Events



March 26-29, 2012, Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference for the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At Liberty; available. Contact me by email: Matt.Heusser@gmail.com

Wednesday, February 27, 2008

Technical Debt - Workshop

I am pleased to announce that the first Workshop On Technical Debt (WOTD) will be held August 14/15 on the campus of Calvin College in West Michigan. The event is organized by Steve Poling and me, and will be facilitated by Mike Kelly, president of the Association for Software Testing. We will hold a social the evening of the 13th.

This is not an exhibition conference with hundreds of attendees and prepackaged speeches. That kind of training certainly has a place, but this workshop is different. It is a chance for a dozen people to roll up their sleeves, talk about what happens in the real world, and share experiences, ideas, and lessons learned. Every "attendee" will be a participant, expected to bring ideas and to challenge, critique, and improve the ideas of others.

This is a chance to be personally involved in increasing the software development body of understanding: to increase the state of the art and, possibly, to influence the practice.

Oh, and it's going to be an absolute blast.

And free. Yes, that's right, I see no reason to charge for this workshop. I will not be able to cover your travel or food expenses, but we are working out a few sponsorship deals to *help* a little bit with those expenses.

Now, all participants are expected to present something for discussion; that can be a lightning talk, experience report, one-page position paper, case study, simulation ... it doesn't matter. The first draft of the Call for Participation is up on the website.

As I mentioned earlier, seating will be limited to 15 (at most, 20) participants and is by application and invitation only. Details on how to apply are available in the CFP.

Are you in?

Wednesday, February 06, 2008

Hey - up on d.d.j -

Michael Hunter of Microsoft recently interviewed me for his blog on DDJ.

On re-read, man, do I ramble - but the subject just defies any short, "easy", trite description.

Anyway, the interview is up and you can read it here.

Tuesday, February 05, 2008

Technical Debt - VIII

Update on the Workshop On Technical Debt:

I have a handshake deal for facilities that should be signed in the next two weeks, and most of the logistics are worked out.

We're still targeting August 14/15 in West Michigan (possibly with an evening gathering on the 13th).

Details to come.

This is going to be fun.

Friday, February 01, 2008

Do you Ning?

Ning.com is a meta-social network. It is similar to MySpace, Facebook, and all the other new-media thingees, only instead of one universal network that you are plugged into, it allows its users to create specifically themed universes, then subscribe to one or more.

I belong to four Ning Rings:

testrepublic - The Indian subcontinent has a huge, growing tech community. Plus Pradeep is a frequent poster.

Drivenqa - Again, this is a test community, with a more European feel. Another chance to interact with people I won't meet every day.

stpcon - The community for attendees, alumni, staff and faculty for the Software Test and Performance Conference.

The problem with Ning is that it's addictive. For example, I just put up a response to "Can We Measure A Tester's Performance?" in testrepublic, and I could spend all day on it. So, at the least, I'm going to reuse my answer here on my blog:

I think Pradeep summed it up pretty well in prose. I'll try to say the same thing in a shorter way, a little more scientifically.

Most measurement systems are just approximations ("proxies") for some other thing. We really have no great way to measure productivity, so we measure something else instead - say, bug count. Of course, it's possible that the tester finds a bunch of trivial bugs like spelling errors, in which case a high count tells us very little about the value of the work.

This also creates an incentive for the tester to find defects. If, at the same time, you measure the developers by how few defects they create, you have a conflict of interest that leads to arguments, wasted time, and _decreased_ productivity.

So, you get what you measure, but that's probably not what you actually want. (Cem Kaner has a great paper, "Software Engineering Metrics: What Do They Measure and How Do We Know?" - you can google it.)

However, that does not mean that we cannot measure anything.

There is a big difference between avoiding NUMERIC, or "quantitative" metrics for evaluation, and simply not measuring.

An alternative to measuring is, well, managing.

I will give you an example:

Do you measure the length of the hairs on your head?

Probably not.

Then how do you know when to get a haircut?

You can use similar techniques to manage an organization. (I should add that I am fine with metrics - in some cases - as information and input. For example, you can use metrics to augment a story. I am just ... leery of them out of context.)