Schedule and Events



March 26-29, 2012 - Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference for the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At Liberty; available. Contact me by email: Matt.Heusser@gmail.com

Thursday, July 31, 2008

Going to Boston in the Fall?

I went to the Software Test & Performance Conference last spring in California and had an absolute blast: I met some really interesting people, reconnected with some old friends, and learned a thing or two.

This fall, Sept. 24-26, the Software Test & Performance Conference comes to Boston. James Bach is keynoting this year, and many, many of the speakers are top talent in the industry.

I am told that the Extreme Early Bird Rate expires on August 1st - that's an extra $400 off. In addition to a wonderful and stimulating conference, you'll be in the Boston/Cambridge area, the home of Harvard and MIT, and yes, ancient pirate-y history.

In fact, you'll be going to Boston in the fall.

(And for those who suspect this entire post was an excuse to link to a VeggieTales silly song with Larry ... I plead no contest.)

Tuesday, July 29, 2008

June Issue of AST Newsletter is out -

For those who don't know, I am a paid member of the Association for Software Testing.

Yes, out of my own pocket. No work reimbursement; I spent real money.

The AST puts on an annual conference, runs mailing lists and special interest groups, and even has a newsletter - and the June edition of the newsletter is available online.

The word on the street is that it is moving to print in the next few months.

Don't have time to read the whole thing? Skip to page 12, "The Mythology of Software Testing," compiled by David Christiansen - it's quite a hoot.

Monday, July 28, 2008

If you were a CIO ...

... what would your test policy be? That's an interesting question posed by Anne-Marie Charrett on the SoftwareTestingClub.com site.

It turns out that Anne-Marie is contributing to an IEEE council on the subject, trying to create a standard.

My first thought was:
- What problem does a testing policy (document) solve?
- When would such a document be appropriate?

Here's my more detailed response:
Look, international standards are great - for products. They are the reason I can pick up any AA battery and drop it into any device that takes AA batteries - the standard specifies the required form factor, voltage, and tolerances for both.

For _process_, however ... not so much. *HOW* the 1.5 volts is delivered is entirely up to the manufacturer. If we *had* adopted a standard for battery chemistry in the early 1970s, it would have been zinc-carbon. In that case, alkaline would not have been "standards compliant," we'd never have had the Duracell bunny, and our batteries would run out in about 1/10th the time. Oh, and forget about rechargeables (NiCd and later NiMH) - that's just crazy talk.

The next level of abstraction is to define the process by which those products are made. While we do have some process standards (ISO, GMP), let's be very careful not to stifle innovation in software testing.

So, to be direct: if, as a CIO, I had some *problem* that I thought a test policy might solve, I suspect I would post the problem on a wiki page and ask my direct reports to solve it.

To quote The One Minute Manager: "I don't make decisions for my people."



(Credit where it's due - the seeds of the battery analogy come from Peopleware, 2nd Edition, pg. 187. Yes, I looked it up.)

Tuesday, July 22, 2008

Community Support, Love and Beauty

Clay Shirky gave a talk that covered community support at Supernova 2007. During part of the talk he covers the value of community support - things like this blog, or the agile-testing list, or the SW-IMPROVE list.

About nine minutes (perhaps the best nine minutes) of the talk are available online here in quicktime format.

It's a great, inspiring talk about open source, and about using open-source tools to build community. Part of my explicit goal is to build the same thing in the software testing community.

I've had my moments, but I am no Clay Shirky. Watch the video. It's worth it.

Monday, July 21, 2008

Interview with Software Quality Engineering

Software Quality Engineering recently interviewed me for its Grey Matters podcast, about career management for testers and the use of social software for software engineering. The podcast went up on the podcast page today, but after a month or so it will rotate out - so you can still hear the SQE Grey Matters interview by direct link.

The Stickyminds Greymatters podcast comes out roughly monthly; you can click and drag this link into iTunes in order to subscribe.

Thursday, July 17, 2008

It's all zero-sum - or - more Peter Drucker

The growth companies of the fifties and sixties promised both more sales and higher profits indefinitely. This alone was reason to distrust them. Every experienced manager should have known that these two objectives are not normally compatible. To produce more sales almost always means to sacrifice immediate profit. To produce higher profit almost always means to sacrifice long-range sales. In almost every case, this irrational promise and the resulting refusal to make balancing decisions between growth and profitability objectives was the direct cause of the large losses and the equally large write-offs of the growth companies in the late sixties and early seventies.

There are few things that distinguish competent from incompetent management quite as sharply as performance in balancing objectives. There is no formula for doing the job. Every business requires its own balance - and it may require a different balance at different times. Balancing is not a mechanical job. It is a risk-taking decision.

- Peter Drucker, Management: Tasks, Responsibilities, Practices

In Software Development, the objectives are time, staff (or "resources"/money), quality, and, perhaps, technical debt or technical investment.

Sure, there's a place for exhorting, for motivation and leadership - just like in parenting. But any parent of a child older than three knows that imploring and exhorting eventually fail. At some point, you have to create and enforce tough consequences.

Likewise in management, a manager has to make tough choices. Sometimes, sadly, the tough choice is "pretend that everything is fine, and when it goes south, blame the technical people to save my job" - and you can't blame the manager. He's just doing what the system is incenting him to do.

Yet in a sane workplace, what matters is results, and the manager is going to assemble and direct the troops to reach the objective. How high he sets each objective is a key decision leading to overall success or failure - a risk management decision.

But don't take my word for it; quote Drucker. :-)

Tuesday, July 15, 2008

On Vacuous Documentation

If you read this blog, you know I'm no fan of vacuous documentation. In my experience, "comprehensive" documentation is often not comprehensible, and likely to get stuck in a drawer and forgotten.

By the way - The technical term for that is "waste."

On the other hand, sometimes you are mandated, or required, to fill out the paperwork. To this day I remember the huge amount of documentation we had to produce for 10th grade history. It was so intense that I dropped the course.

Then, a month later in Algebra 2 class, a friend of mine showed me his A+ homework score ... and, on the second page, for question #4, he had written out the entire lyrics of "Lucy in the Sky with Diamonds" as his answer.

Yup, you got it: the teacher either played favorites or merely judged the first page of the homework. And it makes sense - with 6*30 = 180 students turning in five pages of homework a night, she simply could not grade all the assignments she was given.

I learned something that day.

Fast-forward to the summer of 1993, where I am a flight commander at the Maryland Wing Summer Encampment for Civil Air Patrol. Basically, the program is very similar to the Boy Scouts, with an Air Force feel.

As flight commander, I was in charge of a sergeant and a dozen cadets or so. Each night, I had to fill in an evaluation form. The form had me score my experience for the day, somewhere from 1 to 10.

The first day, I was a 5. It went downhill from there; I wrote 3, 1, -5, "are you even reading this?" -- and never got a reply.

I figured no one was reading the forms; the whole thing was a waste of time. It was just like 10th grade history class.

Fast forward ANOTHER two years, to 1995, when I'm a squadron commander and have a little more leisure time. I strike up a conversation with the encampment XO, and he says something that really surprises me.

"Oh yeah, Heusser - dude. I remember your eval forms back in '93. Those were hilarious. Absolutely hilarious. We passed them around the entire command staff."

Then I remembered that that year I did not get honor officer or the honor flight award.

There are two edges to this sword. Yes, the forms might be vacuous and silly - AND - just as important - you never know who might be reading them.

When it comes to vacuous documentation - be sure to tread lightly.

Sunday, July 13, 2008

When to automate a test?

Along the lines of the previous post, I've just re-discovered a classic - Marick's Paper on When To Automate A Test.

Oh, it's from 1998, and it's showing its age, but it lays down some basic dynamics in software testing and gives you tools to decide whether to automate a test, to test manually, or maybe not to test at all.

I think it is great /input/ to help you determine what approach to take.

But what do you think?

Testers Taking Too Long?

We had a question on the Michigan Agile Enthusiasts' list - the team was doing Scrum, with four devs and a tester, and the testing was taking too long. In some cases, it would take the tester six hours to create the (manual) test documentation but only three to run the (manual) tests.

Some people suggested it should be a "whole team" effort with shared responsibilities. Others suggested a "whole team" with no specialties at all. This is my answer:

Pre-Note: It is a whole team effort, and the devs can certainly help test.

However, if you would prefer to keep *some* amount of specialty, I have a few ideas. (Essentially, I am suggesting a little of both approaches)

//---Begin my ideas
Let's apply the theory of constraints to find how to increase the throughput of our tester.

We'll find the bottleneck - whatever is taking the most time in the process. Then we isolate it, elevate it, increase its throughput, throw out what is not adding value, and go on to the next thing.
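To make that concrete, here's a minimal sketch of the "find the bottleneck" step - tally up where the tester's time actually goes and sort by the biggest consumer. The activity names and hours below are invented for illustration, not data from the team in question:

# Hypothetical breakdown of a tester's week, in hours.
# These categories and numbers are made up for illustration only.
time_log = [
    ("writing test documentation", 12),
    ("running manual tests", 6),
    ("meetings / other projects", 5),
    ("investigating and reporting bugs", 9),
    ("re-testing fixes", 4),
]

total = sum(hours for _, hours in time_log)
# Sort so the biggest time sink - the bottleneck - prints first.
for activity, hours in sorted(time_log, key=lambda item: item[1], reverse=True):
    print(f"{activity:35s} {hours:2d}h  ({hours / total:.0%})")

Whatever lands at the top of that list is the thing to shrink, elevate, or eliminate first.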

First off, the amount of time spent documenting is just bizarre. I suspect that if you ask "how much of this documentation adds value to you personally, the tester?", the answer might be a lot less than what "the process" or "what I was told to do" implies. Cut it down.

Second, there is a good bit of evidence from cognitive psychology that heavily-scripted, repetitive manual testing both covers fewer lines of code (1) and is more prone to inattentional blindness (2, 3) than exploratory testing, or testing with a more loosely-scripted charter.

So we can go faster by having less documentation and a more loosely-defined charter for testing.

Now let's find the next bottleneck. Commonly, it's one of two things:

(1) The Tester is spending a lot of time in meetings, supporting other projects, or on some other activity that does not add value to the project. Eliminate it!

(2) The Tester is spending a lot of time trying to figure out what made a bug trip, documenting the defect, explaining it to people, reporting on the status of the defect, re-testing, etc.

The easiest way to eliminate time spent on #2 is to have the devs deliver better quality code to QA in the first place.

I've worked in shops where the code was extremely good quality before it got to test, and test could move along at a good clip. I've worked in shops where that was not the case.

Guess what? In those shops, testers spent a lot of time on #2.

How to increase the quality before it gets to test? I would suggest TDD, and, if the code is still buggy when it gets to test, you gotta ask "what's up with that?" and fix it.
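For anyone who hasn't seen TDD up close, here's a minimal sketch of the rhythm - the function and its behavior are hypothetical, invented purely for illustration. The point is that the test is written first, watched to fail, and only then is the smallest code that passes written:

import unittest

def discount_price(price, percent):
    # Production code, written only after the tests below existed and failed.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100.0), 2)

class DiscountPriceTest(unittest.TestCase):
    # In TDD these tests come first; they describe the behavior we want.
    def test_ten_percent_off(self):
        self.assertEqual(discount_price(100.00, 10), 90.00)

    def test_rejects_impossible_discount(self):
        with self.assertRaises(ValueError):
            discount_price(100.00, 150)

if __name__ == "__main__":
    unittest.main()

Code that grows up under that kind of pressure tends to arrive in QA with far fewer "what's up with that?" moments.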

My overall suggestions for test strategy -

(A) On the Dev Side, Test Driven Development (TDD)
(B) On the test side, some amount of high bang for the buck automation. Do automation as an investment where the ROI is clear.(4)
(C) PERHAPS, an as-small-as-possible manual test suite
(D) Warmed over with exploratory testing

Of course, I write, speak, and consult on these topics, but I am focusing my professional energies on Socialtext right now. On the east side of the state, for TDD and dev-facing tests, I would recommend Ron and Chet, possibly the Menlo guys. For exploratory and other forms of testing, some of the best testers on the planet are in Indianapolis, and Louise Tamres is local to the Southfield area.

Regards,


--matt heusser
xndev.blogspot.com
(1) - Ref: James Bach's "Minefield Analogy"
(2) - Ref: Cem Kaner, Black Box Software Testing video series
(3) - http://www.youtube.com/watch?v=mAnKvo-fPs0
(4) - See some of Brian Marick's recent work on the serious ROI issues companies are having with 100% acceptance-test automation.

Friday, July 11, 2008

And More from Brian Marick

In sum: compared to doing exploratory testing and TDD right, the testing we’re talking about has modest value. Right now, the cost is more than modest, to the point where I question whether a lot of projects are really getting adequate ROI. I see projects pouring resources into functional testing not because they really value it but more because they know they should value it.

This is strikingly similar to, well, the way that automated testing worked in the pre-Agile era: most often a triumph of hope over experience.

- From his latest blog post

Brian is co-organizing the second Agile Functional Test tool workshop in early August, the day before Agile 2008. Instead of increasing the value of automated functional testing, he is interested in techniques to decrease the cost.

-- And they aren't full up yet. If you can't get to the tech debt workshop because you're going to Agile 2008 -- you might want to consider going a day early and making this workshop.

Tech Debt Workshop - Update

I'm pleased to report that the tech debt workshop is full up. Aside from people who have already been in contact, I won't be accepting any additional applications.

So let me tell you a bit about the crew we have lined up:

Two of the attendees were original authors/signatories of the Agile Manifesto. A third was a founding participant of the C3 team - the team that invented Extreme Programming. A fourth is the author of "Working Effectively with Legacy Code." A fifth was the lead organizer of the Great Lakes Software Excellence Conference and its first Master of Ceremonies. A sixth was a key organizer of BarCamp Grand Rapids.

The guy who introduced lightning talks to the software QA community? He'll be there. A couple of people who have given invited tech talks at Google. About half of us have presented at an Agile 200X conference. We've got the outgoing president of *both* the Association for Software Testing and the American Society for Quality. We've got a board member of the Agile Alliance - Brian Marick, who has written more books about software-quality-ish things than the typical developer has read.

We're talking about people who have already changed the world once -- or in the case of the locals, a small corner of it.

Yes, some sessions will be recorded and re-broadcast via YouTube.

You could say that my expectations for output are rather high.

Wednesday, July 09, 2008

Be /Good/

Paul Graham recently put out an essay called Be Good. As usual, it is brilliant and insightful - and the challenge to "not just don't be evil, but actually be good" is a striking one.

Near the end of the essay, Graham writes:


You know how there are some people whose names come up in conversation and everyone says "He's such a great guy?" People never say that about me. The best I get is "he means well." I am not claiming to be good. At best I speak good as a second language.


That's about where I stand in the software test community. In fact, I would settle for something like "Matt can be opinionated, highly critical, and sometimes speaks a bit quickly. He is also good at software testing, truly enjoys it, and his integrity is beyond reproach."

Come to think of it, that ain't that bad, after all.

Sidebar 1: Some names keep coming up again and again, and Phil Kirkham is one of them. Phil is a tester/developer in the UK who finally started a blog this month. Phil, I look forward to more posts - and, everyone else, here's one to watch.

Sidebar 2: Shrini Kulkarni is a long-time reader, and he recently pointed out that I have slipped in my terminology from "testing" to "QA". The short reason is that I have moved to a company that has a "QA Group."

Now, you would expect me, in typical Heusser fashion, to rail against the term QA, to say that it is impossible for testers to ensure quality, and to try to get the group's name changed. Yet I have not.

What is that all about?

Well, yes, we do have problems where people assume QA will solve the organization's problems, but that is the nature of the beast - I had the exact same problem when I was in development, and as a project manager. People who step up and take responsibility get rewarded with more responsibility. That's just a reality of corporate life.

More importantly, though, our group is a little different. The QA group is made up of rather technical testers - most of us are capable of getting a development or leadership role somewhere else. We chose to be testers. We have real impact on the software process, we have the keys to production, we have the ability to recommend blockers to management, we can ride along and pair program with developers, and, in some cases, change the process on the fly.

Someone is going to ask "how's that working for you?" and my short answer is not too bad, really, but give it six more months and ask again.

This topic deserves exploring. More to come.

Tuesday, July 08, 2008

So, what's next?

I could continue to explore the tech debt metaphor, or tell you what it's like to plan a peer conference of that size.

I am continuing to read Peter Drucker, and have several gems that I could post and discuss.

I've been considering the formulaic "four steps to success" programs that are so popular in development and testing, and alternatives to them. I have posts on this.

I could explore what I see as two kinds of QA - the intense, critical thinking school and the defined/documented/prevention school - and some of the tragic hilarity that results when people get the two confused.

I have a number of posts planned about the expectations gap between devs, testers, and product management, and how to deal with questions like "Why didn't QA catch that?" or "From now on, you should test for (whatever went wrong today) on every cycle" - or what to do when you realize that the team won't hit the date a month before anyone else does.

... but that's the stuff I'd like to talk about.

What do you want to read? What do you want to hear? What interests you?

Without feedback, I'm just a talking head, floating in space.

Please help me out here ...

Monday, July 07, 2008

My position on Tech Debt - II

Michael Feathers recently wrote on the tech-debt yahoo group:

I think there's a couple of different ways that we can approach all of this. One is to see technical debt as a metaphor, a somewhat fuzzy way of articulating a problem, and hope that it is a powerful enough metaphor to change people's behavior.


And I replied:

That is where I am at; every time someone says "no, it's not debt, it is (fill in slightly better metaphor here)," I recognize they have a point, but I believe tech debt is compelling because it is a metaphor that sales guys and VPs and accountants understand.

It's not the "best" metaphor, but it is an extremely simple and straightforward one.

As for metrics, I have strong opinions, and I keep meaning to write a blog post on this that I can turn into a position paper, but it's not happening, so here is my ultra-fast, light, and quick version:

Tech debt is simply shortcuts taken in things that are not measured.

Let me say that again: another way to think of tech debt is as shortcuts taken in the things that are not measured.

Most organizations are capable of measuring the iron triangle: time, features, money (staff). So, when asked for the impossible, technical folks short the thing that is not measured: quality.

Thanks to Agile methods and bug tracking systems, we have increased the visibility of quality. It's more obvious and harder to short. So to cut corners, we short something else: maybe it's documentation, maybe it's tests, maybe it's skipping refactoring to get the #$$#%&& thing out the door. It doesn't matter; we skip what isn't measured.

Another term for this is a perverse incentive, or "be careful what you measure, because you are going to get it."

In my experience, an attempt to measure and evaluate tech debt (Crap4j, cyclomatic complexity) will result in game-playing behavior that destroys the intent; I even have a somewhat popular article on the topic.

So far, the balanced scorecard is probably the best approach to metrics I know of - one where you have a number of metrics that, hopefully, counter-balance each other.
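As a toy illustration of the counter-balancing idea - the metric names and numbers here are invented, not a recommendation - pair each metric you watch with one that tends to get worse when the first is gamed:

# Hypothetical iteration-by-iteration readings for one counter-balanced pair.
velocity = [20, 24, 30, 38]        # stories completed per iteration
escaped_defects = [3, 4, 7, 12]    # bugs that reached production

for i in range(1, len(velocity)):
    if velocity[i] > velocity[i - 1] and escaped_defects[i] > escaped_defects[i - 1]:
        print(f"Iteration {i + 1}: velocity is up, but so are escaped defects - "
              "the 'improvement' may be gamed, or quality is being shorted.")

When the pair moves in opposite directions like that, the scorecard is doing its job: it prompts a conversation rather than a reward.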

Still, I believe that either (A) they can be easily gamed, (B) you'll spend a lot of energy gathering the metrics, or (C) the metrics will indicate that you (management) don't trust the people you are measuring.

So I recommend three alternatives. First of all, tech debt metrics can be gathered by technical people, used solely for their own benefit, and not digested or evaluated by management. Second, use the metrics to learn what is actually going on - to improve our mental model of the process - instead of to evaluate. Third, use them occasionally, instead of weekly.

So, overall I have little confidence in tech debt metrics, but I am willing to listen. I would be extremely interested in empirical research or experiments, and have an idea or two about funding if someone wants to try.

What do you think?

Thursday, July 03, 2008

What does "Broken" mean?

"The foo feature is Broken"
"Widgets are completly Horked"
"Gadgets are FuBar under IE7"
"FF3 and the wiki no work-ey"

If you talk to any developer or PM, or read the testing literature, you'll find these are bad descriptions, because they don't tell the reader what the actual problem is or how to reproduce it.

A "bug" that can't be reproduced is a bug that can't get fixed, and a great way to annoy PM and Devs. This is true; most educated testers strive to provide meaningful bug reports.

Yet if you look through my own bug reports, now and again you'll see these types of descriptions. Why do I log such things, and what do they mean?

At Socialtext (and to me personally), 'Broken' means that the entire feature is so messed up that you can tell simply by looking at it. It doesn't render properly, or, if the feature requires a submit, it is impossible to get any successful result using any input.

You don't need reproduction steps - you simply need to try to use the software. Whatever you do, it won't work.

In my experience, Testers specialize in exploring the nooks and crannies of the application. We try to find the defects before the customers do. If the feature is broken, exploring is a waste of time -- nothing works, probably because of one single root cause. The thing was never sanity tested; it was never even /poked at/ before being delivered to QA.
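A cheap, automated sanity check would catch "broken" in this sense before the build ever reaches a tester. Here's a minimal sketch of the idea - the URL, endpoints, and form fields are hypothetical, and this is an illustration, not Socialtext's actual process:

import requests  # third-party HTTP library; assumed to be installed

BASE_URL = "http://staging.example.com"  # hypothetical staging server

def smoke_test_widgets():
    # 1. The page should render at all - no 500, no blank response.
    page = requests.get(f"{BASE_URL}/widgets", timeout=10)
    assert page.status_code == 200, f"widget page returned {page.status_code}"
    assert "Widgets" in page.text, "widget page rendered without its heading"

    # 2. A plain-vanilla submit should produce *some* successful result.
    response = requests.post(f"{BASE_URL}/widgets", data={"name": "smoke"}, timeout=10)
    assert response.status_code in (200, 201, 302), (
        f"basic widget submit failed with {response.status_code}")

if __name__ == "__main__":
    smoke_test_widgets()
    print("Sanity check passed - the feature is at least not 'broken'.")

If a check like that fails, there is nothing for a tester to explore yet; send it back.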

Defined and phrased in this way, reporting a feature as broken is not a QA failure; it is a development failure.

Wednesday, July 02, 2008

I read the news today, oh boy ...

This month's Software Test and Performance Magazine has a new monthly column by Chris McMahon and me. "ST&Pedia" is an encyclopedia for software testers. You can download the issue here. The editor introduces our column on page 7, and the column itself appears on page 13. If you find value in it, a subscription to STPMag is free.

In our first issue we cover charters, oracles, the impossibility of complete testing, and heuristics. Upcoming articles will include unit testing (hence my questions below), Testing the Microsoft Stack, Security Testing, and categories of test tools.

So what has confused or annoyed you about software security, test tools, or testing .NET? (No cheap shots, please.) What other areas would you like to see covered?

Editorial minds would like to know ...

Tuesday, July 01, 2008

Management and Metrics - II

After the previous post, I literally said out loud "I have to blog about this", marked the page, and put it down.

I should have kept reading. Here are the contents of the very next paragraph:

... For it is abundantly clear that knowledge cannot be productive unless the knowledge worker finds out who he is himself, and what kind of work he is fitted for, and how he works best. There can be no divorce of planning from doing in knowledge work. On the contrary, the knowledge worker must be able to plan himself. Present entrance jobs, by and large, do not make this possible. They are based on the assumption - valid to some extent for manual work but quite inappropriate to knowledge work - that an outside expert such as the industrial engineer or work-study specialist can objectively determine the one best way for any kind of work to be done. For knowledge work, this is simply not true. There may be one best way, but it is heavily conditioned by the individual and not entirely determined by physical, or even mental, characteristics of the job. It is temperamental as well.

Does this sound familiar? In other words, "there are no best practices. Practices are better or worse in a given context."

But what about all those books that say you have to have a defined, stable, repeatable, managed, institutionalized process?

Having a shared set of expectations between staff and management will reduce friction, and I'm for it -- don't get me wrong.

But sometimes, the best use for books like that is as a doorstop, or maybe firewood.