Schedule and Events



March 26-29, 2012 - Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference of the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At liberty; available. Contact me by email: Matt.Heusser@gmail.com

Monday, September 21, 2009

... and thanks for all the fish

Just over three years ago, I was having dinner after the Indianapolis QA Conference with Mike Kelly, and he said "Matt, do you even have a blog?"

Well, er, ha ha ha, I've got an old Perl user's blog I haven't updated lately, and before that I had a web page I hand-edited to make journal entries, back before blogging software was popular.

In other words, no, not really.

And Creative Chaos was born.

It's been a good three years. A few of my favorite posts and ideas:

- I wrote a position on tech debt and (with a lot of help from my friends Steve Poling and Patrick Bailey) went on to start a peer conference on the subject
- A definitional piece on the meaning of a test framework.
- The boutique tester idea was proposed right here, just a few months ago.
- Sean McMillan and I proposed the ideas for the "Balanced Breakfast Approach" at the Google Test Automation Conference, and yes, I've written a little bit about it here.
- Likewise, Sean suggested the Bowl of Fruit problem to me, and I covered how it applies to testing.
- That original IQAA talk I gave? Well, I recorded the audio and put it up as an early post.

Now, I've never employed a "search engine optimizationist", and I don't use META tags. Yet as of today, the number one Google search result for "The Boutique Tester" is this blog. The number one Google result for "Balanced Breakfast Approach Software" is this website. Search for "Bowl of Fruit Problem Software" and yes, Creative Chaos is first. (The number two result for "What is a test framework?" is this website; the first is an online dictionary. I think I can live with that.)

And it's going away.

Oh, no, I'm not going to stop blogging. That's just crazy talk. My blog is moving to be hosted by the folks at the STPCollaborative, and will become "Testing at the Edge of Chaos".

The RSS feed switched over last week, so subscribers should see no difference.

For those who aren't subscribed to the RSS feed, go ahead and switch over to the new blog. I've already put my first blog post up.

See ya around!

Thursday, September 17, 2009

Kanban Redux

Well, yesterday's post on Kanban generated a little bit more heat than I intended. When I clicked submit, as a writer, I thought I had completed an opinion/editorial piece I would stand behind. Heck, I thought it was good writing.

No, wait. I still stand behind it, and I still think it was good writing.

Then again, it could always be better.

I don't want to whitewash what I wrote yesterday by editing it; that would have the effect of blunting legitimate criticism. So, taking a critical eye to what I wrote yesterday, let me add a few things:

- First, my initial mention of certification had nothing to do with Kanban. The second mention - yes, I do expect some kind of Kanban cert will come, even if it's only a "letter of recommendation" from the leaders in the movement. But the section that talked about ISTQB was only designed to point out that I personally had walked away from an "it's gold, baby" idea that I thought lacked merit. I suppose the part where I mention the censure of the term "best practice" accomplished this; if I were to rewrite it, I would cut that section.

- For the most part, the essay stuck with showing over telling. This is an important concept in writing - you don't say the hero is brave, you have him fight the dragon. You don't say he's strong; you have him lift a horse, or you describe arms as large as tree trunks. You let the reader decide if the hero is strong. Then I had to go and end by referring to some Kanban folks as "Jokers." That was uncalled for, and not even what I meant. If I had to do it over again, I would have used something non-judgmental and objective instead. Perhaps "Coaches."

- The initial article introduced Mr. Anderson as a European. Apparently, he took offense to that and thought my post was "nationalistic." Well, I certainly don't see a benefit to introducing him as European, so I have cut that single word.

- I believe Northern Europeans are innovative with regard to process and product. I believe we should be studying them for process innovations the way the automotive industry learned to study the Japanese. I am completely serious about that.

- Not every person advocating Kanban is advocating the ideals of Frederick W. Taylor, but I have subscribed to the discussion list for months, and that was my personal conclusion. As I tried to say with my white hats/black bandannas comment, I did not intend to paint the Kanban movement with too broad a brush.

Now, some of the benefits of Kanban:

- The idea of limiting work in progress is one I find fundamentally sound. After all, if the testers are stuck on iteration 1, developers are on iteration 2, and the business analysts are working on iteration 7, something is wrong. The analysts will create excess inventory ('analyzed' work-to-be-done), it won't be fresh, and the business may change its mind - when the team could instead take those analysts, cross-train, and otherwise brainstorm ways to shuffle responsibilities around to get iteration 1 done faster. That would decrease overall time-to-market and get more software done in less time.

- Similarly, the idea of achieving pull appeals to me.

- Limiting work in progress will have the side effect of limiting multi-tasking, and multi-tasking is a well-documented time and effort sink.

- I think it's good to have teams talking about process and debating merits of various ideas. Kanban is stirring the mix; that's good.

- I have to agree that, while a rose by any other name may still smell as sweet, there are some managers and executives who may be strongly opposed to something called "Agile" or confused by the term "Scrum", yet who may be receptive when it is referred to as "lean." To some extent, I'm happy to change my terminology in order to better reach and communicate with the rest of the business.

So yes, I'm worried about Kanban. I think it has its merits, and it also has some risks. If anyone is interested in a spirited debate where we both have the potential to learn, please, drop me a line.

Have you heard of Kanban?

My writing colleague, Chris McMahon, has made an attempt to be public and clear about his stance on Kanban. It's been inspiring, and I, too, would like to put my stake in the ground.

In Japan, a kanban is a little card that is used as a signal device. The idea, in manufacturing, is that teams downstream "pull" new work, instead of having work "pushed" to them, which creates bottlenecks.

A gentleman named David Anderson took the idea and applied it to software to create Kanban Development, a surprisingly popular movement, to the point that it has its own user groups and conferences.

How did David do it? Well, first he was a Theory of Constraints, CMMI, and Agile-Management Guy. He went to Microsoft and worked with an internal development team, where he wrote: "From Worst to Best in 9 Months - Implementing Drum-Buffer-Rope in Microsoft's IT Department." It's interesting. You can read it for yourself, and I'll try to summarize below.

That's right, folks: without any specific skills training, any people interaction, or changing of the office environment, Microsoft saw something like a 150% increase in the number of tickets the team could handle in a month. How did they do that?

- They eliminated the time the team spent planning and estimating. Not reduced; eliminated.
- Technical staff took on stories that made sense and that they were actually capable of doing (which rewards well-written, well-conceived change requests)
- They moved from a push system to a pull system
- They made the process transparent
- They stopped batching up user acceptance testing and deployed one ticket at a time
- He got the team out of meetings

Now, the idea of Kanban for software - where we make the work visible by having a board, limit the work in progress, achieve pull, and have no fixed iterations but (possibly) continuously deploy - arguably came out of this case study.
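
Those mechanics are simple enough to sketch. Here's a minimal model in Python - the column names and limits are invented for illustration, not anyone's actual tooling - just to show what "limit the work in progress" and "achieve pull" mean in practice:

```python
# A toy Kanban-style board: columns with work-in-progress (WIP) limits,
# where downstream stages "pull" the next card when they have capacity,
# rather than having work pushed onto them.
# Column names and limits below are invented for illustration.

class Board:
    def __init__(self, limits):
        # limits: column name -> maximum number of cards allowed at once
        self.limits = limits
        self.columns = {name: [] for name in limits}

    def add(self, column, card):
        """Add a card only if the column's WIP limit allows it."""
        if len(self.columns[column]) >= self.limits[column]:
            raise ValueError(f"WIP limit hit for '{column}'; finish something first")
        self.columns[column].append(card)

    def pull(self, from_column, to_column):
        """A downstream stage pulls the oldest card, if it has room."""
        if not self.columns[from_column]:
            return None  # nothing upstream to pull
        if len(self.columns[to_column]) >= self.limits[to_column]:
            return None  # no capacity downstream; the card waits upstream
        card = self.columns[from_column].pop(0)
        self.columns[to_column].append(card)
        return card


board = Board({"analysis": 2, "development": 2, "test": 1, "deploy": 1})
board.add("analysis", "story-42")
board.pull("analysis", "development")  # development pulls when it has room
```

Nothing magic there: the limits force the team to finish work before starting more, and the pull step is what keeps the analysts from running seven iterations ahead of the testers.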

Personally, I have a different interpretation: that if you take a team doing CMMI 5 and PSP/TSP and /stop doing/ a lot of the required practices, moving your team from 20 hours of meetings a week to three or four, throughput will go up. Further, by working on one story at a time, you'll have technical staff actually talking to each other instead of throwing work over the wall via electronic tools. This will work wonders for eliminating the "hot potato" game.

Finally, and most importantly, if you live in an environment where the customers can make as many ill-conceived change requests as they want, and you have to constantly estimate, evaluate, and shuffle the deck, then take all that away, yes, productivity will go up.

So, I agree, what Mr. Anderson did at Microsoft can work for certain kinds of projects - namely, a maintenance team working on legacy applications that are small, separate, and distinct. That way, you can test one entire 'system' at a time and deploy continuously. (This is pretty much exactly what we did with the small projects team at Priority Health, about the same time, with good results.)

Now, labeling it, calling it Capital-K "Kanban" and giving it out to everyone as a silver bullet to improve the process ... I am not really excited about that.

First of all, it's universalism. "This process worked for me one time so it should work for everyone all the time." Just like labeling "that thing that worked well for us that time" a "best practice", it is a rookie mistake.

But now we have the Kanban discussion list, which I have tried to be involved with. I see a lot of smart people with good ideas, but there is something about it ... something I can't put my finger on just yet. Here's what I'm struggling with:

1) There's something odd about the way this community talks. I mean, I have a master's in CIS, I study software (and manufacturing) process, one of my writing/speaking partners is a Six Sigma black belt and process engineer, and there's something ... odd there. Why call it "Value Stream Mapping"? Why not just call it "how we get from concept to cash"? Why is it that skills, training, experience, and expertise just never come up in discussions with these groups? Why is it that instead of talking about development or testing, we call it "workflow" or "process mapping"? I have an inkling as to why, and I'll come back to that in number 4, but also ...

2) It seems to me this community uses a lot of 20th-century worship words. Productivity. Throughput. Optimize. Lead Time. Cycle Time. Flow. Leveling. There's nothing wrong with these words (although whether you can measure productivity at all is a different discussion). I see these terms thrown around in a naive, cavalier way. Like "New and Improved", "Hyper-Productive", and "Best in Class", they almost guarantee attention and receptivity from an audience of management and executives.

But does that make them right? Certification is another worship word. And the day I first heard of the ISTQB, at the STAREast conference in 2004, an ISTQB trainer told me, literally: "You can charge twice as much for training if you give away a certificate at the end of it."

Is that right?

In fact, the cavalier way those terms are thrown around (compared with, say, the way you'll see metrics talked about on this blog) tells me that there are a number of possibilities, ranging from over-optimism to universalism to genuine deception. I'm not excited about any of them.

3) Kanban works best if you start out slow and stupid. As Dave Nicolette pointed out recently, if hyper-productive means a 10x or so improvement, then the companies likely to see that kind of improvement are traveling at a snail's pace to start with. In other words, if your team is already dragged down, spending 20-40% of its time planning, estimating, and writing stories for work that is 6+ months out, then yes, you can see improvements with Kanban. Or if, say, you are batching up work to be released only once or twice a year, then doing heavyweight trade-offs through an electronic system instead of having people talk to each other. But in those cases, systems thinking can lead to improvement directly, without using a label or brand.

4) What about people and skills? I don't see any of this in the Kanban literature. It's as if people are cogs that can be interchanged in some sort of machine that is stable, predictable, and repeatable. Hey - wait a minute - I've heard that before! Yesterday I read a Kanban history post that claimed that Toyota had adapted the ideas of Frederick W. Taylor, and Kanban came out of that.

That is factually inaccurate. The Toyota Production System did not come from Taylor; it came from a number of consultants, most notably W. Edwards Deming, as an explicit rejection of the work of Taylor.

I don't have time to get into Taylor and his philosophies, but suffice it to say, Taylor was an elitist who believed in separating the worker from the work - having a class of scientific managers tell the workers how to do it - while Deming believed in engaging the worker in the work.

If Kanban comes out of the philosophy of Taylor, then having your process designed by "experts" who don't want to deal with the fiddly bits of requirements, development, and testing, but instead design a meta-process that turns software development into an assembly line, makes perfect sense. In that world, you might not call it "development" at all, but instead something like "Workflow" or "Work Products." (Notice issue number one, above.)

If, however, software development is actually knowledge work, which requires the whole person to be engaged, and can be done better or worse -- well, then, hopefully, we'll use the work of Taylor as either a door-stop or a cautionary tale.

5) The Kanban movement just isn't interested in discussing testing. I've brought the issue up several times on this list, and gotten a number of non-answers. That could be because the list members haven't really done much development. Or it could be that they are working on internal applications, where if you type in an invalid entry, the VP of Finance can say "use Internet Explorer Seven ONLY" or "if you want your reimbursement check, ignore the bizarre error, click the back button, and enter it correctly!" Or they could be working on very small, non-connected systems where the testing burden just isn't very high.

But what about a real project - a large software project, not something a pair of developers can bang out in three or six months, a project where you want end users to pay out of pocket, fall in love, and recommend it to friends? Well, a big part of what I do is risk management, and I see continuous deployment with a simple CI suite as naive, perhaps even reckless.

So I see Kanban/deploy per feature moving from limited environments where it can work to general acceptance, and in that, I see serious risk.

Note: In North America, we like our westerns - with good guys in white hats and bad guys in bandannas. It would be all too easy to paint the entire Kanban-for-software community as "bad." In reality, the ideas are a mixed bag that can be helpful in some environments. Some members of this community are strong systems thinkers who have good ideas and can separate when an idea might work from when it might not, taking in actual feedback and adjusting. Sadly, in general, due to over-hype, I have a final concern ...

6) Some people will actually listen to these jokers. We'll see a lot of hype about Kanban, there will be Kanban certifications, a Kanban alliance, and "Kanban conversions." There will be Kanban instructors, tutorials and lots and lots of books.

And, two years from now, or perhaps five or ten, I expect that a lot of companies will have experienced some critical failures and have a code mess all over the floor. Meanwhile, the consultants will have moved on, embracing and selling a new process - perhaps 5S, or Kaizen. It may not be Japanese at all; it may come from Northern Europe.

Let us all honestly hope that I am wrong.

Tuesday, September 15, 2009

Why is QA Always the Bottleneck?

"Why is QA always the bottleneck?" is the second in a series on how to deal with unfair test questions; it is up this week on SearchSoftwareQuality.com. (Free registration required.)

The next in the series will probably be "how long will testing take?", but I'm curious what you think. What questions do you struggle with, and what interesting answers do you know?

Thursday, September 10, 2009

Life is short - live well

I was reading The Secrets of Closing Sales yesterday and was struck by this line:

Nothing in the world can take the place of persistence.
Talent will not; nothing is more common than unsuccessful men with talent.
Genius will not; unrewarded genius is almost a proverb.
Education will not; the world is full of educated derelicts.
Persistence and determination alone are omnipotent.


I could nitpick some of the words of the quote - but the spirit - that consistency and dedication will win in the long run - is something that resonates with my experience.

Then, later, Jason Huggins pointed me to this blog post by the creator of WordPress. In it, Matt points to this blog post by Tim Ferriss that is a gentle introduction to the writing of Seneca.

It's one of the most inspirational things I have read this year.

Go ahead, invest thirty minutes in Seneca. Breathe it in. I believe you'll find it time well spent.

What am I saying when you cross the initial quote with Seneca's commentary? Well, yes, persistence matters. Yes, if you try again and again, you may succeed where others will fail. Just be careful that you don't climb the ladder of success, only to find that it was leaning on the wrong wall.

Wednesday, September 09, 2009

Test Management Tools

All right, folks, I'll admit it.

I'm not excited about test management tools.

Oh, you could argue that I should be. After all, Test Management tools are purchased by test managers and executives. Test managers and executives have money; they control the budget and decide who goes to what training when. Finding someone's pain point - and taking the pain away - is a perfectly legitimate business strategy. (If they have money to spend, why, that's even better, right?)

Yet I'm still not excited. Why?

Well, let's take a frank look at the thinking behind a test management tool, by which I mean something specific: A keeper of 'test cases', and a tracker of which test cases have been run against which codebase.

It starts with this thinking:

(A) We can define all our 'test cases' up front,
(B) When those test cases pass, our codebase is 'good' (or, alternatively, when some fail but some decision-maker decides to ship anyway),
(C) /Recording/ which test cases have run, and which are yet to run, in precise detail, has some value in and of itself


I reject the premise behind all of these arguments.

Here's an alternative, one we use at Socialtext:

1) Create a single wiki page (a versioned, editable web page) for a release
2) Mark down each type of testing you want to do in every significant combination
3) For example, break the app by major piece of functionality, then further by browser type
4) Add all the automated suites or unit-test results if those matter
5) Have the technical staff 'sign up' for which pieces they will test
6) When testing on a component is completed, the tester writes 'ok' and the numbers of any bugs found, or perhaps 'skipped' and the reason why (a rough sketch of such a page follows just below)
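
To give a feel for the shape of that page, here's a rough sketch of the grid it holds and how little machinery it takes to surface only the lines that matter. The components, browsers, names, and bug numbers below are made up for illustration; they are not Socialtext's actual page:

```python
# A toy model of the one-page release dashboard described above.
# Each row is a component/browser combination; each records who signed
# up and what they reported: "ok", "skipped: <reason>", or bug numbers.
# All entries below are invented for illustration.

dashboard = [
    # (component,  browser,   tester,  result)
    ("login",      "IE7",     "chris", "ok"),
    ("login",      "Firefox", "chris", "ok"),
    ("search",     "IE7",     "matt",  "bug #1412, bug #1415"),
    ("search",     "Firefox", "matt",  "ok"),
    ("reporting",  "IE7",     None,    "skipped: feature is dark this release"),
]

def needs_attention(result):
    """The page highlights failures and skips, not thousands of 'ok' rows."""
    return result != "ok"

for component, browser, tester, result in dashboard:
    if needs_attention(result):
        who = tester or "unclaimed"
        print(f"{component:10} {browser:8} {who:10} {result}")
```

The point isn't the code; it's that the whole 'tool' fits on one page a human can read at a glance.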

For what it's worth, we've been doing this at Socialtext for nearly two years, since before I was hired. We are constantly tweaking the process.

This one-page overview is a higher-level view than a test management tool might provide. It shows you what matters - the failures - not 5,000 "ok" results. It assumes that the test ideas are located somewhere else that the tester can find if needed. It assumes the tester actually did the testing and leaves open the possibility that the tester can explore the functionality. It leaves the tester responsible for what 'ok' means, instead of a spreadsheet or document.

This isn't a brand-new idea; James Bach recommended something similar in 1999, called a "low-tech testing dashboard", only he suggested it be done on a whiteboard. Other people have suggested using a spreadsheet, but that has versioning and read/write problems.

A wiki is just one more step forward; it provides version control, transparency, and creates a permanent artifact that could be audited. In my mind, this provides some of the benefits of test management tools with much less time investment.

So no, I'm not excited about most test management tools on the market. In many cases, I am suggesting, they swat a fly with a sledgehammer. Yet I recognize that test managers and executives have legitimate problems. So let's not rush off to build something just to get money; let's come up with real solutions and see if the money flows from there.

Who's with me?

Friday, September 04, 2009

Best New Software Test Writing

Over the summer, I've noticed a trend that bothers me just a little.

Cem Kaner hasn't blogged in months; James Bach hasn't blogged in weeks. Michael Bolton is blogging sporadically; Elisabeth Hendrickson is blogging very occasionally. Ben Simo hasn't blogged since February.

Of the people on my blogroll, only Adam Goucher is consistently writing new blog material.

Now, there may be good reason for this. The people on my blogroll are mostly independent consultants; perhaps the economy is picking up, and they are so busy that blogging is the first thing to go. Perhaps they are focusing on Twitter - or focusing their writing on a book. I don't know.

What I do know is that when I click through my blogroll, I'm not seeing a lot that is new.

So I went and asked the Writing-about-testing Yahoo group for some recommendations; here are a few we came up with:

Michelle Smith
Pradeep
Catherine Powell
Marlena Compton
Lanette Creamer
Geordie Keitt

Yes, getting to the point where you are known by first name only is a compliment, and yes, that's the same Lanette Creamer whose paper "Testing for the User Experience" won the best paper award at PNSQC 2008. (For those who live near Portland or need an excuse to make the trip: Lanette and Marlena are both speaking at PNSQC this year.)

In addition, all of the students of the Miagi-Do School of Testing happen to have a blog. That is no accident. These are people I personally vouch for as having an interest in, and passion for, software test excellence. While some have English as a second language and are learning to communicate better (as we all are, right?), they sharpen those skills through blogging. Check them out, please:

Justin Rohrman
Ajay Balamurugadas
Markus Gaertner
Jeroen Rosink

Update: I've also been told that David Christiansen is blogging again. I went and checked and his recent posts have been very tester-centric. Yay!

Wednesday, September 02, 2009

September Software Test&Performance

I just got my copy of the September Issue of Software Test and Performance in the mail yesterday.

Yes, I got a September Magazine on September first. Not August 15th. Not October 5th. The timing is actually right. Amazing.

The theme is on outsourced testing, and yes, Chris McMahon and I have a column on page 8. (And yes, we listed The Boutique Tester as one model of test outsourcing.)

If you register, you can download the PDF - or you can read the article directly on the web.

The new, re-tooled STPMag.com has a comments feature, so please, feel free to put comments up here or on the website.

We're working on a column on coverage right now; if you send us your thoughts early, you could help make a better column ...

BONUS: This week's InformationWeek had a back-page editorial on outsourcing; I thought you might like to compare and contrast it with what Chris and I did for ST&P.

Tuesday, September 01, 2009

Music to test by

About a year ago, Danny Faught and I team-authored an article on music to test by for the Association for Software Testing's magazine. Sadly, they had a change in editorship, well ... from having one to not having one. (It is a volunteer position.)

So the article was never published. I just got an iTunes gift card and find myself looking for music to test by.


So instead of listening to me pontificate, I am curious: Do you listen to music while you test or code? (Or do you have any music playing in the background while you pair or collaborate?)

I've found that movie soundtracks often work well because they are /designed/ to be on in the background. But I'm curious what you think ...

Scholarship to Software Testing Club!

Do all those paid membership sites get you down?

Do you have a compelling reason that $50 USD per year is too much to pay?

I've provided a scholarship for Software Testing Club. You can tell them why you are worthy and try to get the scholarship yourself.

Good luck. And don't say I never gave you nothin'.

:-)