Schedule and Events



March 26-29, 2012, Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference for the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At Liberty; available. Contact me by email: Matt.Heusser@gmail.com

Thursday, June 30, 2011

Matt Heusser -- Running Man.

I've been nominated to run for one of the open board of directors positions for the Association for Software Testing.

The election will take place August 9th during the Conference for the Association for Software Testing.

Anyone who is a member one month prior to the election (and has internet access) can vote.

That means that if you are not a member, and want to vote, you'll need to join AST by July 6th or so.

Membership in AST is $95 per year; I have been a member continuously since 2007 or so, and was a member of the AST_formation Yahoo Group before AST was a thing.

I'm also not good at politics; my experience with elections has not been great.

I do claim, however, to belong to the software-testing community, and to make an effort on behalf of that community to advance the craft: by doing it, helping others do it, and writing and speaking about it. I claim to have both a vision for the future of the profession and specific ideas for what a professional association might do to move the craft forward.

I have been nominated, I am running. If elected, I will serve.

If you'd like to vote for me, you'll have to be a member of AST. If you don't want to vote for me, can I suggest that you look into AST anyway?

I suspect you'll like what you find.

Monday, June 27, 2011

New Articles Up - And Plenty of Them!

I was looking at my personal wiki right now, and noticed that the 'submitted' category seemed far too large. It was; a number of my submitted articles are now published. Here's a quick list:

InformIT.com:
Barriers to Scrum Adoption
Painless Process Improvement

SearchSoftwareQuality.com:
Configuration Management: Does your team have enough?
Defining Configuration Management
Embedded Agile with Nancy Van Schooenderwoert: An Interview
Testing International Applications
Interview with Johanna Rothman: Part I
Interview with Johanna Rothman: Part II
Testing Cloud-Based Applications (Part I)

SoftwareTestPro.com and STQA Magazine:
How Children Learn (To Test)
Ask The Tester With James Bach

... there's a bit more, but I suspect that's enough for today, don't you think?

Friday, June 24, 2011

CAST 2011 Emerging Topics - Get Involved & Deadline

This year, the Conference for the Association for Software Testing is doing something a little different -- creating an Emerging Topics Track that is crowd-sourced.

That's right -- anyone attending the conference can propose a topic, which we expect to be 20 minutes in length, and anyone, anyone at all, can vote topics up or down. Pete Walen and I will be the track moderators.

Of course, you long-term Creative Chaos readers know that.

... so why haven't you voted? Why haven't you entered a talk? (If you have entered a talk, have you told your friends about it?)

We'd like to have an actual, like, you know, track for the conference, and announce it up-front, so people can choose which talks to attend.

That means we have to shut down the process sometime before the conference in order to create and publish the tracks -- publish them early enough that people can make an informed decision.

We expect to close the submissions and voting for the Emerging Topics Track on July 1st, 2011.

That gives you a week to get your proposals and votes in.

Email me or Pete Walen for an invitation to the wiki.

We really want to make this awesome.

Will you help us?

Wednesday, June 22, 2011

More publications, new blogging, and ... the cloud

As of last week, I've started a general IT blog for the folks at ITKnowledgeExchange.

I'll be writing about the cutting edge of IT applications -- Web Services, Software As a Service, Cloud Technologies, and other "hard" topics in technology ... along with the consequences and people issues of those "hard" technologies.

While I expect that many Creative Chaos readers will be interested in this, I don't know if you all will. If you want to read that blog, you can subscribe to its RSS feed, or pay attention to my posts on Twitter -- I'll try to keep the posts here to major announcements.

Speaking of which, it's been two weeks, and I've got four posts up:

* Navigating the Waters introduced the blog and what I'll be trying to do.
* On Cloud Adoption described the two classic attitudes toward the cloud, 'We gotta get this cloud stuffs now' and 'Forgetaboutit,' and how to deal with them.
* Your First Public Cloud - Part I described how to create an Amazon Web Services (AWS) personal account -- where you'll get 750 CPU hours free -- and also how to set up your first Amazon EC2 instance.
* Part II went on to describe how to connect to that instance with Remote Desktop, what you actually get on the server, and how to shut it down.

Whew. And there's more ...

On Monday, Adaptu.com published my first article for them "Five Ways to Live Below Your Means", and more ...

Starting July 5th I'll be on assignment in North Central Indiana, working a full week but available for user groups and possibly writing at night.

Interesting times ... and more to come.

Wednesday, June 15, 2011

Quality at Ford Motor Company

This month's ASQ Influential Voices post is from the ASQ World Conference on Quality and Improvement. Specifically, it's an interview with Bennie Fowler, Ford Motor Company's second-ranked executive for quality.

Yes, Bennie is the highest-ranking executive with the word "Quality" in his title, but I continue to hold that the CEO is always the chief quality officer -- whether you realize it or not.

But I digress.

It's a good interview.

Good for a few reasons.

First, I admire the Ford Motor Company -- at least a little bit. Yes, they've made some foolish decisions. Yes, they've had some problems with unions, with inefficiencies in the supply chain, with health care coverage and benefits, made some stinker cars. Yet you'd expect that from any company that has survived a hundred years.

The fact that they've weathered those crises says something.

For that matter, Ford Motor Company is the last of the "Big 3" American car manufacturers still standing. Both General Motors and Chrysler took government bailouts that fundamentally manipulated the free market system.

I don't have time to get into it all here, and I'm not sure that it's appropriate, but let me say these three things: (1) Rescues distort markets, rewarding failure and enabling more of it. (2) General Motors had secured bondholders -- in the event the bonds weren't paid, the bondholders were supposed to be able to repossess and sell hard assets, even in the event of bankruptcy. That agreement was not honored, meaning that in the bailout we lost the rule of law. Most importantly: (3) Bailouts come with strings attached.

Of the big three, only the Ford executives seemed to realize point number three.

So here you have a senior executive at Ford talking in some depth about quality. What did he say?

Quality to Ford Motor Company

Bennie ticked off four things: Beauty, Fuel Efficiency, Technology, Safety.

Right off the bat, that is important. He actually knew what the company wanted to improve, and how. All four of these are specific, actionable, and mean something to the customer. Notice the kinds of things he did not mention: internal process improvement, "governance," driving waste out of the process, or any other hand-wavy stuff.

Process Improvement, Governance, Driving out waste -- all those are good, but they are internal facing, and focus on process, not outcome.

Bennie knows what the customer wants. Just as importantly, he can articulate it.

That kind of vision is going to drive decision making and priorities.

Despite all the power they seem to have, the reality is that an executive can get only a finite number of things actually done-done.

I'm serious about this. Think about the executives you've worked with who had a new idea a day: How many things did they ever actually get to done-done?

It seems trivial, but ask yourself if you know what those three to six things are for your company, your division, your team.

The Cost of Entry

Toward the end of the interview, Bennie mentioned that one traditional view of quality, the lack of defects, is sort of the cost of entry to play in the game. Sure, it's important, but lack of defects isn't going to distinguish Ford from Toyota or the other Big Boys. It's not a strategic differentiator.

Instead, Bennie suggested that it's the entire user experience -- things that in software we might call UX or Interaction Design -- that are going to make the difference.

Back in software, I'm reminded of the massive success of the iPod versus all the chintzy MP3 players that came before it, or, for that matter, versus the Zune that came after it.

One big difference in software is the number of dimensions of the work, and the stress you can put the work under. The number of test conditions increases exponentially with the inputs, and with a GUI and user-defined workflow it becomes a much larger challenge to say something like the software is "fit for use." (Remember back when we had no GUI and programmer-defined batch processes? Those were the days.)
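To make that combinatorial point concrete, here's a tiny sketch -- the form and its fields are hypothetical, purely for illustration -- of how fast test conditions multiply as inputs are added:

```python
from itertools import product

# Hypothetical checkout form: each field reduced to a few equivalence classes.
fields = {
    "country": ["US", "CA", "DE"],
    "payment": ["card", "paypal", "invoice"],
    "shipping": ["ground", "air"],
    "gift_wrap": [True, False],
}

# Every combination of field values is a distinct test condition.
combos = list(product(*fields.values()))
print(len(combos))  # 3 * 3 * 2 * 2 = 36 conditions from just four fields
```

Add one more three-valued field and the count triples to 108; layer user-defined workflow ordering on top of that and the space explodes further still.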

So, as of today, I do think there is room for differentiation by low defects, but the bar for entry does keep ratcheting up. For example, if users can do something on a website in three clicks, and it takes your site ten, and both sites are free ... you've got a problem. If your software takes fifteen seconds to present search results, and the other site does it in five ... you've got a problem.

There are lots of ways to compete. Your company might be huge, have amazing assets, and deploy three thousand people to take an existing product and make it web-based, using its existing relationships to spread that cost out over hundreds of millions of site licenses.

Then again, you might be three guys working in a founder's basement.

Either way, your company will have to make some tough choices.

Yes, it is possible for middle managers and staffers who know the terrain to add some value by deciding which of the "top-16" priorities will actually get done today.

What I am saying is: A little vision can sure go a long way.

Monday, June 06, 2011

On Testing Standards

On the TV show "Phineas and Ferb," the menu for Slushy Dog has not changed since the store opened in 1929.

When asked about it, Jeremy replies:
"I know. It's awesome right? It's our motto. Slushy Dogs will never get any better."
The comparison to standards in testing is an exercise for the reader.

But hey, if you want to see the quote in context, check it out below, at about one minute in:

Wednesday, June 01, 2011

Model-Driven Testing Dustup

Last week, Alex Knapp, a general technology blogger for Forbes.com, ran a short article on model-based testing.

I took a fair bit of issue with the article, and called him on it. I must say, I was impressed with Alex's response.

First, he followed up his summary post with an interview with a little more depth. Second, the guy called me to dialogue, in a friendly way.

I'm still not impressed with the original piece, and have issues with the interview. What impressed me was the follow-up, the genuine interest to figure out the truth, the willingness to consider both sides of the discussion. As a general-interest "tech" blogger, Mr. Knapp didn't have a deep understanding of testing when he began the process ... but I have the impression he might when he finishes.

Anyway, after posting the interview, he asked me for my feedback, and I gave it over email. Afterwards, we kept talking, and thought it might be worth sharing with, well, everybody else. So here goes ... my reply to the latest interview:

As a tester, I run into this idea all the time -- that we can automate away testing. It seems like every year, a new crop of students graduates from CMU, Berkeley, and MIT with CS degrees. (Only thing is: They haven't studied testing.)

What computer scientists do, of course, is write programs to automate business processes. So it makes sense that someone with a CS degree would say "hey, testing, there's a straightforward business process -- we should automate it!"

I do want to give Mr. Bijl some credit for this strategy -- model-based testing is a more complete, more cost-effective way to test applications than traditional, "linear" test automation.

It's also not new -- Harry Robinson has been championing the idea for going on a decade. You might even check out his website -- Harry has worked at Google, Microsoft, and AT&T. He is currently back at Microsoft, on the Bing team. Really good guy.

What impresses me about Harry is that he is realistic about what model-driven testing can do.

For example, let's look at Mr. Bijl's rhetoric one more time:

"It enables to automate all phases of software testing: test creation, test execution, and validation of test outcome."

If that were true, then he would basically develop a BUTTON, right? You'd type in a URL and click "test it" and then get test results.

Of course, this can't possibly work. Sure, you could write software to go to the URL, look for input fields, type in random inputs, and click random buttons. You could get back 404 errors from broken links and such, but, most importantly, the tester software wouldn't know what the tested software should do, so it would have no way to evaluate correctness. Whether it's a simple Fahrenheit-to-Celsius converter or Amazon.com, either way, you need to embed business rules into the test program to predict what the right answer is, and to compare the given answer to "correct."

In software testing, we call this the "Oracle" problem.

That "oracle" is the "model" in model-based testing. Someone still has to program it.
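As a sketch of what "programming the model" actually means, here is a toy oracle for the Fahrenheit-to-Celsius example above. The function names are mine, not from any real tool -- the point is only that the random-input generator is trivial, while the oracle encodes the business rule a human had to supply:

```python
import random

def convert_under_test(f):
    # Stand-in for the application being tested.
    return (f - 32) * 5.0 / 9.0

def oracle(f):
    # Independent model of the business rule -- someone still has to write it.
    return (f - 32) / 1.8

# The generator can pick inputs at random all day long, but only the
# oracle lets it judge whether each answer is correct.
for _ in range(1000):
    f = random.uniform(-100.0, 500.0)
    assert abs(convert_under_test(f) - oracle(f)) < 1e-9, f
print("1000 random checks passed")
```

Swap in a nontrivial business domain and the oracle stops being a one-liner -- which is exactly where the hard work hides.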

Once you "just" program the model, then you can set your application free on the website, to bounce around Amazon.com, sending valid input and looking for errors.

The problem is that little term "just." It turns out that, in most cases, programming a model is exceedingly complex. (Google "The unbearable lightness of model based testing".) Oh, I've done a fair bit of it -- for example, if you have some complex business rules in a database, and need to predict an "answer" to a question for a given userID, you might have two programmers code the application, then compare results with a FOR loop. I have done this.
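The dual-implementation "FOR loop" comparison described above can be sketched like this -- the eligibility rule here is made up for illustration, not taken from any real system:

```python
def eligible_v1(user_id):
    # Hypothetical rule, coded by programmer #1: even IDs over 100 qualify.
    return user_id > 100 and user_id % 2 == 0

def eligible_v2(user_id):
    # The same rule, coded independently by programmer #2.
    return user_id % 2 == 0 and not user_id <= 100

# The FOR loop: any disagreement flags a bug in one implementation or the other.
mismatches = [uid for uid in range(10_000)
              if eligible_v1(uid) != eligible_v2(uid)]
print(len(mismatches))  # 0 -- the two implementations agree
```

A disagreement doesn't tell you which implementation is wrong, only that the two models diverge and a human needs to look.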

For more complex applications, especially ones with a GUI, the number of states to transition through begins to grow exponentially. Most people applying MBT generally "give up" and use it to find errors, because error codes are easy to predict. The problem is that this doesn't help in cases where no error is generated but the business logic is incorrect.

I don't mean to be too critical of MBT -- it's a good tool for your toolbox. Presenting it as the solution to all testing woes, well, yes, I take issue with that. If you'd like to do an interview with Forbes, or moderate a point-counterpoint or such, I'd be interested in it. (I know you are a generalist, maybe Forbes could use a test/dev specialist blogger?)

I'll be at the Conference for the Association for Software Testing (CAST 2011) in August in Seattle, and the Software Test Professionals Conference (STPCon 2011) in October in Dallas. I'm happy to talk more about this.


Here we have an interesting topic, a receptive audience, and the capability to cause a little bit of change. I don't know about you, but I am more than a little tired of the every-batch-of-CS-grads-sees-testing-as-something-to-code-up mentality.

I took a shot. The audience seems receptive; I even proposed a point-counterpoint interview as a next step. Does anyone else have an idea on how to keep the ball rolling?