Schedule and Events



March 26-29, 2012 - Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference of the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At liberty; available. Contact me by email: Matt.Heusser@gmail.com

Tuesday, April 29, 2008

Complete, Correct, Consistent, Unambiguous Requirements?

This morning, Shrini Kulkarni sent me an email asking what I thought of requirements-based testing approaches. Specifically, his email quoted this line from a web page:

"XXYYY Inc is the world's leading expert in Requirements Based Testing (RBT). The RBT process first ensures that the specifications are correct, complete, unambiguous, and logically consistent"


Which made my spidey-sense go off. Shrini asked me to elaborate on why and what that means, and liked my reply enough to encourage me to blog about it.

Here goes:

1) On day 90 of a 150-day project, the business senses an opportunity that will allow them to double revenue. They change what the software must do. This renders the perfect spec imperfect.

2) On day 100, the VP of engineering is fired. The new VP of engineering thinks that the fribble feature is "all wrong." The spec is now imperfect.

3) On day 110, the CEO promises BIGCO that the software will do X. The spec is now imperfect.

4) On day 120, the VP of sales interprets a line item as doing X+1, and promises it to the customer.

5) On day 125, the new VP of Engineering brings back the famous consultant Rickaro BlenderBurger to explain how he personally inspected the specifications to make sure they were unambiguous, and that the X+1 promise was not right. The customer responds "But he promised us."

6) On day 126, the CEO creates a time warp field and goes back to day 120, and personally sits in the meeting with an invisibility cloak, to figure out how it is possible for the VP of sales to get the unambiguous document wrong. He brings along a linguistics professor.

The linguistics professor watches the meeting and says "Oh, yeah, that is going to happen. English was pretty much cobbled together by illiterate peasants, and codified by writers like Chaucer who knew just enough Latin to grok the letters. It is a pre-Renaissance, pre-age-of-science language. Anyone who tells you they can write an unambiguous spec in English is selling you something."

---> In other words, to quote James Bach:

"Here is a simple litmus test for an ambiguous spec: It is written in English"

Seriously. Ask an engineer the difference between an English spec and a CAD drawing.

----> People who think you can write an unambiguous spec, so you can throw it over the wall, are denying how most human beings actually communicate and collaborate.

It's not about hand-offs. It's about working together. Or, in Chinese, "Gung Ho."

Hence, my spidey-sense.

Regards,

--heusser

UPDATE: I am not saying that we should completely reject the idea of requirements or specs and go only by word of mouth. As with many things, the danger is in the extremes. Instead of just picking an extreme, we could have a healthier conversation about which way we move the needle - toward a little more conversation or a little more documentation.

UPDATE TWO: Over the great back-channel of email, Jonathan Kohl points out that even though the claims of XXYYY Inc may be unrealistic, the ideas and concepts it uses could be valuable and might be worth looking into. There are exercises that can be used to get less ambiguous documentation, and they can be helpful. As the expression goes, "Don't throw the baby out with the bathwater" - and I think he has a point. Thanks, Jon.

Sunday, April 27, 2008

100% Test Automation

A recent post I put out to the Agile-Testing List:

The original poster asked:
> I have been asked to work out a system to make a 100%
> automated testing solution to suit the agile development
> process that the dev team are using.

Perhaps I don't understand what a "100% automated testing solution" means. When I hear those words, I think it means that every test should run at the click of a button - that no test should ever involve setting up test data or personally checking the screen to make sure the results are "right."

Did I get that right?

If so, has *anyone* on your entire team *ever* actually had success with such an approach? AT ALL?

How will you automate usability testing?

Later on, the author asked:
>I think that while we are all very enthusiastic
>about agile development and automation of the acceptance
>tests we are not entirely sure of the practices that
>we need to implement.

I find it helpful to remind myself that our end goal is to produce working software, not cool automation. So one technique I have used is to write up the automation as stories and let the PM prioritize those stories. That means that sometimes, the automation doesn't get written. That's ok, because the business has made a decision to not invest in the feature - a fancy word for this is "governance." On the other hand, the automation that does come out is a true project - not something you, as a tester, are expected to write on your lunch hour despite other deadlines.

Of course, everything I'm writing above is in reference to customer-facing test automation, not developer-facing. Developer-facing unit tests, in my mind, are just part of the discipline of development, and I simply expect them to happen. If they don't, I expect the devs to either live with the pain or start automating. (If the code released to QA just plain doesn't work, and the devs have no automated unit tests, then I begin a ... collegial conversation among peers ...)
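To be concrete about what I mean by developer-facing unit tests, here's a minimal sketch in Python. The function under test, parse_price, is made up for illustration:

    # A minimal, hypothetical developer-facing unit test.
    # parse_price is an invented example function.
    import unittest

    def parse_price(text):
        """Convert a string like '$1,234.56' to a float."""
        return float(text.replace("$", "").replace(",", ""))

    class ParsePriceTest(unittest.TestCase):
        def test_plain_number(self):
            self.assertEqual(parse_price("19.99"), 19.99)

        def test_dollar_sign_and_commas(self):
            self.assertEqual(parse_price("$1,234.56"), 1234.56)

    if __name__ == "__main__":
        unittest.main()

Cheap to write, cheap to run, and it catches the "just plain doesn't work" case before QA ever sees it.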

Regards,

--heusser
xndev.blogspot.com

Saturday, April 26, 2008

What is a professional?

For the past few years now, I have heard countless exhortations for software testing to become a "professional" community.

Generally speaking, those exhortations are a sales pitch - come earn my certificate and be a professional.

Sorry, guys, I don't buy it. The root word for professional is PROFESS - to say out loud. To stop saying "testing is what I do" and start saying "a tester is who I am."

You do not need a certificate on your wall to do that. All you need to do is to *care*.

The next step is to get involved.

Now, not everyone is going to go out in public and speak, and not everyone is going to publish - or even blog. But there are other opportunities.

The fact is, for every guy on stage at a major conference, there are probably two behind the scenes: printing out ID badges, dealing with registration, putting the website together. For non-profits, the work is generally unpaid, but it has considerable value - you get into the conference free and you get to know other people who share common interests.

Right now, somewhere near you, a conference is calling for volunteers. It might be in West Michigan, it might be PNSQC in Portland, it might be Test2008 in New Delhi, India.

If your company doesn't have the budget to send you to a conference, I have very simple advice:

Go Anyway.

Friday, April 25, 2008

What is a software architect, anyway?

There is an interesting thread on JoelOnSoftware on the subject.

Here's my follow-up question for you: What is a test architect, and do we need them?

Or, perhaps more meaningfully: What kinds of problems can a 'test architect' solve? What kinds of problems would that role introduce? If we had one, what would he do?

Wednesday, April 23, 2008

Call for Reviewers

I'm working on an article proposal for a respected software testing publication. If you would like to review (preview!) the work, please drop me an email at matt.heusser@gmail.com. If you've never published before and are thinking about it, learning about the process might have some value for you.

--heusser

Trip Report – San Mateo

I just got back from a conference in San Mateo, California, followed by a short stay in Salem, Oregon. Here’s my version of the skinny, in no particular order.

(A) STPCon was great. I got to run into Michael Bolton and Rob Sabourin, finally meet Doug Hoffman and BJ Rollison, and make a few new friends. My biggest complaints were with myself. I did not plan to stay for the whole conference, and I presented so much material that I didn't get to see many of my peers present. What I did see, however, was spot-on - particularly what I heard of Rob Sabourin's keynote on Scrum.

(B) Both the Grand Rapids International Airport (GRR) and the Portland International Airport (PDX) have free wireless. Minneapolis/St. Paul and San Francisco, on the other hand, charge just enough that it isn't worth it. This is good news if you are flying into GRR for the Great Lakes Software Excellence Conference - www.glsec.org - for which planning is finally getting started for 2008. (Sorry - the Call For Papers is not yet public.)

(C) I bought a copy of "Fast Company" for the plane. Long-time readers know that I enjoy its sister magazine, Inc, mostly for the inspiring writing. I've always been a bit put off by Fast Company; I don't quite "get" the concept of the magazine, and it feels weird. No, I mean it literally feels odd. Still, I needed reading material for the plane, so I bought an issue to sample. It was good, but not in the way I expected. First off, the magazine is not printed on cheap paper, but instead on 100% recycled paper, which explains the odd feeling. Second, I got two actual, real ideas that I intend to take action on, directly from the magazine. For me, that is both unusual and valuable.

(D) Nothing spells success for lightning talks like a comedy of errors. We had problems with the presentation equipment, with getting a Mac to project, with PowerPoint templates, and with getting automatic, timed slide advancement working. Not only was it a total blast for the audience (and interesting for the organizers!), it was a great lesson in the ways automation can go wrong.

My favorite part of the lightning talks: Jason Huggins, creator of the Selenium test framework, is showing how he tests a Google orb. (A Google orb is a continuous integration light that glows red or green based on whether the build passes or fails.) He explains that he lights the bulb, then uses the MacBook's digital camera to take a picture of it, then his test software analyzes the pixels in the picture to determine if the bulb lit up the correct color.

Michael Bolton’s Response was priceless:
“Couldn’t you just look at it?”
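For the curious, the pixel-checking half of that test might look something like this rough sketch in Python. I'm guessing at the details - Jason's actual code surely differs, and the file name is invented - but the idea is to average the color channels of the captured photo and decide whether the orb reads red or green:

    # Rough sketch: decide whether the orb photo shows red or green.
    # Assumes the webcam shot was already saved as orb.jpg (invented name).
    from PIL import Image

    def orb_color(path):
        image = Image.open(path).convert("RGB")
        pixels = list(image.getdata())
        # Average the red and green channels across the whole frame.
        avg_red = sum(p[0] for p in pixels) / float(len(pixels))
        avg_green = sum(p[1] for p in pixels) / float(len(pixels))
        return "green" if avg_green > avg_red else "red"

    assert orb_color("orb.jpg") == "green", "the orb says the build is broken"

Which, of course, only makes Michael's question funnier.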

(E) I really, really don't like traveling. The enclosed spaces (planes, hotels) lead to air circulation problems and noise pollution. Flying means lots of essentially wasted time and tends to hurt our relationships with actual, well ... people. At the same time, my old traveling regime will have to give way a little bit so I can focus on testing for Socialtext and, well, being a father to my children.

My current problem is a glut of opportunity. I simply can't do all the good things that are coming my way. The net result is that I plan to be a little scarce at conferences for the rest of 2008 through 2010. That may mean more writing or blogging, or maybe something else, but I'm going to cut down on the traveling. If you want to step up to fill the void in the continental United States, please let me know; I know of several program chairs looking for speakers.

In other news, the original spot for the Google Test Automation Conference 2008 was Hyderabad, India, but management just announced a venue change to Seattle, Washington, set for an October conference date. I was hoping to use GTAC to get out to the subcontinent of India - an excuse I am still looking for. From here, it looks like a trip to India might become my one big traveling conference in 2009.

Friday, April 18, 2008

GQM - Yellow Brick Road - II

This is the best response I have read to my earlier GQM Request.

It is from James Bach, originally posted on the Software-Testing discussion list.

Enjoy:

I don’t like GQM. Here’s why:

It assumes you have nothing to learn!

GQM is appropriate for situations where you understand the precise workings of the system you wish to control.

Example:

GOAL: I wish to back out of my driveway.
QUESTION: Is anyone behind me?
METRIC: turn around and look behind me.

But this is rarely the case with software development projects. In projects, we are trying to learn how things work. A development project is a social system. We don’t just monitor social systems, we must study them. Is X doing testing well? I’ll have to observe his testing and learn how he is doing it. I must be open to surprises while doing so.

My version is called OIM:

1. Observe: What is happening?
2. Inquire: Why is THIS pattern happening? Let’s study that.
3. Model: Here’s my theory about how this project works.

This is not only a cycle; each task is simultaneous. During any of these tasks, at any time, you take action based on your current model of the project. OIM is consistent with social research methods such as Grounded Theory.

This is how we used metrics at Borland, and everywhere I’ve worked since then.

GQM makes it sound like metrics are easy to use and interpret. OIM is all about coping with murky and changing reality.

Friday, April 11, 2008

STPCon Draws near ....

I'll be spending the weekend (or, at least, a lot of it) prepping for the Software Test & Performance Conference in San Mateo, California. If you'll be there, please shoot me a note; we'll find a few minutes to talk and maybe run through some testing challenges.

If you can't make it, I still have something for you - my article "Where Do I Go From Here?", on career management for software testers, appears in the April issue of Better Software magazine. While it is a subscription-based magazine, I am allowed to give out copies of my article as a PDF.

Want more? You can follow the link above for a free issue of the magazine, and a subscription link if you want more after that.

Tuesday, April 08, 2008

Goal Question Metric - The Yellow Brick Road?

I posted this on the software-testing email group yesterday.

The replies have been fascinating, but I'm curious what you think:

Hello Folks.

Many people here have heard my own life stories about programming and testing; how, essentially, I kept getting "patted on the head" and told that I "didn't get it" because I opposed big extensible designs, rituals, signoffs and handoffs in the development process, and expensive, heavyweight test case programs.

I stopped worrying about it when I realized that my projects were far more successful than my peers'. Eventually, I started talking about it openly.

Metrics are currently on that list. I have in my study The Handbook of Software Quality Assurance, 3rd Edition, which contains a list of about 150 qualities (like scalability, security, etc.) that can be measured. Then it tells you that one or two metrics will cause dysfunction; you need a balanced scorecard. It adds that the easy-to-gather metrics are also easy to game, while the good metrics are expensive to measure. Oh, and be careful, because your engineering staff will rebel if they have to spend too much time gathering metrics instead of doing work.

To summarize, this is what the book has to say about metrics:

"Good Luck."


Which brings me to my next sacred cow: Goal Question Metric.

GQM is a framework developed by Victor Basili; you can Google it. The basic idea is that instead of gathering a bunch of metrics, you actually figure out your goal (like "faster production"), ask a question that will help measure progress toward that goal, and turn that question into a metric.
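To make the shape of it concrete, here's a toy sketch. Goal: faster production. Question: how long does a change take to get from commit to release? Metric: average cycle time in days. (All the numbers below are invented.)

    # Toy GQM illustration - the data is invented.
    # GOAL: faster production
    # QUESTION: how long does a change take from commit to release?
    # METRIC: average cycle time, in days
    from datetime import date

    changes = [
        {"committed": date(2008, 3, 1), "released": date(2008, 3, 15)},
        {"committed": date(2008, 3, 5), "released": date(2008, 3, 15)},
        {"committed": date(2008, 3, 20), "released": date(2008, 4, 1)},
    ]

    cycle_times = [(c["released"] - c["committed"]).days for c in changes]
    print("average cycle time: %.1f days" % (sum(cycle_times) / float(len(cycle_times))))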

I have to grant that this is an intellectually valid framework, and it beats the pants off of mindless gathering of numbers. For software testing, the idea has been endorsed by people I respect, like Cem Kaner and Lee Copeland.

Here's my problem: this idea has been around for a long time. When it comes to software testing, I've read a great deal of the literature, been to the conferences, read a lot of blogs.

Except for a few examples from people like Lee Copeland and James Bach, here's what I always see: "If you want metrics, use GQM. Since all contexts are different, I can't give you an example."

Pshaw. Is it too much for me to ask for a case study before I invest time, energy, and effort into a metrics program? One with positive ROI? Enough positive ROI that I wouldn't be better off working on other projects, or sticking the money I would have spent in a CD?


It's been 13 years since the first GQM paper was published. I haven't seen GQM prove its value in a software testing context. (*)

Have you? I would be really interested in success stories, please.

Regards,


--heusser
(*) - Please don't say NASA. They work under an entirely different set of constraints than commercial software development. And even then, the business case is shaky.


UPDATE: Dr. Kaner replied that he doesn't really 'endorse' GQM as much as he simply mentions it during talks. His overall comments are along the lines of "GQM looks interesting, it's more grounded than nothing - if it works for you, good for you."

Monday, April 07, 2008

The Nine Forgettings

"If you measure the wrong thing, and you reward the wrong thing, don't be surprised if you get the wrong thing."
-- Lee Copeland

Lee's talk is about 34 minutes and challenges how the typical company does software testing.

It's a good watch, and it's free.

Thursday, April 03, 2008

Explaining Exploratory Testing

I'm a big fan of exploratory testing. I believe it can be a much more effective method of software testing than hard-coded automation or manual scripts, both of which suffer from inattentional blindness and the minefield problem - a test that walks the same path every run clears the same mines every run, and misses everything off the path.
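Here's a toy sketch of the minefield problem in Python; the "system under test" and the bug locations are invented. The scripted check walks the same five inputs every run, so it never finds the mines sitting just off that path:

    # Toy illustration of the minefield problem.
    # The system under test and bug locations are invented.
    import random

    BUGS = {7, 23, 41}  # inputs that trigger a failure

    def system_under_test(value):
        if value in BUGS:
            raise RuntimeError("boom at %d" % value)

    def scripted_test():
        # The same five inputs, every run - the already-cleared path.
        for value in (1, 2, 3, 4, 5):
            system_under_test(value)

    def exploratory_ish_test():
        # A different walk each run; sooner or later it steps on a mine.
        for value in random.sample(range(100), 5):
            system_under_test(value)

    scripted_test()          # passes forever
    exploratory_ish_test()   # fails on roughly one run in seven

Real exploration is guided by skill and observation rather than a random number generator, but the point about varying the path stands.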

The problem is explaining it. People seem to fall into two camps -

1) Those who believe exploratory testing is undisciplined hacking that will produce sloppy, inconsistent, and generally low-value results, and

2) Those who know that Sally can bring an app to its knees in five seconds, but don't know how. This leads to the belief that ET is an attribute that certain people are simply born with, that it can't be taught, or that ET is some sort of black art involving years of study and secretive, back-room rituals.

I suspect the truth is somewhere in the middle: ET is a skill that can be learned, practiced, and improved over time.

About a week ago we had this discussion on ET on the Agile-Testing email list, and I suggested that we record a simple video and put it up on YouTube.

The video would show a website, give it to an exploratory tester, and show what he actually does. It would probably take a few takes: first we would record what he does, then he'd watch the tape and do the exact same thing again - but explaining his thinking process along the way.

My goal was to de-mystify exploratory testing and demonstrate immediate value - not whining that the specs are incomplete and inconsistent, or that QA should have been involved earlier.

In the meantime, I went out to Google Video and checked to see if anyone else had taken this idea. While I did not find demonstrations of ET per se, I did find a Jon Bach Google Tech Talk on this exact subject - the happy medium that treats ET as a skill, and how to get better at it. (And yes, he does a little demonstration in the middle.)

Here's the link

If you read Creative Chaos, you probably don't need it - but your boss might. Or his boss might. Or your cubemate might. It doesn't get much cooler than "Well, the staff at Google HQ in Mountain View thought this guy was worth listening to ..."

Heck, invite the team to a brown bag and just show the video. :-)

As for the "what an exploratory tester actually does" video, I think that would make a good one, and I am seriously considering purchasing a video camera to do it. Or we may just record it at WOTD between sessions, I dunno.

Wednesday, April 02, 2008

Lean-Agile Bacon Driven Programming ...

If you're going to program bacon-driven, it might be better to eat lean bacon.

Just a thought.

Tuesday, April 01, 2008

Bacon-Driven Development!

As most folks know, I am increasingly disappointed in X-Driven-Y. Test Driven Development, the original X-Driven-Y, was an actual thing that you could do, measure, and see positive results from. I *love* TDD.

Sometime after TDD we saw "Acceptance Test Driven Development", "Behavior Driven Development", "Responsibility Driven Design", "User Story Driven Development" ... the list goes on.

It seems to me that with each new X-Driven-Y we dilute the value of the concept, just like we did with the word "Extreme" in the late 1990s.

So, I am happy to report, at least someone is doing something interesting with it.

Sean McMillan, the founder of Bacon Driven Methods, just put up a blog site on the subject: Bacon Driven Coding. With any luck, BDC will attract the following it truly deserves and will be able to develop a Bacon-Driven Body of Knowledge (BDBOK) that can serve the world's bacon-driven needs.

One thing Sean hasn't addressed yet is adoption and buy-in, especially from senior management. Personally, I've found the best approach is to simply eat the bacon yourself, show some results, and get others to eat bacon. I honestly have not seen much success with top-down implementations of BDC; BDC seems to defy such traditional command-and-control methods.

For those of you skeptical about BDC, I understand. But let me ask you - seriously - have you tried it? If you've been critical about BDC without trying it, I would encourage you to give it a shot.