Schedule and Events



March 26-29, 2012 - Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference of the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At liberty; available. Contact me by email: Matt.Heusser@gmail.com

Monday, September 21, 2009

... and thanks for all the fish

Just over three years ago, I was having dinner after the Indianapolis QA Conference with Mike Kelly, and he said "Matt, do you even have a blog?"

Well, er, ha ha ha, I've got an old Perl user's blog I haven't updated lately, and before that I had a web page I hand-edited to make journal entries, back before blogging software was popular.

In other words, no, not really.

And Creative Chaos was born.

It's been a good three years. A few of my favorite posts and ideas:

- I staked out a position on tech debt and (with a lot of help from my friends Steve Poling and Patrick Bailey) went on to start a peer conference on the subject
- A definitional piece on the meaning of a test framework.
- The boutique tester idea was proposed right here, just a few months ago.
- Sean McMillan and I proposed the ideas for the "Balanced Breakfast Approach" at the Google Test Automation Conference, and yes, I've written a little bit about it here.
- Likewise, Sean suggested the Bowl of Fruit problem to me, and I covered how it applies to testing.
- That original IQAA talk I gave? Well, I recorded the audio and put it up as an early post.

Now, I've never employed a "search engine optimizationist", and I don't use META tags. Yet as of today, the number one Google search result for "The Boutique Tester" is this blog. The number one Google result for "Balanced Breakfast Approach Software" is this website. Search for "Bowl of Fruit Problem Software" and yes, Creative Chaos is first. (The number two result for "What is a test framework?" is this website; the first is an online dictionary. I think I can live with that.)

And it's going away.

Oh, no, I'm not going to stop blogging. That's just crazy talk. My blog is moving to be hosted by the folks at the STPCollaborative, and will become "Testing at the Edge of Chaos".

The RSS feed switched over last week, so subscribers should see no difference.

For those who aren't subscribed to the RSS feed, go ahead, switch over to the new blog. I've already put my first post up.

See ya around!

Thursday, September 17, 2009

Kanban Redux

Well, yesterday's post on Kanban generated a little more heat than I intended. When I clicked submit, as a writer, I thought I had completed an opinion/editorial piece I would stand behind. Heck, I thought it was good writing.

No, wait. I still stand behind it, and I still think it was good writing.

Then again, it could always be better.

I don't want to whitewash what I wrote yesterday by editing it; that would have the effect of blunting legitimate criticism. So, casting a critical eye on what I wrote yesterday, let me add a few things:

- First, my initial mention of certification had nothing to do with Kanban. The second mention - yes, I do expect some kind of Kanban cert will come, even if it's only a "letter of recommendation" from the leaders in the movement. But the section that talked about ISTQB was only designed to point out that I personally had walked away from an "it's gold, baby" idea that I thought lacked merit. I suppose the part where I mention the censure of the term "best practice" accomplished that; if I were to re-write it, I would cut that section.

- For the most part, the essay stood firm with showing over telling. This is an important concept in writing - you don't say the hero is brave, you have him fight the dragon. You don't say he's strong; you have him lift a horse or that his arms are as large as tree-trunks. You let the reader decide if the hero is strong. Then I had to end by referring to some Kanban folks as "Jokers." That was uncalled for, and not even what I meant. If I had to do it over again, I would have used something non-judgmental and objective instead. Perhaps "Coaches."

- The initial article introduced Mr. Anderson as a European. Apparently, he took offense to that, and thought my post was "nationalistic." Well, I certainly don't see a benefit to introducing him as European, so I did cut that single word.

- I believe Northern Europeans are innovative with regard to process and product. I believe we should be studying them for process innovations the way the automotive industry learned to study the Japanese. I am completely serious about that.

- Not every person advocating Kanban is advocating the ideals of Frederick W. Taylor, but I have subscribed to the discussion list for months and that was my personal conclusion. As I tried to say with my white hats/black bandannas comment, I did not intend to paint the Kanban movement with too broad a brush.

Now, some of the benefits of Kanban:

- The idea of limiting work in progress is one I find fundamentally sound. After all, if the testers are stuck on iteration 1, developers are on iteration 2, and the business analysts are working on iteration 7, something is wrong. The analysts will create excess inventory ('analyzed' work-to-be-done), it won't be fresh, and the business may change its mind - when the team could instead take those analysts, cross-train, and otherwise brainstorm ways to shift responsibilities around to get iteration 1 done faster. That would decrease overall time-to-market and get more software done in less time. (There's a rough sketch of the arithmetic after this list.)

- Ditto, and very similarly, the idea of achieving pull appeals to me.

- Limiting Work In Progress will have the side effect of limiting multi-tasking; multi-tasking being a well-documented time/effort sink.

- I think it's good to have teams talking about process and debating merits of various ideas. Kanban is stirring the mix; that's good.

- I have to agree that, while a rose by any other name may still smell as sweet, there are some managers and executives who may be strongly opposed to something called "Agile" or confused by the term "Scrum", yet, when it is referred to as "lean", they may be receptive. To some extent, I'm happy to change my terminology in order to better connect and communicate with the rest of the business.
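
Since the WIP point above is the heart of my sympathy for Kanban, here is the promised sketch of the arithmetic, using Little's Law (average time in the system equals work in progress divided by throughput). The numbers are invented for illustration:

    def avg_cycle_time_weeks(wip_items, throughput_per_week):
        """Little's Law: the average time an item spends in the system
        equals work-in-progress divided by throughput. This holds for
        any stable queueing system, regardless of process details."""
        return wip_items / throughput_per_week

    # Invented numbers: the analysts' pile of 'analyzed' stories counts
    # as inventory (WIP), and the team finishes 5 stories per week.
    for wip in (30, 10):
        weeks = avg_cycle_time_weeks(wip, 5)
        print(f"WIP of {wip:2d} items -> {weeks:.0f} weeks from 'analyzed' to done")

Same throughput, one-third the wait: the requirements are fresher when the work finally gets built, and the business has less time to change its mind.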

So yes, I'm worried about Kanban. I think it has its merits, and it also has some risks. If anyone is interested in a spirited debate where we both have potential to learn, please, drop me a line.

Wednesday, September 16, 2009

Have you heard of Kanban?

My writing colleague, Chris McMahon, has made an attempt to be public and clear about his stance on Kanban. It's been inspiring, and I, too, would like to put my stake in the ground.

In Japan, a kanban is a little card used as a signaling device. The idea, in manufacturing, is that teams downstream "pull" new work, instead of having work "pushed" onto them, which tends to create bottlenecks.

A gentleman named David Anderson took the idea and applied it to software to create Kanban Development, a surprisingly popular movement, to the point that it has its own user groups and conferences.

How did David do it? Well, first he was a Theory of Constraints, CMMI, and Agile-Management guy. He went to Microsoft and worked with an internal development team, where he wrote "From Worst to Best in 9 Months - Implementing Drum-Buffer-Rope in Microsoft's IT Department." It's interesting. You can read it for yourself, and I'll try to summarize below.

That's right, folks: without any specific skills training, any people interaction, or changing of the office environment, Microsoft saw something like a 150% increase in the number of tickets the team could handle in a month. How did they do that?

- Eliminate the time the team spends planning and estimating. Not reduce; eliminate.
- Technical staff took on stories that made sense and that they were actually capable of doing (which rewards well-written, well-conceived change requests)
- They moved from a push system to a pull system
- They made the process transparent
- They stopped batching up User Acceptance Testing and deployed one ticket at a time
- He got the team out of meetings

Now, the idea of Kanban for software - where we make the work visible by having a board, limit the work in progress, achieve pull, and have no fixed iterations but (possibly) deploy continuously - arguably came out of this case study.

Personally, I have a different interpretation: that if you take a team doing CMMI 5 and PSP/TSP and /stop doing/ a lot of required practices, moving your team from 20 hours of meetings a week to three or four, throughput will go up. Further, by working on one story at a time, you'll have technical staff actually talking to each other instead of throwing work over the wall via electronic tools. This will work wonders for eliminating the "hot potato" game.

Finally, and most importantly, if you live in an environment where the customers can make as many ill-conceived change requests as they want, and you have to constantly estimate, evaluate, and shuffle the deck, then take all that away, yes, productivity will go up.

So, I agree, what Mr. Anderson did at Microsoft can work for certain kinds of projects - namely, a maintenance team working on legacy applications that are small, separate, and distinct. That way, you can test one entire 'system' at a time and deploy continuously. (This is pretty much exactly what we did with the small projects team at Priority Health, about the same time, with good results.)

Now, labeling it, calling it Capital-K "Kanban" and giving it out to everyone as a silver bullet to improve the process ... I am not really excited about that.

First of all, it's universalism. "This process worked for me one time so it should work for everyone all the time." Just like labeling "that thing that worked well for us that time" a "best practice", it is a rookie mistake.

But now we have the Kanban discussion list, which I have tried to be involved with. I see a lot of smart people with good ideas, but there is something about it ... something I can't put my finger on just yet. Here's what I'm struggling with:

1) There's something odd about the way this community talks. I mean, I have a master's in CIS, I study software (and manufacturing) process, one of my writing/speaking partners is a Six Sigma black-belt and process engineer, and there's something ... odd there. Why call it "Value Stream Mapping"? Why not just call it "How we get from concept to cash?" Why is it that skills, training, experience, and expertise just never come up in discussions with these groups? Why is it that instead of talking about development or testing, we call it "workflow" or "process mapping?" I have an inkling as to why, and I'll come back to that in number 4, but also ...

2) It seems to me this community uses a lot of 20th-century worship words. Productivity. Throughput. Optimize. Lead Time. Cycle Time. Flow. Leveling. There's nothing wrong with these words (although whether you can measure productivity at all is a different discussion). Yet I see these terms thrown around in a naive, cavalier way. Like "New and Improved", "Hyper-Productive", and "Best in Class", they almost guarantee attention and receptivity from a certain audience - management and executives.

But does that make them right? Certification is another worship word. And, the day I first heard of ISTQB, at the STAREast conference in 2004, an ISTQB trainer told me literally: "You can charge twice as much for training if you give away a certificate at the end of it."

Is that right?

In fact, the cavalier way those terms are thrown around (compared with, say, the way you'll see metrics talked about on this blog) tells me that there are a number of possibilities, ranging from over-optimism to universalism to genuine deception. I'm not excited about any of them.

3) Kanban works best if you start out slow and stupid. As Dave Nicolette pointed out recently, if hyper-productive means a 10x or so improvement, then the companies likely to see that kind of improvement are traveling at a snail's pace to start with. In other words, if your team is already dragged down, spending 20-40% of its time planning, estimating, and writing stories for work that is 6+ months out, then yes, you can see improvements with Kanban. Or if, say, you batch up work to be released only once or twice a year, then make heavyweight trade-offs through an electronic system instead of having people talk to each other. But in those cases, systems thinking can lead to improvement directly, without using a label or brand.

4) What about people and skills? I don't see any of this in the Kanban literature. It's as if people are cogs that can be interchanged in some sort of machine that is stable, predictable, and repeatable. Hey - wait a minute - I've heard that before! Yesterday I read a Kanban history post that claimed that Toyota had adapted the ideas of Frederick W. Taylor, and Kanban came out of that.

That is factually inaccurate. The Toyota Production system did not come from Taylor, it came from a number of consultants, most notably W. Edwards Deming, as an explicit rejection of the work of Taylor.

I don't have time to get into Taylor and his philosophies, but suffice it to say, Taylor was an elitist who believed in separating the worker from the work - having a class of scientific managers tell the workers how to do it - while Deming believed in engaging the worker in the work.

If Kanban comes out of the philosophy of Taylor, then having your process designed by "experts" who don't want to deal with the fiddly bits of requirements, development, and testing, but who instead design a meta-process that turns software development into an assembly line, makes perfect sense. In that world, you might not call it "development" at all, but instead something like "Workflow" or "Work Products." (Notice issue number one, above.)

If, however, software development is actually knowledge work, which requires the whole person to be engaged, and can be done better or worse -- well, then, hopefully, we'll use the work of Taylor as either a door-stop or a cautionary tale.

5) The Kanban movement just isn't interested in discussing testing. I've brought the issue up several times on this list, and get a number of non-answers. That could be because the list members haven't really done much development. Or it could be that they are working on internal applications, where if you type in an invalid entry, the VP of Finance can say "use Internet Explorer Seven ONLY" or "if you want your reimbursement check, ignore the bizarre error, click the back button, and enter it correctly!" Or they could be working on very small, non-connected systems where the testing burden just isn't very high.

But consider a real project - a large software project, not something a pair of developers can bang out in three or six months; a project where you want end-users to pay out of pocket, fall in love, and recommend it to a friend. Well, a big part of what I do is risk management, and I see continuous deployment with a simple CI suite as naive, perhaps even reckless.

So I see Kanban/deploy per feature moving from limited environments where it can work to general acceptance, and in that, I see serious risk.

Note: In North America, we like our westerns - with good guys in white hats and bad guys in bandannas. It would be all too easy to paint the entire Kanban-for-software community as "bad." In reality, the ideas are a mixed bag that can be helpful in some environments. Some members of this community are strong systems thinkers who have good ideas, and who can separate when an idea might work from when it might not, taking in actual feedback and adjusting. Sadly, in general, due to over-hype, I have a final concern ...

6) Some people will actually listen to these jokers. We'll see a lot of hype about Kanban, there will be Kanban certifications, a Kanban alliance, and "Kanban conversions." There will be Kanban instructors, tutorials and lots and lots of books.

And, two years from now, or perhaps five or ten, I expect that a lot of companies will have experienced some critical failures and have a code mess all over the floor. Meanwhile, the consultants will have moved on, embracing and selling a new process - perhaps 5s, or Kaizen. It may not be Japanese at all; it may come from Northern Europe.

Let us all honestly hope that I am wrong.

Tuesday, September 15, 2009

Why is QA Always the Bottleneck?

"Why is QA always the bottleneck?" is the second in a series on how to deal with unfair test questions; it is up this week on SearchSoftwareQuality.com. (Free registration required.)

The next in the series will probably be "How long will testing take?", but I'm curious what you think. What questions do you struggle with, and what interesting answers do you know?

Thursday, September 10, 2009

Life is short - live well

I was reading The Secrets of Closing Sales yesterday and was struck by these lines:

Nothing in the world can take the place of persistence.
Talent will not; nothing is more common than unsuccessful men with talent.
Genius will not; unrewarded genius is almost a proverb.
Education will not; the world is full of educated derelicts.
Persistence and determination alone are omnipotent.


I could nitpick some of the words of the quote - but the spirit - that consistency and dedication will win in the long run - is something that resonates with my experience.

Then, later, Jason Huggins pointed me to this blog post by the creator of WordPress. In it, Matt points to this blog post by Tim Ferriss that is a gentle introduction to the writing of Seneca.

It's one of the most inspirational things I have read this year.

Go ahead, invest thirty minutes in Seneca. Breathe it in. I believe you'll find it time well spent.

What am I saying when you cross the initial quote with Seneca's commentary? Well, yes, persistence matters. Yes, if you try again and again, you may succeed where others will fail. Just be careful that you don't climb the ladder of success, only to find that it was leaning against the wrong wall.

Wednesday, September 09, 2009

Test Management Tools

All right, folks, I'll admit it.

I'm not excited about test management tools.

Oh, you could argue that I should be. After all, Test Management tools are purchased by test managers and executives. Test managers and executives have money; they control the budget and decide who goes to what training when. Finding someone's pain point - and taking the pain away - is a perfectly legitimate business strategy. (If they have money to spend, why, that's even better, right?)

Yet I'm still not excited. Why?

Well, let's take a frank look at the thinking behind a test management tool, by which I mean something specific: a keeper of 'test cases', and a tracker of which test cases have been run against which codebase.

It starts with this thinking:

(A) We can define all our 'test cases' up front,
(B) When those test cases pass, our codebase is 'good' (or, alternatively, when some fail but some decision maker decides to ship anyway),
(C) /Recording/ which test cases have run, and which are yet to run, in precise detail, has some value in and of itself


I reject the premises behind all of these arguments.

Here's an alternative, which we use at Socialtext:

1) Create a single wiki page (a versioned, editable web page) for a release
2) Mark down each type of testing you want to do in every significant combination
3) For example, break the app by major piece of functionality, then further by browser type
4) Add all the automated suites or unit-test results if those matter
5) Have the technical staff 'sign up' for which pieces they will test
6) When testing on a component is completed, the tester writes 'ok' and the bug numbers he found - or perhaps 'skipped' and the reason why. (There's a sketch of generating such a page below.)
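
To make the shape of that page concrete, here is a minimal sketch of generating the skeleton in code. The components, browsers, and layout are hypothetical stand-ins, not Socialtext's actual lists or wiki syntax:

    # Hypothetical component and browser lists, for illustration only.
    COMPONENTS = ["Login", "Editing", "Search", "Notifications"]
    BROWSERS = ["IE7", "Firefox 3.5", "Safari 4"]

    def release_dashboard(release):
        """Emit a one-page test dashboard skeleton for a release.
        Testers sign up by filling in 'owner', then record 'ok' plus
        bug numbers (or 'skipped' and the reason) under 'result'."""
        lines = [f"Release {release} Test Dashboard", ""]
        for component in COMPONENTS:
            lines.append(component)
            for browser in BROWSERS:
                lines.append(f"  * {browser}: owner=____  result=____")
            lines.append("")
        return "\n".join(lines)

    print(release_dashboard("3.4"))

The point is not the script - the page takes minutes to write by hand - but that the whole release fits on one page that everyone can edit.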

For what it's worth, we've been doing this at Socialtext for nearly two years, since before I was hired. We are constantly tweaking the process.

This one-page overview is a higher-level view than a test management tool might provide. It shows you what matters - the failures - not 5,000 "ok" results. It assumes that the test ideas are located somewhere else that the tester can find if needed. It assumes the tester actually did the testing and leaves open the possibility that the tester can explore the functionality. It leaves the tester responsible for what 'ok' means, instead of a spreadsheet or document.

This isn't a brand-new idea; James Bach recommended something similar in 1999, called a "low-tech testing dashboard", only he suggested it be done on a whiteboard. Other people have suggested using a spreadsheet, but that has versioning and read/write problems.

A wiki is just one more step forward; it provides version control, transparency, and creates a permanent artifact that could be audited. In my mind, this provides some of the benefits of test management tools with much less time investment.

So no, I'm not excited about most test management tools on the market. In many cases, I am suggesting, they swat a fly with a sledgehammer. Yet I recognize that test managers and executives have legitimate problems. So let's not rush off to build something to get money; let's come up with real solutions and see if the money flows from there.

Who's with me?

Friday, September 04, 2009

Best New Software Test Writing

Over the summer, I've noticed a trend that bothers me just a little.

Cem Kaner hasn't blogged in months; James Bach hasn't blogged in weeks. Michael Bolton is blogging sporadically; Elisabeth Hendrickson is blogging very occasionally. Ben Simo hasn't blogged since February.

Of the people on my blogroll, only Adam Goucher is consistently writing new blog material.

Now, there may be good reasons for this. The people on my blogroll are mostly independent consultants; perhaps the economy is picking up, and they are so busy that blogging is the first thing to go. Perhaps they are focusing on Twitter - or focusing their writing on a book. I don't know.

What I do know is that when I click through my blogroll, I'm not seeing a lot that is new.

So I went and asked the Writing-About-Testing Yahoo group for some recommendations; here are a few we came up with:

Michelle Smith
Pradeep
Catherine Powell
Marlena Compton
Lanette Creamer
Geordie Keitt

Yes, getting to the point where you are known by first name only is a compliment, and yes, that's the same Lanette Creamer whose paper "Testing for the User Experience" won the best paper award at PNSQC 2008. (For those who live near Portland or need an excuse to make the trip: Lanette and Marlena are both speaking at PNSQC this year.)

In addition, all of the students of the Miagi-Do School of Testing happen to have blogs. That is no accident. These are people I personally vouch for as having an interest in, and passion for, software test excellence. While some have English as a second language and are learning to communicate better (as we all are, right?), they sharpen those skills through blogging. Check them out, please:

Justin Rohrman
Ajay Balamurugadas
Markus Gaertner
Jeroen Rosink

Update: I've also been told that David Christiansen is blogging again. I went and checked and his recent posts have been very tester-centric. Yay!

Wednesday, September 02, 2009

September Software Test & Performance

I just got my copy of the September Issue of Software Test and Performance in the mail yesterday.

Yes, I got a September magazine on September first. Not August 15th. Not October 5th. The timing is actually right. Amazing.

The theme is outsourced testing, and yes, Chris McMahon and I have a column on page 8. (And yes, we listed The Boutique Tester as one model of test outsourcing.)

If you register, you can download the PDF - or you can read the article directly on the web.

The new, re-tooled STPMag.com has a comments feature, so please, feel free to put comments up here or on the website.

We're working on a column on coverage right now; if you send us your thoughts early, you could help make a better column ...

BONUS: This week's InformationWeek had a back-page editorial on outsourcing; I thought you might like to compare and contrast it to what Chris and I did for ST&P.

Tuesday, September 01, 2009

Music to test by

About a year ago, Danny Faught and I team-authored an article on music to test by for the Association for Software Testing's magazine. Sadly, they had a change in editorship - well ... from having one to not having one. (It is a volunteer position.)

So the article was never published. Meanwhile, I just got an iTunes gift card and find myself searching for music to test by.


So instead of listening to me pontificate, I am curious: Do you listen to music while you test or code? (Or do you have any music playing in the background while you pair or collaborate?)

I've found that movie soundtracks often work well because they are /designed/ to be on in the background. But I'm curious what you think ...

Scholarship to Software Testing Club!

Do all those paid membership sites get you down?

Do you have a compelling reason that $50 USD per year is too much to pay?

I've provided a scholarship for Software Testing Club. You can tell them why you are worthy and try to get the scholarship yourself.

Good luck. And don't say I never gave you nothin'.

:-)

Saturday, August 29, 2009

Test Challenge - Free

Would you like some test training absolutely free?

Free like water, free like air?

Often, the best way to get it is to help someone else develop training material. To test the training material, so to speak.

Well ...

I'm developing a test challenge that requires certain physical equipment. Before I order the equipment, I'd like to try it virtually, over email, or, more likely, chat programs.

I expect it would take 15 minutes to one hour of your time - it all depends on how deep you'd like to go. If you are interested, drop me an email - matt dot heusser @ gmail dot com.

If you have questions about it you think many people might have, please feel free to leave a comment.

UPDATE: At this time I have closed the challenge. (The early bird, as they say, gets the worm.) However, I will be doing it live, in person, with real equipment at the Software Test & Performance Conference 2009, in October in Boston. Drop by one of my sessions or shoot me an email.

Friday, August 28, 2009

Interlude



Taken from UrbanjungleComic.com, which has some funny stuff. In my version of Firefox, only four panels - the leftmost two panes, top and bottom - display. Ironically, I think the joke works just fine ...

Thursday, August 27, 2009

How do we see ourselves?

An actual post to a forum I saw yesterday:

Hi
I am new to software testing and want help with learning QTP. I am based in (city) and looking for mentor.

Thanks
(name)


Now, I'm not trying to insult anyone. But imagine, for a moment, this appeared on a Carpentry forum:


Hi
I am new to carpentry and want help with learning The Hammer. I am based in Detroit and looking for mentor.

Thanks
Bob The Builder


Or perhaps plumbing:

Hi
I am new to plumbing and want help with learning The Wrench. I am based in Chicago and am looking mentor.

Thanks
Joe the Plumber


Now, I can forgive the poor English. English probably isn't the author's first language, and he is actually working hard to translate each word. I have to respect that.

But will learning QTP teach you to test?

Why are we so reluctant to say, as a community, that we want to get good at testing, that we want to understand and predict failure modes, that we want to get good at risk analysis and triage?

Is it because recruiters scan for buzzwords? Because 'testing' alone doesn't get us in the door; we 'need' to know Quick Test Pro, or Java, or SQL, or C#, or FitNesse?

Don't get me wrong. Tools are important, but they are secondary. We need to change the debate.

I'm open to new and interesting ideas on how to change the debate. What do you think?

The testing renaissance

I just posted this in the Software-Testing Yahoo Group; I thought it also applies to Creative Chaos:


--- In agile-testing@yahoogroups.com, "woynam" wrote:
>
>
> It never ceases to amaze me the tremendous contribution
>that Smalltalk, and the Smalltalk community, has provided
>to our field, especially considering the small penetration
>that the language achieved.
>

Indeed. This reminds me of an old Paul Graham essay where he pointed out that the Renaissance basically started in Florence, Italy.

What was it about Florence, that it generated more than its fair share of geniuses per capita? Was it something in the water? Probably not, because Florence in 1000 AD and Florence in 1900 AD did not have that level of success.

What was it then?

Something about the culture of collaboration and sponsorship, I think. The Florentine middle class, who were allowed to make big piles of money and keep it in the 1200s (from silk, IIRC), went on to become upper class a few hundred years later and sponsor artists - and, eventually, the knowledge workers, the da Vincis, had the opportunity to pursue a life of innovation and creation.

If you want a similar story, look into Xerox's Palo Alto Research Center (PARC), which invented Ethernet, the windowed operating system, personal computing, and object-oriented programming - ideas picked up by Apple and the Smalltalk folks.

We may be reaching a similar place in software development and testing. Better yet, it can be led by practitioners who also do. My greatest concern, at this point, is this idea of dogma and belief - e.g., that Agile-Testing is or is not this specific thing - without any feedback or evaluation of whether that thing works, in what environments, and how it could be done better.

We may be getting past that. And I think that is a good day, indeed.

Regards,

--matt heusser

Monday, August 24, 2009

Last Agile 2009 Interview - Mary and Tom Poppendieck

And InformIT just published the last interview in the series, with Tom and Mary Poppendieck on lean thinking.

For a list of all my InformIT articles, you can refer to my Bio Page on Informit.com.

Who is the kid in that picture, anyway? Oh, it's me. Gosh, that's old, and I haven't been active in Civil Air Patrol for years. Perhaps it's time for me to revise that old bio ...

Why I /like/ Behavior Driven Development

Long-time readers of Creative Chaos will know that I'm not a big fan of X-driven-Y processes. Sure, Test-Driven Development was great, but now we've got so many abbreviations poured on top that things are getting a bit silly. (For the record, I think X-driven-Y jumped the shark at Bacon Driven Coding, but that's just me talkin')

But, after some time spent opposing Behavior Driven Development as a "bunch of nothing," I have to admit, I do believe I was wrong. Let me explain.

Several years ago, around 2005-2006, I noticed a disturbing trend of super-isolated unit tests that weren't actually testing the return results of functions - they were instead testing what the function was doing. So your function would call int() once and printf() three times - that would be the test. According to the unit testing zealots, actually having real objects, connecting to a real database, etc. - well, that was "not a unit test."

I found this resulted in real gains in the /design/ of the software, but the regression test suite it produced - not so much. The suite was brittle and not good at catching bugs. I found ways to inject bugs the suite would not catch, or to refactor the code so that it worked but the suite would trip an error. (For example, change from three print()s to instead build up a combined string, then just print() once. The code still works; the regression test suite registers a 'failure'. There's a sketch of exactly this below.)
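
To make that failure mode concrete, here is a made-up Python sketch of the same trap (the code in question wasn't Python; this is my illustration, not code from any actual suite):

    from unittest import mock

    def report_v1(items):
        # Original implementation: three separate print() calls.
        for item in items:
            print(item)

    def report_v2(items):
        # Refactored: identical output, built as one string, printed once.
        print("\n".join(items))

    def interaction_test(report):
        """A mock-based 'unit test' that pins down HOW report() works -
        how many times it calls print() - not WHAT the user sees."""
        with mock.patch("builtins.print") as fake_print:
            report(["a", "b", "c"])
        assert fake_print.call_count == 3

    interaction_test(report_v1)  # passes
    interaction_test(report_v2)  # AssertionError: a 'failure' with no bug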

My colleague Sean McMillan and I developed a few cautionary tales on this form of ultra-low-level "testing" and presented them at the Google Test Automation Conference in 2007:



Our conclusion? This is interesting stuff, but we wouldn't call it "testing" - perhaps "isolation-based design" would be more accurate.

So this intersects nicely with Behavior Driven Development - specifically, the flavor of BDD that gave rise to a /behavior/ framework called RSpec.

You see, under RSpec, you don't call it a test. You talk about the /behavior/ of the software at a low level, replacing words like "test" and "assert" with "should" and "ensure." BDD is about design and doesn't claim to be about testing.

Sound familiar?

So, I think this is huge. Some of the BDD people took the same observations I did about low-level testing and design and did something positive about it. David Astels, I salute you.
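
RSpec itself is Ruby, but the vocabulary shift translates roughly like this - a Python sketch of my own, not RSpec's actual API:

    class Stack:
        """Toy class under description."""
        def __init__(self, items):
            self._items = list(items)

        def pop(self):
            return self._items.pop()

    # Test-speak: inspect the code.
    def test_pop():
        assert Stack([1, 2]).pop() == 2

    # Spec-speak: describe the behavior. Same check, but the name reads
    # as a statement of design intent rather than an inspection.
    class DescribeStack:
        def it_should_return_the_most_recently_pushed_item(self):
            assert Stack([1, 2]).pop() == 2

    test_pop()  # passes
    DescribeStack().it_should_return_the_most_recently_pushed_item()  # passes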

Now, there is a different flavor of BDD, one that is higher-level, where the requirements are expressed as a specification of the form "Given ... When ... Then." If the team has objects with concrete nouns ("the customer", "a membership packet") and verbs ("requests"), it's relatively easy to automate those tests, expressed in near-English.
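
For instance - with nouns and verbs invented purely for illustration; a real team would wire the English up through a tool - the mapping might look like:

    class Customer:
        """Toy domain object with the concrete nouns and verbs the
        near-English specification refers to. Entirely hypothetical."""
        def __init__(self, cart):
            self.cart = cart
            self.offers = []

        def requests(self, action):
            if action == "checkout" and self.cart:
                self.offers.append("a membership packet")
            return self

    # Given a customer with a full cart
    customer = Customer(cart=["widget"])
    # When the customer requests checkout
    customer.requests("checkout")
    # Then a membership packet is offered
    assert "a membership packet" in customer.offers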

From what I can tell, the jury is still out on BDD at the customer level. A few people I respect (including Chris McMahon) are doing it. I'm cautiously optimistic, in that I suspect the process will have some value, but I'm not exactly sure what.

But, hey, as you can tell from above, I've been wrong before.

So - is anyone here using BDD at the higher levels? What do you think of it? And how long have you been doing it? I'm curious who has used it for more than a couple of years in a row, and what tweaks are required in the process as it gets older.

Friday, August 21, 2009

Still More Agile-2009 Preview

Yet another article - this one an interview with Mike Cohn of Mountain Goat Software on user stories, Scrum, and succeeding with Agile.

Here's the interview.

So far, these interviews have been commissioned, themed around Agile 2009. But that brings up an interesting question: if I could interview anyone of note in the world of software technology (or general IT, if you want to go broad), who would you like me to interview? And what would you like me to ask?

If there's enough interest, I can put a surprisingly large amount of energy into landing an interview. Just sayin' ...

Tuesday, August 18, 2009

... and another Agile 2009 Interview - Scott Ambler

The interview series about Agile2009 continues. Next, I interview Scott Ambler, asking him about his Agile Maturity Proposal, Agile Methods at IBM, the slow death of technology magazines, and more.

You can even read the complete article without any registration. Now that's refreshing!

Unfair test questions

Have you ever seen the cop shows where the good guy asks the criminal "What were you thinking?" The best answer I have ever seen is "Well, I wasn't thinking."

The thing is, it wasn't really a question at all - it was a statement, something like "There is no good reason for you to have done this. What is your reason? Huh? Don't have one? I thought so."

And, while you can understand the motivation of the police officer or prosecutor for asking, I have to admit, I always feel like it's a cheap shot.

We have these questions in software testing ("Why is QA always the bottleneck?") - and I've been getting them for years. So often, in fact, that I have a few stock answers.

After I discussed this with my friends at SearchSoftwareQuality.com, they asked me to write an article to cover those impossible test questions - and they just published it today:

How to answer unfair test questions

(Free registration required.)

I hope you find it helpful. If you are interested in more, please consider leaving a rating on the site; it could lead to a short series.

Friday, August 14, 2009

Two Laws and a new article

I put this out in a private correspondence yesterday and thought it was worth repeating here:

Heusser's first rule of ethics: When someone ends a proposal with the statement "... and it's all legal!" they are saying that because it probably /should/ be illegal. Don't work with them.

Heusser's first law of guru-ness: To be a guru you don't actually need to be smart, insightful, or even able to write very well. All you need is to work in a field that has high turnover and a general inferiority complex, work on a sticky meme, be single, and be willing to devote your nights and weekends to self-promotion.

Let me add: This doesn't mean that all people who talk about software testing or development are charlatans, crooks, liars, or not very bright. Far from it. I just mean to say that we can't sit back and suck in ideas uncritically. We'll have to actually examine the arguments about our field, hold them up to the light of day, challenge them, and see if they stick. To put it differently: we have to test the ideas in software testing. I wouldn't have it any other way; would you?

Hey, speaking of gurus, Informit.com continues to publish interviews I did with speakers at the Agile2009 conference. This next one is with Gerard Meszaros, author of "xUnit Test Patterns". In it, I ask about developer-facing tests, how they relate to customer-facing tests, and the future of Agile-Testing. You can read the interview here.

My colleague, Markus Gaertner, continues to be of great help in creating and reviewing these documents. In this one, he contributed a large section of the introductory paragraph. Markus is a student of mine in the Miagi-Do School of Software Testing - which is not a paradigm but an actual school. I run Miagi-Do free, non-profit, and non-commercial. I have no statistics on how Miagi-Do increases job prospects or leads to raises. Instead, my students actually /like/ to do testing and want to get better at it. More about that some other time.

Tuesday, August 11, 2009

Becoming a software testing expert

SoftwareTestingClub.com ($50 USD/year paid registration) has been having a discussion on "Becoming a testing expert" in its forum lately. A number of the comments were very insightful and interesting. I put out a short follow-up reply that I thought might be helpful to Creative Chaos readers:

I've heard it said that you can tell a newbie because they want to be told what to do. You bring them in to remodel your kitchen or write your software (or maybe test it), and they ask for a spec or maybe a test plan. When this is kinda vague, they get mad at you. This is a 'contractual' worldview.

A different worldview is that you are discovering the requirements together. The craftsman doesn't ask for a spec; instead, he asks a bunch of questions, and eventually makes a prototype: "Is this what you want?"

The first prototype is not a solution; instead, it's designed to provoke a reaction - "No, but now that I see that, I know what I really want" - and the game continues until the prototype is close enough to the desired functionality for work to continue.

That's how I like to approach testing - as a collaborative risk management exercise. Does that make me an expert? Not alone, and that's really for you to decide in your own mind, anyway. But what I can tell you is that when people whine that the requirements are too vague, or that they should have been involved up front, or that they need a test plan ... well, you can probably guess what my initial response is to that kind of rhetoric.

But that's just me talking. YMMV.


This idea lines up with my concept of the Boutique Tester, in that you have the contributor taking 'the bull' of the test process by the horns and shaping a test strategy for each engagement. It is far from complete. What do you think?

Monday, August 10, 2009

Clearing the backlog

The good news is that Informit.com has just started publishing a backlog of articles from me, including "A Chat with Alistair Cockburn."

Gosh that's an old picture of me.

Swamp-Ed

I'm afraid we've got a serious push at work, and my brain isn't getting the breathing room it needs to generate blog posts. (Not that I would have time to write them.)

In the little spare time I have, I still do a little bit of writing to relax, and I'm working on a piece about impossible questions - things like "Are we ready to ship yet?" (which you can't know, because you couldn't have completely tested the system) or "why is QA always the bottleneck?"

If you'd like to review my work before it's published, drop me an email: matt.heusser at gmail dot com, or leave a comment below with your email address.

More to come ... but probably not this week.

Friday, August 07, 2009

Let's overflow the stack - II

Last week I suggested that readers call in to the Stack Overflow podcast and ask testing questions.

And Joel and Jeff answered them - or, at least, they answered Adam Goucher's question.

Adam's question was about hiring testers. As an answer, Jeff and Joel mostly talked about test philosophy - what kind of person makes a good tester vs. what kind of person makes a good developer. (If you want to jump to the question, just move the time slider to 30:10.)

After answering the tester question, Jeff and Joel answer a question about when to standardize. They suggest waiting for healthy competition in the marketplace and a clear 'winner' from people actually doing it. The direct application to the ISTQB is an exercise for the reader.

For what it's worth, I don't have a problem with ISTQB or CMMI competing in the free marketplace of ideas; I object to the suggestion that the debate is /over/ and ISTQB and CMMI are "the way" to do testing or process improvement.

But seriously, check out the podcast. It's good.

Wednesday, August 05, 2009

August STPedia is out!

Hey folks, the August STPedia is online: go to www.stpmag.com and click on "Download now!" in the middle-right.

This month Chris and I cover load testing, but our column was made an online special feature so the issue could cover the upcoming Software Test & Performance Conference in detail. You can read our column online here.

Plus, I will officially be at the Software Test & Performance Conference, Oct 19-23 at the Hyatt Regency Hotel in Boston. In fact, the full conference program is out.

Yes, I will be in Boston in the fall, along with Jon and James Bach, Scott Barber, Michael Bolton, and the rest of an amazing speaker line-up. To commemorate the occasion, I've selected a short video about going to Boston in the fall. No, really:



Care to join us?

Friday, July 31, 2009

The Meme's the thing

At the very end of yesterday's post I mentioned Yo Dawg. Yes, Yo Dawg is important, in that it's a meme.

So what's a meme(*)? It's an idea - a concept that spreads from person to person. "All Your Base Are Belong To Us", LOLCats, and De-Motivators are all memes.

What's interesting about these memes on the intarwebs is not just that people see them and laugh - or that they see them, laugh, and forward them to friends. It's that they make /their own copies/ - their own riffs on the idea - and add to a body of knowledge. And, thanks to Google, a great deal of them are indexed. Thanks to PageRank, the better ones come to the top.

Ideas in software development are memes. Test-Driven Development is a meme; Context-Driven Testing is a meme. Acceptance Test Driven Development is a meme.

JB Rainsberger isn't just interesting because he's contributed to the software development body of knowledge; it's because he read an XP book, never met the original authors, yet ran with the idea and helped popularize it - creating an entire series of events called "XP Day" all over the world. JB was infected by a meme - and took it on enough to make it his own.

Seth Godin recently spoke on memes at the Business of Software Conference:


For example, the canonical meme from the 20th century was sliced bread. For someone to use the phrase "... the best thing since sliced bread" implies that sliced bread must be pretty good to start with, no?

Here's one item from Godin's message: some memes are destined to win, regardless of whether or not they work. Test automation, for example, is very appealing to developers, because automation is what they do. It's no surprise that, when devs look at testing as a computer science problem, automation is the first thing to come to mind. So we have generation after generation talking about test automation as the be-all and end-all of the test process, without ever having actually studied the very human, cognitive, communication-centered process of software testing, nor having done any research on the failure modes of software defects.

Thus, after the wild (and deserved) success of test-driven development, we have the acceptance-test-driven-development meme creating immediate success with much less tangible evidence.

This should be no great surprise; for twenty years Westerners have been conditioned to two similar memes - that housing prices always go up, and so does the stock market. These have near-universal appeal. Somewhere out there, right now, an investment counselor is suggesting a couple put away 20% of their income in stocks as a retirement nest egg, and a real estate broker is suggesting another couple purchase the biggest home they can afford, because, after all, salaries go up over time, right?

I believe that the communities I belong to - including the folks who read this blog - have ways to test software that are significantly better than the status quo, and we have ways to communicate them and techniques to teach them. Yet if our testing ideas are memes, we need to think about ways to package and present them to win. I believe research and experience can /help/, but often humans don't make decisions rationally.

I'm not looking for labels. Agile is a label; anyone can claim it. I want memes we can grab, embrace, make our own, and share. So how can we connect our ideas to make them memes that are viral (or perhaps, "more viral")? This, I believe, is a conversation we should be having.

I do not claim to be a master of memes, except perhaps of the kind "I'm on a boat." "The Boutique Tester" is probably my most recent idea with traction. (Too bad I have no free billable hours.)

What are your ideas, and what do you think?

--heusser
UPDATE: Do you know people who quote "The Holy Grail" from Monty Python for no apparent reason? Maybe you're one of them? /That/ is a meme.

(*) - The term was coined by a British gentleman named Richard Dawkins in his book The Selfish Gene.

Thursday, July 30, 2009

Let's overflow the stack

I've previously mentioned StackOverflow.com - a website with answers to technology questions. It is a joint venture of Jeff Atwood and Joel Spolsky, two of the most popular bloggers on software development. To promote the site, the two also have a podcast where they discuss current trends in development and issues with the site.

They also answer questions; to get yours answered, you can call a phone number: 646-826-3879. No emails - you have to actually pick up the phone; they record your voice and put it into the podcast directly.

What does this have to do with testing?

Adam Goucher just put out a post suggesting that we get some coverage of testing on the Stack Overflow podcast by, well, calling in and asking for it.

Adam has already done his bit, so I called in and did mine:

Hello. This is Matt Heusser, and I'm a tester in West Michigan. Historically, Joel has been a proponent of a 'tester' role and a little suspicious of Test Driven Development. As Stack Overflow is a browser-based app with user-created content that supports a half-dozen browser variants, I'd like to ask you to describe your strategy for customer-facing tests - and maybe tell us about a few lessons you've learned along the way. Thanks.

Care to join us?

UPDATE: Yo Dawg, I herd you like testing, so I put testing in your podcast so you can test while you podcast! ( Explanation here )

Tuesday, July 28, 2009

Beautiful Testing - III

What, you haven't purchased an advance copy of Beautiful Testing yet?

I can't blame you. How do you know if it's going to be any good?

Well, one way is to read the writing of the co-authors and see if you find it valuable. I will introduce a few here ...

Adam Goucher has a nice blog post introduction to each chapter.

John Cook is writing about testing a random number generator, and also has an interesting blog.

Lisa Crispin, yes, co-author of the Agile Testing Book, has her own blog.

And there's Scott Barber, plus Chris McMahon, and also Tim Riley of the Mozilla Foundation.

I hope getting the blogs for free helps you make up your mind if the book is a good investment of your time and money. Me, I've already invested a great deal of time in it, so here's hoping ...

Monday, July 27, 2009

Of "Jelled" Teams

I've been at Socialtext for about a year and a half now, and I just realized that so has everyone else on the engineering team; our two short-timers are Jeremy and Audrey, who are just coming up on their one-year anniversaries.

And we actually know each other; we can give each other a hard time and actually debate ideas on merit instead of working hard to appease each other. I just read a recent note on the 37Signals Blog to that effect and it resonated with me.

And yet, given a random book on methodology or software management, you are unlikely to find anything on longevity and teamwork besides, if you are lucky, a few clichéd team-building exercises or perhaps a passing reference to forming, storming, norming, performing.

That's just ... sad. Perhaps it is something /I/ need to start talking about more often.

Friday, July 24, 2009

Cargo Cult Everything

ScrummerFall.
Cargo Cult Scrum.
Big Agile Up Front.
Cargo Cult Extreme Programming.
Fake Lean.

For some reason, there seem to be a lot of people who "just don't get it." What's that all about?

Well, my first answer would be that our development methods are often characterized by faith. People who have a faith /believe/. Hearing failure stories is disheartening. So it's much easier to simply pry and pry until you find something you don't like, then declare the person wasn't /really/ doing Agile, or XP, or Lean anyway. The technical term for this is the "No True Scotsman" fallacy, where you claim no Scotsman could have committed the crime. Then, when it turns out the criminal was Scottish, you reply, "Well, clearly, he isn't a true Scotsman, as no true Scotsman would commit such a crime."

But I suspect there's more to it than that.

My opinion, my very strong opinion, is that companies exist inside a system of forces. For example, consider two software teams.

Team A works for an IT shop of a large company. The software they write is developed and released only to employees - people inside the company. The VP of accounting can send an email to the entire company that says "From now on, expenses will be tracked with our ExpenseTracker application." Additional money spent on making the software pretty will not help adoption; everyone has to use the software to get reimbursed. The small projects are relatively isolated and only have to support one browser.

Team B works for a commercial, web-based software company. They make money by viral adoption of services. They need to support every popular browser - and every popular browser version - on the planet. The software does a lot of complex GUI things that use a large amount of JavaScript or Flash.

Now imagine you are a 'coach' with experience with company A, hired to help team B.

You come on site. You learn the process of team B.

Why, team B has this complex regression test process - what waste! We need to eliminate that, move to weekly iterations, get the logic out of the JavaScript so that we can test below the GUI using some sort of business-logic application.

Sure. You could try that. And fail. And someone will come along and say you are doing 'cargo cult continuous release' or "doin' it wrong" or whatever. After all, what you /really/ need is scrum, or XP, or DSDM, or ...

Wait. Stop. Please. Here's a crazy idea:

Scrum, XP, Lean, Continuous Deployment, and Kanban all evolved to solve a specific problem that your team may or may not have. They also introduce problems that may or may not be a big deal for your team.

So to understand what improvement means, we actually have to model the system of forces, come up with a strategy, simulate it, see if it will make sense, try it, then adjust it over time. That's what a great deal of this blog is about.

But Jerry Weinberg suggested we have a rule of three, and I've only come up with two explanations for ScrummerFall and the Cargo Cults. What do you think?

UPDATE: I think it's worth noting that in the example above, the consultant is not a liar, a faker, or a charlatan - he could be /wildly successful/ at Company A and yet fail at Company B. As an industry, we need to equip people to cross that chasm, instead of using universal labels and best practices. (Quick litmus test for a consultant: get to a specific recommendation, and ask when that idea might /not/ apply. If he claims it's universal, be very careful. I used to say the best practices I recommended were things like "wear deodorant", but Ben Simo pointed out that some people are allergic to deodorant ... :-)

A year of columns!

When I came to Socialtext, I heard a number of voices that said the company had a history of problems; but the guy who was trying to bring me in was Chris McMahon. And Chris has a reputation for honesty and congeniality that is simply unmatched in the field. Chris said it was a good deal; I believed him, I took the gig, and I am very glad I did so.

By working together every day, the familiarity I had with Chris soon turned into friendship. Two months into the gig, when I proposed a column for Software Test & Performance Magazine, I pitched Chris as my co-author. For some reason, the editor at ST&P went for it. (I suspect it was because of Chris.)

And for a year now, I've had the pleasure of putting out posts that say "hey, check out our newest column in Software Test & Performance Magazine (link to a PDF). We are on page 9." (Occasionally, page 10.)

Well, STPMag.com just did a major site redesign, and the articles are indexed and available on-line free. Just go to http://www.stpmag.com and type in "Heusser" in the search box - or Follow This Link.

If you want to read articles older than that, you can find an index of my publications on my (old) website. Yes, it needs a good Saturday afternoon's worth of updates, but in the meantime, that should give you roughly a short novel's worth of content to enjoy. :-)

Tuesday, July 21, 2009

A brief, unfair, and wrong history of developer-testing in the 21st Century

Step 1 - Be frustrated with the process heavy and skill-free testing done on most projects
Step 2 - View testing as a clerical process to be automated
Step 3 - Ignore the input of the skilled, competent test community about what testing actually is and their experience automating it
Step 4 - Invent TDD and Automated Unit Testing, a Real Good Thing
Step 5 - Extrapolate, by the same logic, that Acceptance Tests should be automated in the same fashion, even for GUIs
Step 6 - Try It
Step 7 - Fail
Step 8 - Repeat steps 6&7 if needed
Step 9 - Realize that some GUI-driving test automation makes sense in some cases, but it is, essentially, checking, not investigating
Step 10 - Ignore the testing community, who have been saying that for ten years
Step 11 - Declare yourself an expert. Try to angle for a keynote at Agile 200X.

The steps above are actually a composite of a number of people and ideas. But it's just maybe enough of an approximation to post ...

Speaking in Ann Arbor - July 22nd - Evening

I just signed up to speak on Agile Testing in Ann Arbor July 22nd, 2009.

The event starts at 6:15 and includes dinner for just seven bucks.

Here's my abstract:

How, then, should we test?
Traditional ("waterfall") development relies on a single test/fix/retest cycle at the end of the process. Agile and iterative development implies dozens of quick iterations per year - vastly increasing the testing burden. Matt Heusser will discuss the dynamics of software testing on agile projects and some of the more popular approaches, and finally lay out how his team does testing at Socialtext, including a brief demo of some of the dev/test toolset they have developed. Matt will make some bold statements about software testing that you may - or may not - agree with. The only thing he can promise is that you'll leave the room thinking - and you certainly will not be bored.

Friday, July 17, 2009

Meaningful Metrics

Recently, a few people have pointed to me as being completely opposed to metrics in software development or testing.

I wouldn't say completely opposed - for example, when the technical people gather their own metrics in order to understand what is going on and improve, really good things can happen.

No, I would say I am concerned about the use of simplistic measures that fail to measure the entire scope of work, or act in the place of some harder-to-measure thing(*), or lack construct validity(**).

Most of the ardent fans of software metrics like to quote Tom DeMarco and his book Controlling Software Projects - for example, "You can't control what you can't measure." Now, for years, I've been confused by these quotes, as DeMarco spent the second half of his career writing books that refute the premise behind that quote, such as Peopleware, The Deadline, Waltzing With Bears, and Adrenaline Junkies. He even titled one of them Slack. No, I'm serious.

So I always found the copious use of that one quote a little unsettling.

Well, folks, there's good news. Twenty years after he wrote "You can't control what you can't measure," Tom DeMarco just wrote a column for IEEE Software explaining his current thoughts. Here's a quote:

"My early metrics book, Controlling Software Projects played a role in the way many budding software engineers quantified work and planned their projects. In my reflective mood, I'm wondering, was its advice correct at the time, is it still relevant, and do I still believe that metrics are a must for any successful development effort? My answers are no, no, and no."

Now, DeMarco isn't saying that metrics and control are /bad/, as much as that they may not be the brass ring we should be striving for in software work.

So what should we be striving for? DeMarco appeals to us to shoot for the big idea - the multi-billion dollar concept, where if you blow your budget by 200% it still changes the world and enables you to retire.

Ok, fair enough; that's a decent chunk of the focus of this blog. But let's go back for a moment. What metrics do I like?

Well, for a software company, I do like revenue and expenses as metrics. They are real, hard things that have construct validity, they are not in the place of something else, and without them you go out of business. But that's not the only thing I like. What if we measured the number of heart attacks and divorces our teams experienced, and expected them to go down?

Now, you might have concerns about that for legal or PC reasons (discriminating against divorced people) - but I think it's really interesting as a thought experiment - the idea that the whole person matters. And at least one company has done it.

Bravo, Obtiva. Bravo.


--heusser
(*) - Classic example: lines of code used to approximate productivity
(**) - Not all "test cases" are created equal

UPDATE: I just had a conversation with Markus Gaertner, a German colleague I work with. He had never experienced this desire for metrics to evaluate and control that I discussed above. We talked for some time about dysfunction, and he did a blog post on it. My post certainly assumed a few bits of shared understanding about metrics - and I could be wrong about my assumptions. If you don't "get it" either, let me know in comments, and I can expand.

Monday, July 13, 2009

Beautiful Testing - II

Here's the first bit of my chapter of Beautiful Testing. I'd be interested in your thoughts ...

Peeling the Glass Onion at Socialtext
"I don't understand why we thought this was going to work in the first place" - James Mathis, 2004

It's not business ... it's personal
I’ve spent my entire adult life developing, testing, and managing software projects. In those years, I've learned a few things about our field:

(1) Software Testing, as it is practiced in the field, bears very little resemblance to how it is taught in the classroom - or even described at some industry presentations
(2) There are multiple perspectives on what good software testing is and how to do it well, which means -
(3) There are no 'best practices' - no single way to view testing or do it that will allow you to be successful in all environments - but there are rules of thumb that can guide the learner
Beyond that, in business software development, I would add a few things more. First, there is a sharp difference between checking[1], a sort of clerical, repeatable process to make sure things are fine, and investigating - which is a feedback-driven process.

Checking can be automated, or, at least, parts of it can. With small, discrete units, it is possible for a programmer to select inputs and compare them to outputs automatically. When we combine those units we begin to see complexity.

Imagine, for example, a simple calculator program that has a very small memory leak every time we press the clear button. It might behave fine if we test each operation independently, but when we try to use the calculator for half an hour, it seems to break down without reason.

Checking cannot find those types of bugs. Investigation might. Or, better yet, in this example, a static inspector looking for memory leaks.
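To make the distinction concrete, here's a minimal sketch - hypothetical code, invented for this example, not from any real product - of how every per-operation check can pass while the session-level behavior is broken:

    class Calculator:
        def __init__(self):
            self.total = 0
            self._history = []          # bookkeeping that grows with use

        def add(self, n):
            self.total += n
            self._history.append(('add', n))
            return self.total

        def clear(self):
            self.total = 0              # the bug: _history is never
                                        # released, so memory grows for
                                        # the life of the session

    # Checking: each discrete operation, in isolation, passes.
    calc = Calculator()
    assert calc.add(2) == 2
    calc.clear()
    assert calc.total == 0

    # Investigating: "use the calculator for half an hour."
    calc = Calculator()
    for i in range(1000000):
        calc.add(1)
        calc.clear()                    # memory climbs the whole time

Every check above goes green; only sustained use - or a tool that watches memory - exposes the problem.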

And that’s the point. Software exposes us to a variety of risks. We will have to use a variety of techniques to limit those risks. Because there are no “best practices”, I can’t tell you what to do, but I can tell you what we have done, at Socialtext, and why we like it – what makes those practices beautiful to us.

This positions testing as a form of risk management. The company invests a certain amount of time and money in testing in order to get information - which will decrease the chance of a bad release. There is an entire business discipline around risk management; insurance companies practice it every day. It turns out that testing for its own sake meets the exact definition of risk management. We'll revisit risk management when we talk about testing at Socialtext, but first, let's talk about beauty.

Tester remains on-stage; enter beauty, stage right
Are you skeptical yet? If you are, I can't say I blame you. To many people, the word "testing" brings up images of drop-dead simple pointing and clicking, or following a boring script written by someone else. It's a simple job, best done by simple people who, well, at least you don't have to pay them much. I think there's something wrong with that.

Again, that isn’t a picture of critical investigation – it’s checking. And checking certainly isn’t beautiful, by any stretch of the word. And Beauty is important.

Let me explain.

During my formative years as a developer, I found that I had a conflict with my peers and superiors about the way we developed software. Sometimes I attributed this to growing up on the east coast vs. the midwest, and sometimes to the fact that my degree was not in Computer Science but Mathematics[2]. So, being young and insecure, I went back to school at night and earned a Master's Degree in Computer Information Systems to "catch up", but still I had these cultural arguments about how to develop software. I wanted simple projects, whereas my team-mates wanted projects done "right" or "extensible" or "complete."

Then one day I realized: they had never been taught about beauty, nor that beauty was inherently good. While I had missed a class or two in my concentration in computer science, they had missed something I had learned in Mathematics - an appreciation of aesthetics. Sometime later I read Things a Computer Scientist Rarely Talks About by Dr. Donald Knuth, and found words to articulate this idea. Knuth said that mathematicians and computer scientists need similar basic skills: they need to be able to keep many variables in their heads, and they need to be able to jump up and down a chain of abstraction very quickly to solve complex problems. According to Knuth, the mathematician is searching for truth - ideas that are consistently and universally correct - while the computer scientist can simply hack a conditional[3] in and move on.

But mathematics is more than that - to solve any problem in math, you simplify it. Take the simple algebra problem:

2X - 6 = 0

So we add six to each side and get 2X = 6, and we divide by two and get X = 3. At every step in the process, we make the equation simpler. In fact, the simplest expression of any formula is the answer. There may be times when you get something like X = 2Y; you haven't solved for X or Y, but you've taken the problem down to its simplest possible form, and you get full credit. And the best example of solving a problem of this nature I can think of is the proof.

I know, I know - please don't fall asleep on me here or skip down. To a mathematician, a good proof is a work of art - it's the stuff of pure logic, distilled into symbols[4]. Two of the upper-division courses I took at Salisbury University were number theory and the history of mathematics, from Dr. Homer Austin. They weren't what you would think. Number theory was basically re-creating the great proofs of history - taking a formula that seemed to make sense and proving it was true for the value one. Then you prove that if it is true for any number N, it is true for N+1 - which means the next one is true, which means ... you get it. That's called proof by induction. Number theory was trying to understand how the elements of the universe were connected - such as the Fibonacci sequence, which appears in nature on a conch shell - or how to predict what the next prime number will be, or why Pi shows up in so many places.
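If you've never seen one, here's the shape of an induction proof, on a textbook example (the sum of the first N integers - my choice of example, not necessarily one from that course):

    \textbf{Claim:}\quad 1 + 2 + \cdots + n = \frac{n(n+1)}{2}
      \quad \text{for every integer } n \ge 1.

    \textbf{Base case } (n = 1):\quad 1 = \frac{1 \cdot 2}{2}.

    \textbf{Inductive step:}\quad \text{assume the claim holds for } n.
      \text{ Then } 1 + 2 + \cdots + n + (n+1)
        = \frac{n(n+1)}{2} + (n+1)
        = \frac{(n+1)(n+2)}{2},
      \text{ which is exactly the claim for } n + 1. \qquad \blacksquare

True for one, and true for N implies true for N+1 - that's the whole machine.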

And, every now and again, Dr. Homer Austin would step back from the blackboard, look at the work, and just say "Now ... there's a beautiful equation." The assertion was simple: Beauty and simplicity were inherently good.

You could tell in your own work, too, because the simplest answer was the correct one. When you got the wrong answer, your professor could look at your work and show you the ugly line - the hacky line - the one line that looked more complex than the one above it. He might say "Right there, Matt - that's where you went off the rails[5]."

By the end of the semester, we could see it too. For that, I am, quite honestly, in his debt[6].

Of course, you can learn to appreciate beauty from any discipline that deals in abstraction and multiple variables. You could learn it from chess, or chemistry, or aerospace engineering, or music and the arts[7]. My experience was that, at least in the 1990's, it was largely missing from computer science. Instead of simplicity, we celebrated complexity. Instead of focusing on value to customers, the more senior programmers wrote the complex frameworks and architectures, leaving the junior developers to be mere implementers. The goal was not to deliver value quickly but to develop a castle in the sky. We even invented a term, "gold plating", for when a developer found a business problem too simple and had to add his own bells and whistles to the system - or, perhaps, instead of solving one problem and solving it well, created an extensible framework to solve a much larger number of generic business problems.

Joel Spolsky[8] would call this person an "architecture astronaut", in that they get so abstract, they actually "cut off the air supply" of the business. In the back of my mind I could hear the voice of Doctor Austin saying "right there - there - is where your project went off the rails."

Ten years later, we've learned a great deal. We have a growing body of knowledge of how to apply beauty to development - O'Reilly even has a book on the subject. But testing - testing is inherently ugly, right? Aside from developer-facing testing, like TDD, testing is no fun at best and rather - have - a - tooth - pulled - with - no - anesthetic at worst, right?

No, I don't think so. In math we have this idea of prima facie evidence - that an argument can be true on its face and not require proof. For example, there is no proof that you can add one to both sides of an equation - or double both sides - and have the equation remain true. We accept this at face value - prima facie - because it's obvious. All of our efforts in math build on top of these basic prima facie (or "axiomatic") arguments[9].

So here's one for you: Boring, brain-dead, gag-me-with-a-spoon testing is /bad/ testing – it’s merely checking. And it is not beautiful. One thing we know about ugly solutions is that they are wrong; they've gone off the rails.

We can do better.




References:
[1] My colleague and friend, Michael Bolton, is the first person I am aware of to make this distinction, and I believe he deserves a fair amount of credit for it.
[2] Strictly speaking, I have a Bachelor's degree in Mathematics with a concentration in Computer Science. A concentration is more than a minor but less than a major, so you could argue that I'm basically a dual major - or argue that I'm not quite either one. The upshot of that was that I never took compiler construction, and, because of that, had an inferiority complex that fueled a massive amount of time and energy into learning. Overall, I'd say it could be worse.
[3] "Conditional" is a fancy word for an IF/THEN/ELSE statement block.
[4] I am completely serious about the beauty of proofs. For years, I used to ask people I met with any kind of mathematics background what their favorite math proof was. Enough blank stares later and I stopped asking. As for mine, I'm stuck between two: the proof of the limit of the sum of 1/2^N for all positive integers, or Newton's proof of integration, take your pick. (Rob Sabourin is one notable exception. I asked him his favorite, and he said he was stuck between two ...)
[5] No pun on Ruby intended. I am a perl hacker.
[6] That, and Dr. Kathleen Shannon, Dr. Mohammad Mouzzam, Professor Dean Defino, and Professor Maureen Malone.
[7] My co-worker and occasional writing partner, Chris McMahon, has a good bit to say about testing as a performing art. You should check out ... oh, wait, he left Socialtext and has his own chapter. All right, then.
[8] http://www.joelonsoftware.com/articles/fog0000000018.html
[9] I am a member of the context-driven school of software testing, a community of people who align around such ideas, including "there are no best practices" - www.context-driven-testing.com.

Friday, July 10, 2009

The Trick (A Rant)

If you are working within one company, doing internal development, getting user adoption is usually pretty easy. The Vice President of operations says something like:

"We wrote some software you need to process claims. Use it."

And people use it. They may not like it, but they use it.

Likewise, if you are making an application that will be /paid for/ by an executive, adoption is similarly easy. You sell the executive, he pays for it, a memo goes out that says "henceforth, all email will be done by Lotus Notes."

In both cases, you've got a monopoly.

But sometimes, you don't have a monopoly. Say you are selling software to individuals, or perhaps giving away a product or service for free in the hopes that it will be used so widely that the customer organization will want to purchase support - even if they already have some competing product.

In that case, in the words of jwz (mild obscenity warning after the link), you've got to make software people actually want.

It turns out, that's the trick. Make software people will actually want to use.

You say, "but Matt, that's so obvious!" - if it's so obvious, why don't more people do it?

Twitter and Facebook don't have workflow policies. They have open ended ways of helping people get stuff done.

Just something to think about.

Update: I could add that Facebook might not even help you get stuff done! Yet it stuck anyway. Other updates: Beautiful Testing II coming next week. As for the scholarship, talk to the people at SoftwareTestingClub.com; they've already got the money. :-)

Thursday, July 09, 2009

July STPMag is out -

The people at Software Test & Performance Magazine spent a considerable amount of time and effort re-designing the magazine - and it shows. The July issue is solid, and yes, our column still appears on page 10.

More than that, check out the new ST&P website, with more improvements coming in the months ahead.

Seriously, please, check out the column and let us know what you'd like to see in future months.

Thursday, July 02, 2009

Beautiful Testing - Part I

My chapter in the book Beautiful Testing: Leading Professionals Reveal How They Improve Software is nearly complete. In fact, you can pre-order the book from Amazon right now.

But before you buy it, wouldn't you like to know what I'm going to say?

For that matter, it's just pretty expensive to be a tester right now. Better Software Magazine just stopped complimentary print delivery, Software Testing Club is going to charge a membership fee, and now Matt wants us to buy a book. I can hear the chorus of "thanks buddy" in my head, believe me. :-)

I can understand if you're skeptical. Here's what I am doing to help:

(1) All my royalties for the Beautiful Testing book will be donated to a charity - Nothing But Nets - that purchases mosquito nets for Africans. In fact, so will those of every other author of the book.

(2) The Good Lord has been good to me. I'm going to purchase TWO memberships in Software Testing Club, and work with them to develop a competition to give the second one away.

(3) I've been working with O'Reilly, the publishers of Beautiful Testing. I can give away some of my chapter (for free) right now as a teaser, and more after publication.

Watch this space for my next post!

Monday, June 29, 2009

On Business Maturity

I just posted this to a private discussion list, and thought it was worth repeating here:

Chris McMahon wrote:
>For every company whose expensive Six Sigma project yields
>them no benefit at all, there is another company with no
>recognized quality process at all that succeeds wildly.
>


Have you ever studied Michael Porter's Competitive Strategy Model?

Porter - a Harvard Business professor - wrote that industries go through a transition from wild growth and no standards to maturity and eventual decline.


Companies competing in the growth phase compete by differentiation of /product/ (think the personal computer market in 1984). In the middle, standardization and consolidation occur, which is happening in the personal computer market right now. At the end, toward the right, you are dealing with commodities like gasoline or electricity that have no differentiation at all. Companies living in maturity and decline compete through standardization of /process/ and economies of scale.

Once in a while a disruptive innovation comes along, which can push the entire industry to the left. Consider, for example, book sales in 1993. Borders and Barnes & Noble were mature businesses. They had defined processes and metrics - and they aimed to turn the corner bookstore into a memory. If you looked at where people spent money, not what they said, nobody cared about the service at the corner bookstore - they wanted variety, comfy chairs, and decaf frappuccino mochas.

Then came amazon.com with a disruptive business model and a disruptive model of scale - pushing the industry to the left.

That's what lots of software does - it pushes stuff to the left.

And when you are competing on the left - if you are Apple in 1984 or Linus Torvalds in 1991 or Napster in 1998 - you don't need a great software development process. You need great ideas.

That's part of what bugs me about the discussion of software maturity. The real innovation and value isn't made in the land of maturity and standards. It's made in the untamed wilderness ... hmm.

I suppose you could call that 'Creative Chaos.'



Epilogue: So that's what I wrote to the discussion list, but this is my question to Creative Chaos readers:

Is what I wrote above the case? And if so, how should that impact the way we test software?

What do you think?

Friday, June 26, 2009

Corey Haines on Metrics

Corey Haines is more of a pure developer type who does test automation, and he is heavily involved in the "Software Craftsmanship" movement. He's made a bit of a name for himself by travelling around the country, pairing with and interviewing other practitioners who are serious about doing a good job in software work, with a development emphasis.

And he just released a video on metrics ...

Road Thoughts - Visible Metrics from Corey Haines on Vimeo.



Now, these are dev-facing metrics, culled from the codebase and automated tests themselves. You wouldn't have to enter these into a spreadsheet and email them to your boss once a week, nor would your company have to pay a few thousand dollars per person for a tool to do this for you. So it has certain inherent advantages over most test metrics.

That said, I like the general idea: That a practitioner would take a series of measures in order to personally understand and improve in his work.

This is very different from many metrics discussions, where it is assumed that management will consume the metrics for the purposes of evaluation.

The former has a good chance of working. The latter tends to introduce dysfunction, as the team will find ways to give management lots of whatever they are measured by, and this may or may not correlate to actual improvement.
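To make "a practitioner gathers his own measures" concrete, here's a toy sketch of the sort of thing you could run against your own codebase - the metric and the code are my invention for illustration, not Corey's tooling:

    # Toy dev-facing metric: test functions vs. production functions
    # in a Python source tree. Invented for illustration only.
    import ast
    import pathlib

    def count_functions(root='.'):
        prod, test = 0, 0
        for path in pathlib.Path(root).rglob('*.py'):
            try:
                tree = ast.parse(path.read_text(encoding='utf-8'))
            except (SyntaxError, UnicodeDecodeError):
                continue                # skip files that don't parse
            for node in ast.walk(tree):
                if isinstance(node, ast.FunctionDef):
                    if node.name.startswith('test_'):
                        test += 1
                    else:
                        prod += 1
        return prod, test

    prod, test = count_functions()
    print('%d test functions, %d production functions' % (test, prod))

The point isn't this particular ratio; it's that you can compute it yourself, in seconds, with no spreadsheet and no weekly email to the boss.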

Tuesday, June 23, 2009

So prove it!

When you are listening to a software development guru, do you ever get a strange, niggling feeling in your mind? Something like "if this guy is so awesome, why doesn't he go build something awesome instead of preaching to me?"

Now, let's be fair. A lot of the people who speak publicly about software development do have day jobs and do build working software. The vast majority of them have done some software dev or testing at some point.

But consider Eric Ries of IMVU. He's given talks and run seminars on continuous deployment. Yet when a few master testers went and actually tried to use the software, they found plenty of room for improvement.

So what about that "Matt Heusser" guy? Wouldn't you like to be able to use the software he is responsible for testing?

Well, folks, I don't push Socialtext much. It's a web-based product you can use to improve communications in your business, with everything from project plans to business process to tracking status across timezones. I believe in the product - I took a career risk to come here - and if you really want to know about it, you'll ask.

Then, today, came the big news: Socialtext is giving away a fifty-seat license of our product. That's right, you can get a business wiki (editable web pages), a blogging platform, a people-tracker package, and Twitter-style secure micro-blogging for your business. You also get access to our web-based distributed spreadsheet, currently in beta. And we support Firefox 3.0x, Safari, and Internet Explorer 6 and 7 for everything but SocialCalc, which is FF 3.0 only.

Of course, we have a premium model with more support, more users, integration into a directory, hosted behind your firewall, and so on.

But if you want to see what Matt has been up to, you can check it out for free, right now, hosted on our servers over the web, so you'll have nothing to install:

Press Release

Media Coverage

Click here and give your email to get started.

The intention here is to give the software to businesses and small business units, so you'll want to use your work email and invite other people from your work. The license does not provide support, but if you have questions of the "ok, what is Socialtext and how can I use it" nature, I'm happy to answer and can talk you through it.

Outside of the day job, I do think Socialtext might be a good fit for a secure, invite-only network for expert testers. More to come ...

Sunday, June 21, 2009

Um ... what? - II

(Bear with me, it's worth it)

Recently, on the Agile-Testing List, I wrote:

I'm afraid we've gone so far afield that I can't remember the entire initial question. I believe it was about alternatives to 100% acceptance test automation?

As I said before, I wrote an answer but it sounded lecture-y. My experience was that there are lots and lots of different things that various organizations did to limit the risk of regression error prior to agile, especially over time as the codebase got big and old.

It seems to me that this "codebase getting old, regression testing getting expensive" is a common problem, and that the second law of thermodynamics comes into play. Systems tend to fall apart; the center does not hold. There are a variety of things one can do to limit risk. Pre-Agile some of your choices were:

- Very large releases, with a long, drawn-out "test/fix/retest cycle" toward the end. (Waterfall). ("How's that working for you?" is implied)
- Surgical Code Changes designed to limit possible ripple effect
- Taking on a larger amount of risk by having a smaller amount of test coverage, however you chose to measure it
- Getting really good at evaluating what the risks were, such that you could cover a more meaningful portion of the code in less time
- Rapid Software Testing and similar techniques designed to deal with change as the codebase grew
- Some automation, especially at file-system level
- Beta Programs designed to limit risk
- Eating our own dog food
- Model-Based Testing (see the work of Harry Robinson)

Today, the list of choices is longer and more palatable, including pair programming, TDD, Continuous Integration, ATDD, browser-driving tests of various flavors, "rolling back" production on failover on a SAAS webserver, slideshow-style tests, etc.

One thing we do know is that pre-agile, IBM, Borland, and Microsoft developed and evolved working software reasonably often. Historically, when you look at what those teams actually did - in terms of people over process, collaboration over documentation, etc. - it looked a lot like an 'agile' process without the modern techniques. For the most part, those techniques were not yet available to use.

Is that what you're looking for, George?


My colleague, George Dinwiddie, a person I like and respect - replied:

Wow! You've got experience with teams that did /all/ of those things? Which of those approaches gave you the most confidence that old functionality had not been damaged by the new additions or bug fixes? Which of those approaches scaled the best for you as the applications got older?

To which I gave a final answer:

Of course I've used all those techniques at one time or another. Suffice to say it depends on your team, your risk analysis, and the constraints on the project. The answer starts to look more like a book than a post, and I've totally monopolized the list lately. (I am so sorry for that triple post!)

I'll think on it and maybe do some blog posts.


Now, take a minute and read my final reply again. Consider it, and look at it with a critical eye.

If you were an outsider to the profession, could that final answer look a bit like hand-waving? Or the previous answer where I gave the big list o' risk mitigation techniques: Couldn't that look like a list of buzzwords?

For that matter, did I refuse to answer the question at the end? Shouldn't he just press on and ask it again? And if he did press on, would I insist that he 'didn't get it'? Or maybe imply he needed to read a large collection of books before he was qualified to ask about it?

Aren't Matt Heusser's Comments above a great example of the problems he listed on Friday?

Wait, Wait ... stop. Rewind. Let's start over.

I do hold that my post above was reasonable. I don't think it crossed any lines. But to someone outside the profession, it could be misconstrued. So how do you tell the difference?

This is an important question. Let's examine the problems one by one, and discuss them, using the conversation outlined above as an example.

1) Appeal to Goodness, thinly disguised

I understand that Google has a saying "Don't be evil", that is a kind of shorthand. For example, if Google was considering analyzing emails for key words, then selling email addresses to spam providers by keyword, an employee might legitimately say "... but that would be evil."

That's not a label used to destroy the idea; it is a value judgement. It isn't disguised at all. And it's perfectly fine.

How can we tell these apart? Ask for the logical consequences that flow from the idea. In the example above, the speaker might say "It's not respecting the implied right to privacy."

Compare that to "... a mature organization would not behave that way."

See a difference?

2) Retreating to big words or hand-waving

I can picture myself saying "Well, we are a SAAS vendor, so we don't have a deployment problem." Now, that actually means something. I imagine many readers, familiar with my shorthand, know exactly what I mean. And some don't. How can you tell if that is shorthand or hand-waving?

Ask for examples. Just one. In that case, I would reply that SAAS is "Software As A Service." Our customers don't have to install boxes or copy CDs - they can simply rent logins to our webservers. Thus, to deploy to production, a SAAS company doesn't need to send a thousand CDs to a thousand customers; it can simply update the production servers with a rollout script.
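And if "rollout script" itself sounds like hand-waving, here's the one example I'd offer - a bare-bones sketch in which the hostnames, paths, and commands are all invented for illustration (this is not Socialtext's actual deploy tooling):

    # Bare-bones sketch of a SAAS rollout: update each production
    # webserver in place. Every name here is invented.
    import subprocess

    SERVERS = ['web1.example.com', 'web2.example.com', 'web3.example.com']

    for host in SERVERS:
        # push the new release to the server and restart the application
        subprocess.check_call(['ssh', host,
            'cd /srv/app && ./update-release.sh && sudo /etc/init.d/app restart'])

One script, every server, no CDs.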

If the speaker doesn't give you an answer, or changes the subject, well, what could that mean?

In some cases, the speaker may simply not want to invest the time in the explanation. Ok. So ask for a link to do your own research, or, better yet - find another expert in the same field who is more helpful. Show him the transcript. Do not bias him with your assessment (that the wording is probably a bunch of nothing.) See what he comes up with.


3) Insistence that you "just don't get it"

Once again, it's possible the speaker is tired and does not want to invest the time in answering. It's also possible that the speaker realizes you have little shared agreement and would have to go back to first principles. In the example above, I was asked for best practices for risk management.

I don't believe in best practices. I belong to a community that essentially censures the term. So, after Janet Gregory wrote in to assure me I was not monopolizing, I created this reply:

Thanks Janet.

George - To speak to your question, I believe that methodology design is about trade offs. ( http://www.informit.com/articles/printerfriendly.aspx?p=434641 )

As such, I can not recommend best practices outside of a given problem domain ( http://www.context-driven-testing.com/ )

However, if you would like to hear stories of the companies I've worked for, and what constraints we had and what trade offs we chose to make - or if you want to give me a hypothetical company and work through an exercise to figure how we'd approach the problem of testing - I would be open to both of those.


I hope you can see the difference between that and "you just don't get it."

4) Insistence that the reason you don't get it is because it is "hard"

This is similar to #3 or #2 - you simply need to find another expert in the field and ask them for an analysis.

For example, I actually do know a fair amount about CMMI and software process. And, in some conversations, my BS indicator has gone off. And I've gone and asked two to five CMMI experts (SCAMPI lead appraisers) what a specific quote means.

And I get answers that are all over the map.

This tells me that the quote isn't actually saying anything.

5) Abuse of the Socratic Method

The Socratic method can be a very helpful and valuable method of experiential learning. But when the person positioning himself as the "teacher" actually has less understanding than the "learner", it can break down pretty quickly. That is what I was trying to get at in my example.

So how can you tell? Well, when your answers are reasoned, sincere, and correct ... and the follow-up answers begin to indicate that the other person didn't hear, wasn't listening, didn't understand, or considers them irrelevant.

It's a prickly feeling on the back of your neck. "This ain't right." On the other hand, if the person has more experience than you, the Socratic Method will feel much more free-flowing - for example, the teacher may ask "how's that working for you?" and you grin sheepishly: "not so great."

If things aren't happening like that - if the speaker cannot predict your problems, and in fact does not understand them - should he really be leading you through the Socratic method to find the solution?


6) Aggressive questioning

What if the other person genuinely wants to learn? Or what if they are asking a question that is a genuine objection to your statement? How can you tell the difference?

As I wrote in the example, aggressive questioning has a motive. It is a form of posturing. Now, I am very hesitant to assign intent - I prefer to work on behavior. So my short answer is: it doesn't matter.

If you are being challenged, consistently, in a way that makes your blood boil, you are likely to become defensive. A defensive posture looks bad ("Here's why I should be in this meeting!") and a defensive person is likely to make a verbal mistake.

So I recommend turning the questions into a statement from the other person you can respond to. "I'm hearing concern about risk on the project. Do I have that right?"


7) Appeal to irrelevance

By the time you are pulling an external authority out ("You know, Tom DeMarco says private offices are the way to high productivity") and the other person is ignoring or insulting those external people - you've got a problem. There's probably a trust issue involved.

It is possible that you are pulling out external authorities to prop up your argument ... because your argument needs propping up. "They don't respect me, but maybe they'll respect James Bach" goes the subtle mind-trick. How do you fix that? Ask yourself "What can I do to get the respect of my peers?" (I could do a blog post on that, if you like)

8) Changing the subject

Again, this is mostly a political maneuver, designed to replace an unpalatable talking point with a more familiar one. When could changing the subject be good? When the question itself contains an assertion, such as:

"We know you lied about the bug count at the meeting last week. Why did you do that?"

In that case, the speaker can change the subject by challenging the premise. Likewise, if the question is truly irrelevant (my neighbor asking about my sex life, my daughter asking about her Christmas gifts early), I may duck the question. It is hard for me to imagine cases like that in a software development environment.

But it can happen; here's one: An executive wants to fire the person responsible for a defect, and the team's manager knows it is Joe, but says "we all share the blame" or "as the manager, I supervise the team and am responsible for the outcome. Blame me." (Or the manager refuses to "rank order" the staff because he believes the entire team worked extremely hard on the latest release.)

Ducking the question? I suppose. Also possibly Heroic.

Conclusions

Sometimes, it can be very helpful to take an aggressive stance and say "something ain't right here." Sometimes, an expert in the field may say "look, you've got to have a basic understanding of calculus to talk about Newtonian physics. Go read a calculus book."

Sometimes, you may be so clueless - but have potential - that an elder may need to shake you a little hard to wake you up. That can even be a good thing; it's when they ignore you as irrelevant that you're really in trouble.

My previous post "Um, Er ... what" was not intended to be an excuse to whine and disengage when we are challenged. Instead, it was designed as a tool to help recognize when we've reached a point where dialogue is failing - especially when you have that moment, realize that you might know more about the subject than the other party, and they resort to a "trick of rhetoric." I hope, between both of these posts, to have provided a balanced view on the subject.

But what do you think? And what did I miss?