
Tuesday, October 31, 2006

Software Strategy - I

A few years back, I subscribed to a number of free industry trade mags - Software Test & Performance among them. Of the lot, ST&P is probably the only one I'm going to renew.

Granted, it's a trade magazine, which means it is paid for by adverts, which means reader beware, but there is some good information in it, and each issue seems to have a little bit more for me to dig into. This month (Free PDF here) Scott Barber has a column on a company with a very different software strategy.

Where most companies poll and wine and dine the 'decision makers' with the purse-strings, this company actually interviewed and tried to understand the users of its software, to build the best product - instead of the product that was easiest to sell. They paid the travel expenses for the user group, had an advisory board that actually mattered, and, according to Scott, built the best product they could.

While I was reading the article, I kept noticing the pains the company took to "get it right." I kept asking myself, "Who has the resources - time, money, staff - to do this kind of in-depth interviewing about a software product?"

I thought that it had to be Oracle, HP/Mercury, Microsoft, Audacity, or another similar company.

And, of course, it was Microsoft.

Why does this matter? Because this is a cultural shift in software strategy. The dominant strategy of the 1980's and 90's was one that I like to call the "checklist" strategy.

Under the checklist strategy, company X compiles a list of the features its product offers versus the competition. Then the product manager meets with the manager of software development and insists that company X "has" to have all of the features the competition has but company X does not.

Then the coders write code like heck for six months to a year. At that point, hopefully, the marketing or product manager can make the big grid.

You've seen the big grid - it's a collection of check-boxes. Company X has all of them. The competition has ... some of them. The sales manager creates a glossy brochure that prominently features the big grid and goes out and sells thousands of units.

The big grid is designed to make the product easy to sell - see how well it stacks up against the competition? The problem is that it does not indicate whether the features were implemented well, whether they are useful, or whether they are even helpful to the people who will buy the tool.

In fact, the "squeeze in as many features as you can" mentality almost guarantees that the features will be junk.

My perfect example is the cell phone: Remember in 1999, when every cell-phone was advertised as running Java? Could your sales person even tell you what Java was good for? Probably not, but he knew that you had to have it, even if your interface was 10x10 characters of ASCII text.

Today the big checklist is still in full swing with cell phones - witness text messaging, camera-phones, video-phones, and internet-phones. A very large percentage of the population just wants to use cell phones as, well, phones.

My argument is that the shift needs to go further - someone could make a big pile of cash by selling a cell phone that was actually easy to use as a phone.

This total shift? Another West Coast company is doing it, with another product, called the iPod. More about that tomorrow.
