
Wednesday, November 19, 2008

More insight from Joel Spolsky

Then suddenly I noticed (shock!) that not only was the author a journalist, not a scientist, but he was actually an editor at Time Magazine, which has an editorial method in which editors write stories based on notes submitted by reporters (the reporters don't write their own stories), so it's practically designed to get everything wrong, to insure that, no matter how ignorant the reporters are on an issue, they'll find someone who knows even less to write the actual story.
- From JoelonSoftware.com

In the article, Joel is knocking a style of journalism where you start with an interesting anecdote and use it to prove a point about a subject in which you have no real expertise.

But let's flip it around, and say a journalist was evaluating traditional development methods:

... which has a programming method in which programmers code stories based on notes written by designers that are based on requirements documents created by analysts that are assessments of what the customer actually wants. It's practically designed to get everything wrong, to insure that, no matter how ignorant the analysts and architects are on an issue, they'll find someone who knows even less to write the actual code ...

Yes, many shops do better than this. Yes, agile has helped. But before we throw stones, many development houses might be better off tending to their own knitting ...

2 comments:

Shrini Kulkarni said...

Bravo ...

Matt, I really liked the way you flipped Joel's original story about journalism to software ...

I will quote this in many places from now on.
It is very true that software development is practically designed to go wrong.

There are human learning/skill elements and other cognitive constraints that make the process of information exchange from one person to another (analyst to designer to developer to tester) error-prone.

If we understand this basic constraint in our software development model, we will be better equipped to face the challenges of software ... I think.

Anonymous said...

In traditional development methods, the following things happen (in roughly the sequence given):

1. The business requirements are captured by the analysts. Assumption: Analysts can draw out hidden requirements and document the requirements completely in an unambiguous, non-conflicting and specific way.

2. The business requirements are converted into (hopefully feasible) technical requirements by the designer. Assumption: The designers can document technical solutions to all (feasible) business requirements.

3. The testers test the application with respect to all technical requirements and all business requirements. Assumption: All requirements are covered in the test.

If any of the above assumptions is falsified, the team gets it wrong. As implied by you, this "division of labor" has scope for errors. I think the following things might mitigate the risk of getting it wrong:

1. If feasible, let an individual play multiple roles in the SDLC, e.g. one person performs the roles of analyst, designer, and developer, or the analyst captures the requirements and also performs the testing.

2. Thoroughly review the output of one phase before accepting it in the next phase, e.g. the analyst thoroughly explains the business requirements to the designer when handing over the requirements document.