
Sunday, June 21, 2009

Um ... what? - II

(Bear with me, it's worth it)

Recently, on the Agile-Testing List, I wrote:

I'm afraid we've gone so far afield that I can't remember the entire initial question. I believe it was about alternatives to 100% acceptance test automation?

As I said before, I wrote an answer, but it sounded lecture-y. My experience is that there were lots and lots of different things that various organizations did to limit the risk of regression error prior to agile, especially over time as the codebase got big and old.

It seems to me that this "codebase getting old, regression testing getting expensive" problem is a common one, and that the second law of thermodynamics comes into play. Systems tend to fall apart; the center does not hold. There are a variety of things one can do to limit risk. Pre-Agile, some of your choices were:

- Very large releases, with a long, drawn-out "test/fix/retest cycle" toward the end. (Waterfall). ("How's that working for you?" is implied)
- Surgical Code Changes designed to limit possible ripple effect
- Taking on a larger amount of risk by having a smaller amount of test coverage, however you chose to measure it
- Getting really good at evaluating what the risks were, such that you could cover a more meaningful portion of the code in less time
- Rapid Software Testing and similar techniques designed to deal with change as the codebase grew
- Some automation, especially at the file-system level (see the sketch just after this list)
- Beta Programs designed to limit risk
- Eating our own dog food
- Model-Based Testing (see the work of Harry Robinson)
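
[To make "automation at the file-system level" concrete: the style of check I have in mind is often called a golden-file test - run the program, capture its output to disk, and diff it against a known-good copy saved from an earlier release. A minimal sketch in Python; the program name, arguments, and paths are invented for illustration:]

```python
# Minimal golden-file regression check. The program under test, its
# arguments, and the file paths are all hypothetical.
import subprocess
import sys
from pathlib import Path

GOLDEN = Path("golden/report.txt")  # known-good output, kept under version control
ACTUAL = Path("out/report.txt")     # output produced by this run

def run_and_compare() -> int:
    ACTUAL.parent.mkdir(parents=True, exist_ok=True)
    # Run the program under test and capture what it writes to stdout.
    result = subprocess.run(
        ["./generate_report", "--input", "fixtures/orders.csv"],
        capture_output=True, text=True, check=True,
    )
    ACTUAL.write_text(result.stdout)
    if ACTUAL.read_text() == GOLDEN.read_text():
        print("PASS: output matches the golden file")
        return 0
    print("FAIL: output differs from the golden file")
    return 1

if __name__ == "__main__":
    sys.exit(run_and_compare())
```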

Today, the list of choices is longer and more palatable, including pair programming, TDD, Continuous Integration, ATDD, browser-driving tests of various flavors, "rolling back" production on failover on a SAAS webserver, slideshow-style tests, etc.
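
[Likewise, "browser-driving tests" may deserve one concrete example. Here is a sketch using the Selenium WebDriver bindings for Python; the URL, form fields, and page title are invented, so treat it as a shape rather than a recipe:]

```python
# Sketch of a browser-driving check: log in through the real UI and
# verify we land on the right page. URL and element names are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://example.com/login")
    driver.find_element(By.NAME, "username").send_keys("testuser")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # Crude oracle: did the login flow reach the dashboard?
    assert "Dashboard" in driver.title, "login did not reach the dashboard"
    print("PASS: login flow works end to end")
finally:
    driver.quit()
```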

One thing we do know is that pre-agile, IBM, Borland, and Microsoft developed and evolved working software reasonably often. Historically, when you look at what those teams actually did - in terms of people over process, collaboration over documentation, and so on - it looked a lot like an 'agile' process without the modern techniques. For the most part, those techniques were not yet available to use.

Is that what you're looking for, George?


My colleague George Dinwiddie - a person I like and respect - replied:

Wow! You've got experience with teams that did /all/ of those things? Which of those approaches gave you the most confidence that old functionality had not been damaged by the new additions or bug fixes? Which of those approaches scaled the best for you as the applications got older?

To which I gave a final answer:

Of course I've used all those techniques at one time or another. Suffice to say it depends on your team, your risk analysis, and the constraints on the project. The answer starts to look more like a book than a post, and I've totally monopolized the list lately. (I am so sorry for that triple post!)

I'll think on it and maybe do some blog posts.


Now, take a minute and read my final reply again. Consider it, and look at it with a critical eye.

If you were an outsider to the profession, could that final answer look a bit like hand-waving? Or the previous answer, where I gave the big list o' risk mitigation techniques - couldn't that look like a list of buzzwords?

For that matter, did I refuse to answer the question at the end? Shouldn't he just press on and ask it again? And if he did press on, would I insist that he 'didn't get it'? Or maybe imply he needed to read a large collection of books before he was qualified to ask about it?

Aren't Matt Heusser's comments above a great example of the problems he listed on Friday?

Wait, Wait ... stop. Rewind. Let's start over.

I do hold that my post above was reasonable. I don't think it crossed any lines. But to someone outside the profession, it could be misconstrued. So how do you tell the difference?

This is an important question. Let's examine the problems one by one and discuss them, using the conversation outlined above as an example.

1) Appeal to Goodness, thinly disguised

I understand that Google has a saying, "Don't be evil," which is a kind of shorthand. For example, if Google were considering analyzing emails for key words, then selling email addresses to spam providers by keyword, an employee might legitimately say "... but that would be evil."

That's not a label used to destroy the idea; it is a value judgement. It isn't disguised at all. And it's perfectly fine.

How can we tell these apart? Ask for the logical consequences that flow from the idea. In the example above, the speaker might say "It's not respecting the implied right to privacy."

Compare that to "... a mature organization would not behave that way."

See a difference?

2) Retreating to big words or hand-waving

I can picture myself saying "Well, we are a SAAS vendor, so we don't have a deployment problem." Now, that actually means something. I imagine many readers, familiar with my shorthand, know exactly what I mean. And some don't. How can you tell if that is shorthand or hand-waving?

Ask for examples. Just one. In that case, I would reply that SAAS is "Software As A Service." Our customers don't have to install boxes or copy CDs - they can simply rent logins to our webservers. Thus, to deploy to production, a SAAS company doesn't need to send a thousand CDs to a thousand customers; it can simply update the production servers with a rollout script.
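
To make "rollout script" concrete in turn, here is a rough sketch of what I mean - push a build to each production web server and restart the application. The host names, paths, and restart command are hypothetical, not our actual script:

```python
# Rough sketch of a SAAS rollout: copy a build to each production web
# server and restart the app. Hosts, paths, and commands are hypothetical.
import subprocess

SERVERS = ["web1.example.com", "web2.example.com"]
BUILD = "builds/release-1.2.3.tar.gz"

def deploy(host: str) -> None:
    # Copy the build over, unpack it, and restart the application server.
    subprocess.run(["scp", BUILD, f"{host}:/tmp/release.tar.gz"], check=True)
    subprocess.run(
        ["ssh", host,
         "tar -xzf /tmp/release.tar.gz -C /var/www/app "
         "&& sudo /etc/init.d/app restart"],
        check=True,
    )

for host in SERVERS:
    deploy(host)
    print(f"deployed {BUILD} to {host}")
```

Keep the previous release on disk and "rolling back" production becomes the same move in reverse: re-point the server at the old build and restart.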

If the speaker doesn't give you an answer, or changes the subject, well, what could that mean?

In some cases, the speaker may simply not want to invest the time in the explanation. Ok. So ask for a link to do your own research, or, better yet, find another expert in the same field who is more helpful. Show him the transcript. Do not bias him with your assessment (that the wording is probably a bunch of nothing). See what he comes up with.


3) Insistence that you "just don't get it"

Once again, it's possible the speaker is tired and does not want to invest the time in answering. It's also possible that the speaker realizes you have little shared agreement and would have to go back to first principles. In the example above, I was asked for best practices for risk management.

I don't believe in best practices. I belong to a community that essentially censures the term. So, after Janet Gregory wrote in to assure me I was not monopolizing, I wrote this reply:

Thanks Janet.

George - To speak to your question, I believe that methodology design is about trade-offs. ( http://www.informit.com/articles/printerfriendly.aspx?p=434641 )

As such, I cannot recommend best practices outside of a given problem domain ( http://www.context-driven-testing.com/ )

However, if you would like to hear stories of the companies I've worked for, and what constraints we had and what trade-offs we chose to make - or if you want to give me a hypothetical company and work through an exercise to figure out how we'd approach the problem of testing - I would be open to both of those.


I hope you can see the difference between that and "you just don't get it."

4) Insistence that the reason you don't get it is because it is "hard"

This is similar to #3 or #2 - you simply need to find another expert in the field and ask them for an analysis.

For example, I actually do know a fair amount about CMMI and software process. And, in some conversations, my BS indicator has gone off. And I've gone and asked two to five CMMI experts (SCAMPI lead appraisers) what a specific quote means.

And I get answers that are all over the map.

This tells me that the quote isn't actually saying anything.

5) Abuse of the Socratic Method

The Socratic method can be a very helpful and valuable form of experiential teaching. But when the person positioning himself as the "teacher" actually has less understanding than the "learner," it can break down pretty quickly. That is what I was trying to get at in my example.

So how can you tell? Well, it's when your answers are reasoned, sincere, and correct ... and the follow-up questions begin to indicate that the other person didn't hear, wasn't listening, didn't understand, or considers them irrelevant.

It's a prickly feeling on the back of your neck: "This ain't right." On the other hand, if the person has more experience than you, the Socratic method will feel much more free-flowing - for example, the teacher may ask "how's that working for you?" and you grin sheepishly: "not so great."

If things aren't happening like that - if the speaker cannot predict your problems, and in fact does not understand them - should he really be leading you through the Socratic method to find the solution?


6) Aggressive questioning

What if the other person genuinely wants to learn? Or what if they are asking a question that is a genuine objection to your statement? How can you tell the difference?

As I wrote in the example, Aggressive Questioning has a motive. It is a form of posturing. Now, I am very hesitant to assign intent - I prefer to focus on behavior. So my short answer is that it doesn't matter.

If you are being challenged, consistently, in a way that makes your blood boil, you are likely to become defensive. A defensive posture looks bad ("Here's why I should be in this meeting!") and a defensive person is likely to make a verbal mistake.

So I recommend turning the questions into a statement from the other person you can respond to. "I'm hearing concern about risk on the project. Do I have that right?"


7) Appeal to irrelevance

By the time you are pulling an external authority out ("You know, Tom DeMarco says private offices are the way to high productivity") and the other person is ignoring or insulting those external people - you've got a problem. There's probably a trust issue involved.

It is possible that you are pulling out external authorities to prop up your argument ... because your argument needs propping up. "They don't respect me, but maybe they'll respect James Bach" goes the subtle mind-trick. How do you fix that? Ask yourself "What can I do to get the respect of my peers?" (I could do a blog post on that, if you like)

8) Changing the subject

Again, this is mostly a political maneuver, designed to replace an unpalatable talking point with a more familiar one. When could changing the subject be good? When the question itself contains an assertion, such as:

"We know you lied about the bug count at the meeting last week. Why did you do that?"

In that case, the speaker can change the subject by challenging the premise. Likewise, if the question is truly irrelevant (my neighbor asking about my sex life, my daughter asking about her Christmas gifts early), I may duck the question. It is hard for me to imagine cases like that in a software development environment.

But it can happen; here's one: An executive wants to fire the person responsible for a defect. The team's manager knows it is Joe, but says "we all share the blame" or "as the manager, I supervise the team and am responsible for the outcome. Blame me." (Or the manager refuses to "rank order" the staff because he believes the entire team worked extremely hard on the latest release.)

Ducking the question? I suppose. Also possibly Heroic.

Conclusions

Sometimes, it can be very helpful to take an aggressive stance and say "something ain't right here." Sometimes, an expert in the field may say "look, you've got to have a basic understanding of calculus to talk about Newtonian physics. Go read a calculus book."

Sometimes, you may be so clueless - but have potential - that an elder may need to shake you a little to wake you up. That can even be a good thing; it's when they ignore you as irrelevant that you're really in trouble.

My previous post, "Um, Er ... what," was not intended to be an excuse to whine and disengage when we are challenged. Instead, it was designed as a tool to help recognize when we've reached a point where dialogue is failing - especially when you have that moment, realize that you might know more about the subject than the other party, and watch them resort to a "trick of rhetoric." I hope, between these two posts, to have provided a balanced view on the subject.

But what do you think? And what did I miss?

1 comment:

James Marcus Bach said...

I'm impressed, Matt. I had a lot of trouble with your first installment of this, as you know from our private conversation. You have directly, and insightfully, dealt with my concerns.

Thank you.

-- james