Schedule and Events



March 26-29, 2012 - Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference of the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At liberty; available. Contact me by email: Matt.Heusser@gmail.com

Monday, March 30, 2009

World Agile Qualification Board - II

When I formed my first impression of this group, I was very careful to hold back public judgment. It looked really bad, but I wanted to find out more before making public statements.

Two more data points about WAQB:

1) From Lisa Crispin

2) From Michael Bolton

I've upgraded my evaluation of the WAQB. For the first time in my career, I've reached a point where I feel comfortable using a specific phrase publicly.

With regard to the World Agile Qualification Board: Please don't support these fools and scoundrels.

Saturday, March 28, 2009

The World Agile Qualification Board

It turns out there's a group called the World Agile Qualification Board that is going to, ta-da, certify agile developers and testers and such. You can Google them; I won't link to them, because I don't believe they need any more Google juice.

I thought about writing a long reply, but it turns out there are two:

Elisabeth Hendrickson did a thoughtful reply earlier in the week, and, more importantly:

Tom DeMarco summed up my thoughts on certification over a decade ago in the Cutter IT Journal. You can read it here.

I'm taking a hard line here, but I'm still interested in your thoughts.

UPDATE: I've been thinking about the craft of writing and what makes one qualified to write a good book, in the context of certification. In other words: What can you learn in a three day writers workshop? What can you learn over the course of three years writing a blog? What about ten years? I can blog on it if you'd like; let me know.

Thursday, March 26, 2009

Big Testing Up Front

Hey, if I don't have time to create content, can I re-use it?

My latest post to the Agile-Testing Discussion Group:

--- In agile-testing@yahoogroups.com, Steven Gordon wrote:
Again, that is not to say the exact same activities you recommend should not be done, just that they should be viewed as primarily proactive (helping determine future completion criteria) instead of primarily reactive (retroactively changing the current completion criteria).

In some of the circles I run in, we refer to the idea that you should do 100% Acceptance Test Driven Development - that is, predict all tests up front, and if you didn't predict it, it's not a defect, it's a "discrepancy between where we actually are and where we thought we would be" - as Big Testing Up Front (BTUF).

Personally, I find that trying to get all the tests right up front is a little like trying to predict all your questions up front in the game "20 Questions"; the process of exploring the software often helps uncover issues we would not have found otherwise (1, 2).

Now, as for changing completion criteria, I agree - but the majority of defects I find in exploratory testing are of the type where the developer is told of the issue and says "Yeah, you're right, it shouldn't do that." That is to say, the team's shared mental model of how the software should behave was in alignment. I call that a defect, or, occasionally, a "Bug."

I understand the Extreme Programming literature had a negative reaction to big design up front. Something about how not all elements of the design could be predicted, and the design itself evolved along with the product, or something.

Can you see how BTUF looks from the testing side of the fence? (I am speaking of acceptance testing; I've seen BTUF work well, often, for "pure" developer-testing.)

regards,

--heusser
(1) This is not my original idea; see "A Practitioner's Guide to Software Test Design", by Lee Copeland, pg. 203.

(2) My current team has shipped working software in something like 30 of the past 33 two-week iterations. For blocker bugs, we do not enjoy the luxury of saying "the story is done; if you want that fixed, make it a story for the next iteration."

UPDATE:
I am not suggesting that acceptance tests are bad. I think they are a great communication tool, and that they can streamline the dev/test process. I'm only suggesting that we set our expectations properly for acceptance tests. I focus on acceptance tests that are valuable over comprehensive. Even James Shore, "Mr. Agile Agile Agile", seems to have come around to this idea - see item #5 in his post on misuses of FIT.

Tweet, Tweet, Tweet

The mini-semester starts next week and I am teaching IS 171 at Calvin College; soccer coaching starts next week; oh, and the whole family is sick; and then there's that day job thing ... as such, I don't expect to be blogging much.

However, in addition to Creative Chaos, I also blog on Twitter. Twitter is a "micro-blogging" service; your posts have to be less than 140 characters - a couple dozen words - long. That leaves just about enough room for an interesting URL and a comment. My username on Twitter is mheusser, and I probably post two or three times a day.

If you don't want to sign up for Twitter, you don't have to. Come right back to the Creative Chaos webpage, here at xndev.blogspot.com, and look at the list on the right - there's a Twitter feed! So you can read my Twitter-y notes right here, just like usual.

More to come on Twitter. Creative Chaos might take a while, but I've said that before ... sometimes it's just plumb hard to stay away.

Tuesday, March 24, 2009

The psychology of (test) history

In yesterday's post, I introduced the idea of history of mathematics, and how it might be applied to testing. I followed that up with my own list of important publications that had an influence on my thinking.

But let's re-examine that for a moment. I did not create a history of the ideas in software testing - it was more a list of publications and books.

And lots of publications and books don't agree; for example, right now, today, on the Agile-Testing Yahoo Group, there is an argument about the meaning of the word "test" - primarily led by a PhD.

I'll say it again - we don't have consensus on the meaning of the word "test." Yet any history is going to have to pick a definition and use it. In doing so, it will create winners (those who agree with the author) and losers (those who do not). The author will have to tacitly insult some people - at least by ignoring them.

And it gets worse. The first book I read on software testing, I would call a "bad book." Oh, it gave me lots of terms like stress testing, functional testing, and load testing, but in terms of giving me ideas to change my behavior - well, it failed miserably. Yet it was a relatively early and popular book on testing in a windowed environment - should it be on the list?

What about Avionics, Embedded Systems, MILSPEC, Medical Systems, Mission and Life Critical Systems? They've developed an entire testing body of knowledge outside of my main expertise. Are they part of the history of testing?

What about the inspection and walkthrough literature? What about the "quality as prevention" literature? Is that testing?

How do I separate development ideas (Waterfall, Agile, The Mythical Man-Month) from testing? Can I? And if I can, isn't there significant information to be gained on how testing adapted to work with new development paradigms? For example, on the dev side, the solution to testing Enterprise JavaBeans turned out to be essentially ignoring the bean and creating something called a POJO - a Plain Old Java Object - then having the bean serve as a wrapper around it. Most of those evolutionary stories aren't written down - at least, not in book form. To find out, I'd have to interview, then sift stories.

And to go back to what I said earlier - a list of publications and books isn't really a history, it's a collection of artifacts that were popular at a given time. Figuring out what is really going on would mean going directly back to the community. Crispin and Gregory did it in their Agile Testing book by going to people working in the field today; a real history of testing would mean going back to the people in the field thirty, forty, or fifty years ago. (Yes, Jerry Weinberg led an independent test team in 1958. How many Jerry Weinbergs will I find?)

Then there are developer-facing test techniques. Behavior Driven Development, for example, is an innovation and an idea -- but I don't think it has much to do with what I mean when I say testing. In a history text it should probably merit a mention or a footnote - but what do you do when the entire "text" needs to fit on a cheat sheet?

Then you've got the ideas before testing: the Western tradition of philosophy, the Chinese, Francis Bacon and the Enlightenment, the history of electrical engineering, Karl Popper, the history of hardware testing, Zen and the Art of Motorcycle Maintenance. These quickly go beyond my expertise, and yet including a reference or two with some gaping holes could easily be worse than nothing.

Remember that math textbook I started with? I don't think it had anything in it after about 1800 - so it had the benefit of a few hundred years of evaluation by professors and academics. It could also benefit from their insight, taken over decades of studying the collaboration of, say, Newton and Leibniz - to see who should be credited with integration, and who invented the symbol.

I suspect the reason the author stopped at the 1800's was that, if he had to pick winners and losers, he would do it with people who had long since passed and been judged, to some extent, by history. To do that, the author did a great deal of scholarly research, built on top of hundreds of years of other research. He evaluated, he studied, and he thought. And, eventually, he invested thousands of hours of time and wrote a book.

Yesterday, I wrote a blog post. I hope you will agree that, to do it "right", it should be a more formal scholarly work - which it was not. I hope you'll also agree that there is more than one definition of "right" - and they conflict. We do not have consensus in our field, and a naive list of history could easily create the wrong impression.

Let's look at this in perspective: This task is huge, and daunting. To imply that it can be done by slapping a blog together is to trivialize the complexity of the task, something we are already willing to do far too often in software testing.

So let's not call it a list of ideas in test history. It is not. It is, instead, my personal view of important works (to me) - more like an F.A.Q. list, with some sense of evolution.

I'm happy with the content, but I believe it needs to be framed more accurately. Which means it needs a more accurate title. How about:

A) On the shoulders of Giants
B) How I got where I am
C) My influences in software testing
D) A fistful of ideas about software testing
E) Dr. StrangeCode: or, How I Learned to Stop Worrying and Love Testing

What do you think? Or do you have a better idea?

More to come.

Sunday, March 22, 2009

History of Ideas in Software Testing

Yesterday I started reading Agile Testing: A Practical Guide for Testers and Agile Teams by Crispin and Gregory. Oh, it's a good book. I think the authors deserve serious applause and credit. They went out into the field and asked people what really works - and waited for multiple responses before weighing in. They had an extended review process, and they had hundreds of references. I'm impressed, and I recommend it.

But something happened when I started thumbing through those hundreds of references - I noticed that of the 200 or so references, all but one were dated after the Agile Manifesto, which was penned in 2001. The sole exception was a paper about session-based test management, written the year before.

Don't get me wrong; telling the history of software testing is simply not the goal of the Agile Testing book - it is about agile testing, something essentially born in 2001, and it does a good job of covering that.

Yet the Agile Manifesto says that we are "uncovering" new ways of developing software by doing it and helping others do it. To do that, shouldn't we have a balanced sense of history?

Example: Not too many years ago, I was a young student of mathematics and took a 400-level course called "History of Mathematics." In fact, I'm pleased to see it's still on the books.

The course gave us a sense of the history of where math came from - from ol' Pythagoras through NP-Complete.

More importantly, /I/ learned a lot about how mathematicians think; how they model and solve problems - from direct proof, to proof by induction, to reductio ad absurdum. Reductio ad absurdum, for example, is really interesting: to prove something is true, assume it's false, and keep going until you find a contradiction.

But I won't bore you with math proofs; this is a testing blog.

So, if you wanted to read an article or take a course like that for software testing: A history of the ideas in the field --- where would you go?

...

time passes ...

...

Chirp, Chirp.

...

Oh, sorry, that's a cricket.

Oh, perhaps, if you were an academic, you might find a survey of the testing literature on CiteSeer.

If by some miracle you find one written in plain English, drop me an email. In the meantime, I'm not holding my breath. I've been thinking of developing a paper, talk, lightning talk, article, series of blog posts ... something about the history of software testing, to give the newbie some idea of the ground we are covering, so we don't have to have the same discussion of "should the testers be involved up front" again and again and again.

So, let's see what we have totally and completely off the cuff:

1958+ (Buddy Holly; Elvis)
Jerry Weinberg leads the Project Mercury test team (IBM), the first independent test team

1960's (The Beatles; Star Trek)
Computer Programming Fundamentals, by Herbert Leeds and Jerry Weinberg, describes software testing
PL/I Programming: A Manual of Style - Weinberg publishes the triangle problem for the first time


1970's - (Lynyrd Skynyrd, Thick Ties)
Time Sharing Systems. Birth of "It works on my machine."
The Art of Software Testing, Glenford Myers
- Equivalence Classes, Boundaries, Error Guessing, Cause/Effect Graphing (It's history, good and bad) - Functional to Unit Testing
"Managing the Development of Large Software Systems", Dr. Winston Royce
Cyclomatic Complexity, Thomas McCabe

1980's - (Swatch Watch)
Software Testing Techniques, Boris Beizer
Black-Box Testing Boris Beizer
- Every software program can be expressed as a directed graph of blah blah blah blah
Test Cases; V-Model System/Integration/Unit
Code, Branch, Input Coverage Metrics
Really interesting stuff in Silicon Valley; tester-as-expert mythos (The Black Team)
"Rethinking Systems Analysis and Design", Jerry Weinberg - Iterative Development and Testing described

Early 1990's - (Thin Ties)
Record/Playback (WinRunner)
Testing Computer Software (Kaner, et al)
Bug Tracking and Version Control become popular
- Version Control changes the popular meaning of regression testing
ASQ-CSQE (and Deming, and Drucker, and Juran)
"Software Testing: A Craftsman's Approach", Jorgensen; Petri Nets and State Transitions
STAR Conferences Start, 1992

Later 1990's - (Friends)
Los Altos Workshops on Software Testing Begin
"When Should a Test Be Automated?", Brian Marick
Test Driven Development - Extreme Programming - Becomes Popular (Beck et al, xUnit)
Customer-Driven Acceptance Tests (The XP Crew)
Exploratory Testing (Bach)
Rapid Software Testing (Bach/Bolton)
Test Automation Snake Oil, James Bach
- The Minefield Problem
Performance Testing / Web Testing Takes off
Heuristics (Bach)
"How to Break Software", James Whittaker; Quick Tests (also, ESH popularized quick tests about this time)

Early 2000's - (The West Wing)
Session based test management (Bach)
Keyword-Driven (Coined 1999, Graham/Fewster, popularized by Linda Hayes/Worksoft Certify)
Manifesto for Agile Software Development
Continuous Integration. (More Agile-ness)
FIT/FitNesse (Ward Cunningham)
"Lessons Learned In Software Testing" (Kaner/Bach/Pettichord)
Watir (Pettichord et al)
ISTQB
Six Sigma
"Key Test Design Techniques", Lee Copeland. Unifies approaches to testing; popularizes the insurance problem as an alternative to the Triangle problem
Model-Based Testing, Harry Robinson
"Software Engineering Metrics: What Do They Measure and How Do We Know?", Kaner

Later 2000's -
Selenium
Mocks, Stubs and Fakes.
Acceptance-Test Driven Development (Marcano and Hendrickson)
"Agile Testing", Crispin and Gregory
Faught questions the teaching value of the triangle problem
The Balanced Breakfast Strategy

Hopefully, by now, readers know that I am a "Throw stuff up against a wall and see what sticks" kind of person. This list is ugly, probably contains mistakes, and is just a start. It's tentative. It contains a history of the evolution of Agile/Context driven ideas. If your pet paper, book, or idea isn't on this list, is influential, and fits, leave a comment and it may get on the list.

The idea is to get our comment juices going and start filling in the gaps; to help make a list that is good enough and yet comprehensible.

Then I'll turn every reference into a hyperlink, and it'll be a self-study guide. With a little more work, it might turn into an article or presentation.

What do you say; want to help me out?

UPDATE: For the purposes of this post, I'll consider functional, performance, and test management as "testing"; I may do a future, separate and distinct list for security testing or regulated testing (FDA, MILSPEC, etc. are out of scope).

Thursday, March 19, 2009

GUI-Driving Tools

I posted this as a comment on James Bach's Blog and thought it was worth sharing here:

Not too long ago I had to give a demo of a GUI-Driving “test automation tool” at a conference as part of my speech; I wanted to show how brittle the GUI was and what problems you’d encounter. To do this, I needed to build up a non-trivial test suite with it, see how long it took me to write, what the challenges were, etc.

My wife, who has a wonderful education in liberal arts (BA in philosophy from an ivy-level college) - and has done a little testing herself - walked by and saw the icons flashing and the screen running, and said “that’s awesome.”

I suspect that explains the success of some large percentage of GUI-driving tool sales: To people only vaguely experienced with testing, having stuff fly by the screen is, after all, “driving it”, which is the same thing as testing. And it’s cool, right?

I had an HOUR to explain the real challenges with this at ST&P Conf. I had some success.

Sadly, not everybody goes to ST&P Conf, or other world-class conferences like STAREAST - but you can still read my column for free, every month, downloadable as a PDF from http://www.stpmag.com !

Tuesday, March 17, 2009

What's an SDET - III - Microsoft Responds

I was very pleased to hear from the "How We Test Software At Microsoft" authors in the comments section. (Just between us, I suspect it was Alan Page)

As I read it, here are the Microsoft responses:

1) Well, yes, the "second half" - an understanding of common failure modes of applications - is important to testing. But someone who can only do quick tests - without a knowledge of the business domain or deep analytical ability - is going to miss some truly important, deep bugs. So they both matter.

2) Well, yes, we have a few hundred openings for SDETs from time to time. When you consider that we have about nine thousand SDETs on staff, that's an opening rate of a couple of percentage points - overall, that's just healthy growth and turnover.


I can appreciate both of those arguments. As I tried to point out in the initial post, my problem isn't with Microsoft's policies, but with the common misconception it creates in the populace: that Microsoft views testing as a simple, straightforward business process, best accomplished by computers executing test automation code written by developers. As I said in the second post, I find that developer-envy harmful to our craft.

Friday, March 06, 2009

What's an SDET - II

Yesterday I discussed the triangle test - and pointed out that the test focused solely on inputs and expected results; that it was a very dev-ish test. I'd like to explain why.

When you focus on inputs and expected results, I can almost see the code. It's something like this:

my $triangle_type = get_triangle_type($sidea, $sideb, $sidec);

A strong developer can test this function in several ways. He'll test the basic equivalence classes and the boundaries, maybe measure statement coverage - maybe even branch coverage - and perhaps run a bunch of negative tests, like entering words instead of numbers or a "length" of a negative number. And in the days of MS-DOS, where you had a single user sitting at a single computer typing in one value at a time, that might have been just fine.
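
Here's a rough sketch of what those dev-ish tests might look like, using Perl's standard Test::More module. The values and labels are my own illustration, and it assumes a get_triangle_type() that returns strings like "scalene" (one possible version is sketched in the "Nifty Triangle Test Example" post below):

use strict;
use warnings;
use Test::More;

# Equivalence classes: one representative value per triangle type
is( get_triangle_type(3, 4, 5), 'scalene',     '3-4-5 is scalene' );
is( get_triangle_type(2, 2, 3), 'isosceles',   'two equal sides' );
is( get_triangle_type(5, 5, 5), 'equilateral', 'all sides equal' );

# Boundary: the degenerate case, where the "triangle" collapses to a line
is( get_triangle_type(1, 2, 3), 'not a triangle', 'fails the triangle inequality' );

# Negative tests: a word instead of a number, a negative "length"
is( get_triangle_type('cat', 4, 5), 'not a triangle', 'word instead of number' );
is( get_triangle_type(-1, 4, 5),    'not a triangle', 'negative length' );

done_testing();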

In today's environments, that's only half the story. Because we'll take that simple program, wrap it in a web service, then create a web-based application that calls the web service. This "second half" has an entirely different set of potential problems:

- Does it render correctly in all browsers? IE6, IE7, FF2, FF3, Safari?
- Does it look pretty? Is the user interface usable?
- What happens if I resize the browser? Does the new rendering make sense?
- If I tab through the various inputs instead of using the mouse, does the order make sense?
- If I press the "ENTER" or "RETURN" key, does that trigger the submit button?
- What happens if I click "submit" twice in a row - really fast?
- What happens if, after I click submit, I click the back button? Do I go back to the main screen or do I get one of those bizarre "This page was generated dynamically do you want to re-post?" error messages?
- What if I am visually impaired? Can I turn up the font or does the Cascading Style Sheet "lock" down the user experience? If I can crank up the font is it visually appealing?
- What if I am blind? Can I use the application with a tool for the blind, like Lynx? Do all of the images have "alt=" tags?
- Is the web service reasonably fast? What if it's used by 100 users all at the same time? (Note: This was never a problem on MS-DOS, where you only had one user at a time)
- Can I run the application on my 1024x600 netbook? The ads said my netbook was good "for web surfing"
- Can I run the application on my Cell Phone?
- If I come in from a Chinese, Korean, or Italian system but I know English, does the user experience make sense?
- What if I don't know English? Should our software be localized? If yes, what localizations?

You'll notice that none of those examples has anything to do with the core code; the web service could be completely correct and the user experience still buggy and unusable. This "second half" of the testing equation isn't about bits and bytes. It has much more to do with critical thinking than computer science - in fact, it is its own separate and distinct discipline.

This is why I found the triangle example less than ideal; it focuses on one type of testing and completely ignores another. There is simply no way to ask it "what happens when I resize the browser?"

Hiring developers to be testers, you tend to get developer myopia - a focus on the "first half" (code, code, and statement coverage) and less focus on that second half. I don't think I need to name names to say that we've all used applications that might have done exceedingly well on the first half of testing and yet failed miserably to provide a good user experience.

Now, the Microsoft guys claim to be doing it right. They want Software Development Engineers in Test (SDETs) who can do *both* entry-level developing *and* critical investigation of software under test - and there are some people who fit into that category. But that's like saying you want excellent sprinters who are also world-class distance runners - while they do exist, there just ain't that many of those people on the planet. Those that are here can usually find gainful employment, and not all of them want to live in Washington State, China, or India. The result is that, as an employer, you'll either have to (A) pay a relatively large sum for these people, (B) have a bunch of open positions while you look for people with the right mix, or (C) compromise, hiring people who are, for example, good devs you think might make good testers. This runs the serious risk of developer myopia.

Last time I checked (before the tech downturn), Microsoft had a few *hundred* open SDET positions. Given that compromise invites myopia, and an HR department won't allow you to pay a princely sum, leaving positions open is probably the best choice.

I was discussing this with my colleague, James Bach, and he wrote something I would like to repeat:

The words that you quoted [Matt talking about MS's view of testers] represent an attitude that systematically misunderstands testing as purely (or primarily) a technical activity, the object of which is to produce "test cases." I too had that attitude, early in my career. I grew out of it as I came to understand, through my experiences as a test manager, that a test team becomes stronger when it hosts a variety of people with a variety of backgrounds. Through the ordinary action of teamwork, a diverse group of thinkers/learners exploits the knowledge and skills of each for the benefit of all.

My attitude about testing is deeply informed by a study of cognitive psychology, which is the study of how people think, and epistemology, which is the study of how people can know what they know. ... When you approach testing not as hot-dogging with test tools or techniques, but rather as a process of human minds encountering human artifacts and evaluating them in human terms for other humans, you eventually realize that the testing process is harmed when any one point of view or way of thinking comes to dominate it.

I would like at least one programmer on my test team. Maybe a few. In some cases (such as in testing development tools) I will need everyone to be a programmer. However, I do not treat programming ability as the center of gravity for testing. Instead I look for rapid learning, high tolerance for uncertainty, and general systems reasoning. Even then, I don't need EVERYONE to be good at those things.


I'd use different rhetoric and be less critical, but I understand what James is saying. As testers, we tend to have developer-envy. The reality is that the two skills are separate, distinct, and complementary. (Unless, say, you are testing a compiler or a debugger. Are you?)

Now, can a non-developer-tester be more effective by picking up a little code? Absolutely. I have an undergraduate degree in Math/CS and a Master's in CIS, of which I am extremely proud - not to mention a decade with the title of developer on my business card. AND it took me years to fight my way through developer myopia to see the whole picture of software testing.

In my experience, developers tend to think in terms of automatable business processes - when exactly what needs to be done up front isn't clear, developers claim that the requirements are "inconsistent" and refuse to program.

The whole picture of testing might include some repeatable elements, but it also includes empirical processes - which adapt through learning as they happen. This is not a straightforward business process to automate. Developer-Envy doesn't help our craft, it hurts it.

That's just my opinion right now, I'm always open to changing it or presenting it more effectively.

... And with that, I'm off with my family for a week and a half of vacation in the Caribbean. I welcome your flames, er, I mean comments.

Thursday, March 05, 2009

Nifty Triangle Test Example

The triangle test is considered by many a "classic" example of a software test challenge. In fact, I believe it goes all the way back to Glenford Myers' The Art of Software Testing - the first book ever published on software testing - and may go back as far as the testing chapter of Jerry Weinberg's book on PL/1 programming.

The problem is simple - you have three inputs for the sides of a triangle, and the computer tells you whether the values you entered make a scalene, isosceles, or equilateral triangle - or no triangle at all.
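
For readers who'd like to see the shape of the problem in code, here is a minimal sketch in Perl - my own toy version for illustration, not the code behind the website mentioned below:

use strict;
use warnings;

sub get_triangle_type {
    my @sides = @_;

    # Need exactly three sides, each a positive number
    return 'not a triangle' unless @sides == 3;
    return 'not a triangle'
        if grep { !defined($_) || !/^\d+(?:\.\d+)?$/ } @sides;

    # Triangle inequality: the two shorter sides must out-reach the longest
    my ($short, $middle, $long) = sort { $a <=> $b } @sides;
    return 'not a triangle' if $short + $middle <= $long;

    return 'equilateral' if $short == $long;                       # all three equal
    return 'isosceles'   if $short == $middle || $middle == $long; # two equal
    return 'scalene';                                              # none equal
}

Even a toy version like this suggests test ideas: the boundary where the two short sides exactly equal the long one, zero-length sides, non-numeric input. That input-centric flavor is exactly the point of the SDET posts above.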

So why am I ambivalent? Because it's a geometry problem with big words in it - and some people are simply not confident or comfortable talking about math. So you could have a very good tester who doesn't "grok" the problem, or you could have a very astute potential tester who is turned off by the exercise.

However, if you /like/ math and geometry, and you can handle it, then have I got a website for you - the triangle test site (which I got from Elisabeth Hendrickson via Twitter) not only simulates the problem, but evaluates the answers you give as you run your test cases. It tracks the kinds of tests you run and, when you think you are done, can 'evaluate' the scenarios you ran and give you some suggestions on what you may have missed.

Now, it's a canned exercise; don't take it too seriously. But for a quick, fun feedback challenge, it's hard to beat. Check it out.

Of course, the kinds of tests the simulator allows you to run are constrained by the inputs; they are traditional send-it-different-input tests. That's a very developer-ish way of looking at testing, which ties back to what happens when you have an SDET worldview. More on that tomorrow.

Grand Rapids Testers' Round Table

I'm organizing an informal discussion of software testing in West Michigan. I've planned the first meeting for Monday, April 27th, at Calvin College, over lunch - 12:00-1:00.

The focus will be on testing as an investigative, critical thinking activity. Discussion of testing as a design activity ("TDD") or as a communication tool ("Acceptance Test Driven Development") is welcome but is not the focus. Test-infected developers and formal Software Quality Assurance professionals may be interested as well.

The event is vaguely inspired by the Indianapolis Workshop on Software Testing. For the time being, though, it's "just lunch" with a bunch of like-minded people.

Seating is limited and by invitation only. If you'd like an invitation, please drop me an email privately - matt.heusser@gmail.com.

If you don't live in West Michigan and don't have a club to go to, I would encourage you to start your own informal testing group. You don't need a charter or legal framework to get four or five people together for lunch. If you do want a charter or legal framework, though, there are groups like QAI Worldwide that offer a chapters program.

Tuesday, March 03, 2009

The Security Issue

We just wrapped up an interview for the Security issue of Software Test and Performance.

Gosh, I wish I could use this:

[xkcd comic]

(For this and similar web comics, see xkcd.com)

How to be a first-class citizen as a Tester

Let's say you don't get involved "up front."

Let's say you are not invited to the big meeting.

No, you don't have executive buy-in, nor are you the process police. You can't tell the developers what to do.

Instead, at the tail end of the process, someone gives you a CD, or a website, or rules to build your own web server - and asks you to test it. They seem, for some reason, to view this as a reasonable thing.

So you're sitting with the app, testing it. Until you come up with some bugs, the devs, project management, and possibly even an executive may have little to do. They may hang out by your cube, complaining that QA is the "bottleneck."

Then you have that moment. You're looking at the app and you realize, even before you use it, that a particular piece of code is probably broken. You speak quickly to the dev, the PM, and the Vice President: "You see that button - right there? I bet that if I push it, the whole app crashes."

The Dev smiles and points out that he wrote a unit test for exactly that functionality. The PM points out you have a bad attitude, and the Vice President asks "why wouldn't that work? Of course it will work."

So you say "well, I don't know the underlying architecture, but I know a little bit about common failure modes, and I just don't expect that to work. Let's just press the button and find out, shall we?"

Push. Crash.

It may take years for you, as a tester, to get to that moment. However, I submit that once it happens, it'll begin to happen more frequently.

And, all of a sudden, you've gone from "verification/bottleneck guy" to "person who can predict failures."

Do it twice in a row, and suddenly you are a first-class citizen in the development community - and people invite you to the big meetings.

Predicting failure is just one example, but it fits my premise well - if you want to be a first-class citizen, one way to do it is by focusing on demonstrating value and competence.

At least, that's been my experience. What has worked for you? Please leave a comment and sharpen my thinking.

Monday, March 02, 2009

Yay! March ST&P is online!

Well that didn't take long. The March Issue of Software Test And Performance Magazine is available on-line for free. Download it here. Our column appears on page nine.

He's not dead yet (I'm feeling better)

What could possibly happen to Matt Heusser that he wouldn't blog for a couple weeks?

Well, try the flu - which found its way into my lungs and became bronchitis.

After a week or so, the bronchitis was gone, yet I still could not breathe easily. It turns out my lungs were not recovering - they remained inflamed - which meant they could not take in enough oxygen. My breathing was ragged, shallow, and constant, and I became winded walking up a flight of stairs.

As you can guess, I let Creative Chaos languish a bit.

So, what's happened since?

(1) I've given more thought to "what's an SDET" - details to come

(2) My copy of the March issue of Software Test & Performance Magazine came in the mail. Unfortunately, the newest issue isn't yet available for download on the web. Expect a link in a few days. I'm particularly proud of our column on web performance management - I think we nailed it.

(3) The second person has passed my test challenge - Laura Vayansky. Congratulations Laura!

(4) I've been having an extended email conversation with Janet Gregory, one of the co-authors of Agile Testing: A Practical Guide for Testers and Agile Teams. One of Janet's concern is that testers are too-often viewed as, in her words "second class citizens." I've been thinking about it, and I have at least one sure-fire way for testers to make themselves first-class citizens, without having "executive buy-in" or "being involved early in the process" or even any formal "process improvement." Wait for it tomorrow!