
Friday, March 06, 2009

What's an SDET - II

Yesterday I discussed the same triangle test - and pointed out that, because it focused solely on inputs and expected results, it was a very dev-ish test. I'd like to explain why.

When you focus on inputs and expected results, I can almost see the code. It's something like this:

my $triangle_type = get_triangle_type($sidea, $sideb, $sidec);

A strong developer can test this function in several ways. He'll test the basic equivalence classes and the boundaries, maybe measure statement coverage - maybe even branch coverage - and perhaps run a bunch of negative tests, like entering words instead of numbers, or a "length" that is a negative number. And in the days of MS-DOS, when you had a single user sitting at a single computer typing in one value at a time, that might be just fine.
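
To make that concrete, here's a minimal sketch of what those dev-ish checks might look like in Perl, using the standard Test::More module. The implementation of get_triangle_type, and the return strings it uses, are my own assumptions for illustration - your function may well behave differently:

    use strict;
    use warnings;
    use Test::More tests => 6;

    # Hypothetical implementation - just enough to test against.
    sub get_triangle_type {
        my ($sidea, $sideb, $sidec) = @_;

        # Reject undefined, non-numeric, or non-positive "lengths"
        for ($sidea, $sideb, $sidec) {
            return 'invalid'
                unless defined $_ && /^\d+(?:\.\d+)?$/ && $_ > 0;
        }

        # A real triangle must satisfy the triangle inequality
        my ($x, $y, $z) = sort { $a <=> $b } ($sidea, $sideb, $sidec);
        return 'invalid' if $x + $y <= $z;

        return 'equilateral' if $x == $z;
        return 'isosceles'   if $x == $y || $y == $z;
        return 'scalene';
    }

    # Equivalence classes
    is( get_triangle_type(3, 3, 3), 'equilateral', 'all sides equal' );
    is( get_triangle_type(3, 3, 5), 'isosceles',   'two sides equal' );
    is( get_triangle_type(3, 4, 5), 'scalene',     'no sides equal' );

    # A boundary: one side exactly equal to the sum of the other two
    is( get_triangle_type(1, 2, 3), 'invalid', 'degenerate triangle' );

    # Negative tests: a word instead of a number, a negative "length"
    is( get_triangle_type('cat', 4, 5), 'invalid', 'word instead of number' );
    is( get_triangle_type(-1, 4, 5),    'invalid', 'negative length' );

Notice that every one of those checks does exactly the same thing: call the function, compare the output to an expected value. Inputs and expected results - the "first half" I described above.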

In today's modern environments, that's only half the story, because we'll take that simple program, wrap it in a web service, and then create a web-based application that calls the web service. This "second half" has an entirely different set of potential problems (I'll sketch one example check after the list):

- Does it render correctly in all browsers? IE6, IE7, FF2, FF3, Safari?
- Does it look pretty? Is the user interface usable?
- What happens if I resize the browser? Does the new rendering make sense?
- If I tab through the various inputs instead of using the mouse, does the tab order make sense?
- If I press the "ENTER" or "RETURN" key, does that trigger the submit button?
- What happens if I click "submit" twice in a row - really fast?
- What happens if, after I click submit, I click the back button? Do I go back to the main screen, or do I get one of those bizarre "this page was generated dynamically; do you want to re-post?" error messages?
- What if I am visually impaired? Can I turn up the font, or does the Cascading Style Sheet "lock down" the user experience? If I can crank up the font, is it still visually appealing?
- What if I am blind? Can I use the application with a tool for the blind, like the Lynx text browser? Do all of the images have "alt=" tags?
- Is the web service reasonably fast? What if it's used by 100 users all at the same time? (Note: this was never a problem on MS-DOS, where you only had one user at a time.)
- Can I run the application on my 1024x600 netbook? The ads said my netbook was good "for web surfing".
- Can I run the application on my Cell Phone?
- If I come in from a Chinese, Korean, or Italian system but I know English, does the user experience make sense?
- What if I don't know English? Should our software be localized? If yes, what localizations?
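
A human has to answer most of those questions, but a few can be automated. Here's a minimal sketch of the alt-tag check in Perl, using the LWP::UserAgent and HTML::TreeBuilder modules from CPAN; the URL is a made-up placeholder for wherever your web application lives:

    use strict;
    use warnings;
    use LWP::UserAgent;
    use HTML::TreeBuilder;

    # Hypothetical URL - point this at your own application.
    my $url = 'http://example.com/triangle';

    my $ua  = LWP::UserAgent->new( timeout => 10 );
    my $res = $ua->get($url);
    die 'Could not fetch page: ', $res->status_line unless $res->is_success;

    # Parse the returned HTML and find every <img> tag
    my $tree   = HTML::TreeBuilder->new_from_content( $res->decoded_content );
    my @images = $tree->look_down( _tag => 'img' );

    # Screen readers rely on alt text; flag any image that lacks it
    my @missing = grep { !defined $_->attr('alt') } @images;

    if (@missing) {
        printf "%d of %d image(s) missing alt text\n",
            scalar @missing, scalar @images;
    }
    else {
        print "All images have alt text.\n";
    }

And notice how little of the list a check like that covers; "does it look pretty?" and "does the tab order make sense?" still take a human with a browser and some judgment.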

You'll notice that none of those examples has anything to do with the core code; the web service could be completely correct and the user experience still buggy and unusable. This "second half" of the testing equation isn't about bits and bytes. It has much more to do with critical thinking than with computer science - in fact, it is its own separate and distinct discipline.

This is why I found the triangle example less than ideal; it focuses on one type of testing and completely ignores another. There is simply no way to ask it "what happens when I resize the browser?"

When you hire developers to be testers, you tend to get developer myopia - a focus on the "first half" - code, code, and statement coverage - and less focus on that second half. I don't think I need to name names to say that we've all used applications that might have done exceedingly well on the first half of testing and yet failed miserably to provide a good user experience.

Now, the Microsoft guys claim to be doing it right. They want Software Design Engineers in Test (SDETs) who can do *both* entry-level development *and* critical investigation of software under test - and there are some people who fit into that category. But that's like saying you want excellent sprinters who are also world-class distance runners - while they do exist, there just ain't that many of those people on the planet. Those who do exist can usually find gainful employment, and not all of them want to live in Washington State, China, or India. The result is that, as an employer, you'll either have to (A) pay a relatively large sum for these people, (B) have a bunch of open positions while you look for people with the right mix, or (C) compromise, hiring people who are, for example, good devs you think might make good testers. This last option runs the serious risk of developer myopia.

Last time I checked (before the tech downturn), Microsoft had a few *hundred* open SDET positions. Given the risk of compromise and an HR department that won't allow you to pay a princely sum, leaving positions open is probably the best choice.

I was discussing this with my colleague, James Bach, and he wrote something I would like to repeat:

The words that you quoted [Matt talking about MS's view of testers] represent an attitude that systematically misunderstands testing as purely (or primarily) a technical activity, the object of which is to produce "test cases." I too had that attitude, early in my career. I grew out of it as I came to understand, through my experiences as a test manager, that a test team becomes stronger when it hosts a variety of people with a variety of backgrounds. Through the ordinary action of teamwork, a diverse group of thinkers/learners exploits the knowledge and skills of each for the benefit of all.

My attitude about testing is deeply informed by a study of cognitive psychology, which is the study of how people think, and epistemology, which is the study of how people can know what they know. ... When you approach testing not as hot-dogging with test tools or techniques, but rather as a process of human minds encountering human artifacts and evaluating them in human terms for other humans, you eventually realize that the testing process is harmed when any one point of view or way of thinking comes to dominate it.

I would like at least one programmer on my test team. Maybe a few. In some cases (such as in testing development tools) I will need everyone to be a programmer. However, I do not treat programming ability as the center of gravity for testing. Instead I look for rapid learning, high tolerance for uncertainty, and general systems reasoning. Even then, I don't need EVERYONE to be good at those things.


I'd use different rhetoric and be less critical, but I understand what James is saying. As testers, we tend to have developer-envy. The reality is that the two skills are separate, distinct, and complementary. (Unless, say, you are testing a compiler or a debugger. Are you?)

Now, can a non-developer-tester be more effective by picking up a little code? Absolutely. I have an undergraduate degree in Math/CS and a Master's in CIS, of which I am extremely proud - not to mention a decade with the title of developer on my business card. AND it took me years to fight my way through developer myopia to see the whole picture of software testing.

In my experience, developers tend to think in terms of automatable business processes - when exactly what needs to be done isn't clear up front, developers claim that the requirements are "inconsistent" and refuse to program.

The whole picture of testing might include some repeatable elements, but it also includes empirical processes - processes that adapt through learning as they happen. That is not a straightforward business process to automate. Developer-envy doesn't help our craft; it hurts it.

That's just my opinion right now, I'm always open to changing it or presenting it more effectively.

... And with that, I'm off with my family for a week and a half of vacation in the Caribbean. I welcome your flames, er, I mean comments.

5 comments:

Anonymous said...

I really enjoyed the post. I have been guilty of trying to go too far to the programming side at times, but I love exploratory testing, so it usually pulls me back to the middle. I agree that a good mix on a test team is a great situation. Let the automation handle the technical details and get creative with real testing.

Jeremy

Anonymous said...

Your initial statements are equally valid when turned around - i.e. I can verify that the web page displays in numerous browsers and resolutions and that it's highly usable, but never verify that it actually calculates what a triangle is.

You are absolutely correct in saying that it's difficult to have testers who can both think "at the code level" and who can evaluate the system for stakeholders. The comparison of basketball to golf, however, is quite a bit of a stretch. It's more of a comparison of sprinters and distance runners. Not everyone can do both, but we've found about 7 or 8 thousand testers who can think at the code level and who are world-class testers (note that we have 9,500 testers at MS - details on my discrepancy will require beer).

For what we do, and for our requirements, we need people who can do both. What I've found is that it's easier to find people who are great testers from the pool of people who already know how computers work. This is based on hundreds of interviews and interactions with thousands of testers. There are absolutely exceptions (I'm one), and we have programs to find great testers from outside the CS field as well.

I know I'm just fanning a fire here that I have no control over - but I do find the speculation and assumptions on your part a bit out of character (perhaps you really *need* that vacation).

Starting from "Hiring developers to be testers..." you're pretty much making stuff up as far as I can tell - and I'm saying this knowing that we have worked together and respect each other. I do know that there's this view that MS is screwing up their testing because they're hiring CS folks instead of "real" testers and that's the cause of our quality problems, but believe me - the source of the quality problems is elsewhere (and outside the scope of this comment).

I suppose the fact that some people don't get it is something I should worry about. I try not to, but I do worry about the public face of testing at MS. I also realize that some folks will never get it, and I worry less about them.

From you, Matt, however, I expected better insight.

Matthew said...

While I was disappointed by the comments from hwtsam, I do agree that the metaphor was a bit of a stretch, and I have changed it to the sprinter/distance-runner comparison as requested. Otherwise, I'm not quite sure where I speculated or what was so offensive.

Anonymous said...

Not offensive, Matt - just speculative (I think I may have read too much of your Bach quote into your opinion).

Our best testers don't have developer envy at all. In fact, in some cases, we have developers move from dev to test "because the challenges in test are more exciting". There are probably more going the other way, but it's usually a case of "the grass is always greener" rather than envy.

You seem to imply (whether intentionally or not) that developer/testers are unable to perform empirical processes and learning - or maybe you're just saying that it's important and you're not sure if it's included - it's unclear to me.

Then again, maybe it is just the Bach quote that sets me off.

A minor nit is that we have MS R&D sites in about 15 different states and at least as many countries, so it's not just Redmond, China and India. Off the top of my head, I'd say as much as 20% of MS R&D happens outside of those locations.

Finally, you make a big deal out of the hundreds of open positions. I don't know the number, but let's say it's 400. One thing to note is that there are about 10,000 testers at MS, so 400 is like having 4 open positions on a test team of 100 - not that significant, and not significant enough to imply that the positions are exceptionally difficult to fill.

Lisa said...

Cool post, Matt! Thought-provoking, like most of your posts are. I started as a developer, although without the cred you have. It's interesting to see people go both ways - my most recent fellow tester is now a Java developer on our team, but his testing ability and habit of talking constantly to customers make him more valuable, IMO.

In addition to what you said, I think curiosity, willingness to get out of one's comfort zone, and desire for good craftsmanship are all good qualities too.

Developer envy - well, it's true I wish I could code Java, but I have the same learning block with it as I had with Assembler. Still, I can understand it well enough. Someday I hope to do some pairing on writing production code, just for information.