
Thursday, February 19, 2009

What's an SDET, Again?

I just got my copy of How We Test Software at Microsoft in the mail. Weighing in at 420 pages, it will be awhile before I can digest the whole thing.

One of the more ... interesting things about the Microsoft test culture is the insistence that the Software Development Engineer in Test, or SDET, be a fully-qualified developer. This causes some degree of confusion; Developer-Types and Agile Advocates say things like "All Microsoft testers write automation all the time" or "All tests should be automated" or "Microsoft views testing as an automation activity." (Don't believe me? I was challenged on it in an interview just last week.)

So here's what the Microsoft guys have to say, straight from the horse's mouth:

The concept of hiring a software engineer with a passion for testing is powerful and is the biggest differentiator between the Microsoft approach to software testing and the typical industry approach. The most common conclusion drawn is that we hire these "coders" for test positions because we want to automate everything and eliminate manual testing. Although we do want testers who can write effective automation, that's only a small part of the equation. Testers who understand programming concepts and computer architecture typically have the analysis skills that testing requires. They can also find bugs earlier and understand the root cause so that they can quickly find other similar bugs and implement early detection. This strong grounding in computer science - the same grounding a developer has - reinforces the tester skills and gives us a more dynamic and flexible workforce of testers. - Page 23

In my words, an SDET who is at least a qualified entry-level developer will be able to understand things like signed/unsigned errors and buffer overflows, and will be able to test tools like a compiler or linker more effectively.

By making that tradeoff, Microsoft gets certain benefits, and also, due to the law of supply and demand, pays a little more for its testers. I suppose I don't have a problem with a company that makes compilers and linkers wanting to make that distinction.

But let's not jump to the conclusion that all tests at Microsoft run unattended at the push of a button. In fact, on page 220, the authors give a guide to how they decide whether or not to automate. The biggest problem they list with test automation - or "unattended test execution" as I'd call it - is that test automation can give false error messages. These error messages are usually the result of a change in the configuration or a problem with the test set-up. Those tests need to be re-run and observed, the problem tracked down ... and all this takes time and attention.
