"The foo feature is Broken"
"Widgets are completly Horked"
"Gadgets are FuBar under IE7"
"FF3 and the wiki no work-ey"
If you talk to any developer, any PM, or read the testing literature, you'll find these are bad descriptions, because they don't tell the reader what the actual problem is, or how to reproduce it.
A "bug" that can't be reproduced is a bug that can't get fixed, and a great way to annoy PMs and devs. Most educated testers know this and strive to provide meaningful bug reports.
Yet if you look through my own bug reports, now and again, you'll see these types of descriptions. Why do I log such things, and what do they mean?
At Socialtext (and to me personally), 'Broken' means that the entire feature is so messed up that you can tell simply by looking at it. It doesn't render properly, or, if the feature requires a submit, it is impossible to get any successful result using any input.
You don't need reproduction steps - you simply need to try to use the software. Whatever you do, it won't work.
In my experience, testers specialize in exploring the nooks and crannies of the application. We try to find the defects before the customers do. If the feature is broken, exploring is a waste of time -- nothing works, probably because of a single root cause. The thing was never sanity tested; it was never even /poked at/ before being delivered to QA.
Defined and phrased in this way, reporting a feature as broken is not a QA failure; it is a development failure.