This month's Software Quality Professional claims on page 51 that:
"By having defined coding standards, developers trained in the use of those standards are less likely to make certain coding errors."
The one thing coding standards guarantee is consistency and, arguably, readability. But fewer errors? I grant that in theory, coding standards can prevent errors. For example, "Don't use global variables," "Every function should have an automated test," or "In Perl, use foreach-style iteration instead of a C-style for loop with ++" - rules like that can decrease errors.
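To make that Perl example concrete, here is a minimal sketch (the @names array and its contents are invented for illustration) of the off-by-one mistake that foreach-style iteration designs away:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my @names = ('alice', 'bob', 'carol');

    # C-style loop: the <= bound runs one slot past the end of the
    # array, a classic off-by-one error that only surfaces at runtime.
    for (my $i = 0; $i <= scalar @names; $i++) {
        print "$names[$i]\n";    # undefined on the final pass
    }

    # foreach iteration: there is no index arithmetic to get wrong.
    foreach my $name (@names) {
        print "$name\n";
    }

The rule doesn't make the programmer more careful; it removes the index entirely, so the mistake has nowhere to live.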
Then again, rules like those are often best learned through mentoring and good craftsmanship, not coding standards. Most of the coding standards I have seen obsess over where to place the curly braces, what to name the variables, and how many spaces to indent.
In fact, I have seen so-called Fagan-style reviews that focused entirely on that kind of slavish adherence to the standard: hours spent without finding a single defect that would actually impact a customer.
This claim is couched inside an editorial, not a journal paper, so I give the author a little wiggle room, but here's my suggestion: if you want to make a statement like this in a professional journal, either provide solid supporting evidence or be honest about its basis. "In my experience" is a great way to be honest; failing that, give at least one tangible example. Otherwise we run the risk of coming off as disconnected and enterprisey.
Is that too much to ask?
Tuesday, December 12, 2006
1 comment:
Yes, it should have been supported, but coding standards can help prevent errors by making wrong code look wrong.
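The commenter's point can be illustrated with a naming convention that encodes safety, so that a misuse is visible on sight. A minimal Perl sketch, assuming a hypothetical us_/s_ prefix convention and CPAN's HTML::Entities module:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use HTML::Entities qw(encode_entities);

    # Hypothetical convention: a us_ prefix marks unsafe (raw user)
    # strings; an s_ prefix marks safe (HTML-escaped) strings.
    my $us_comment = "<script>alert('xss')</script>";   # stand-in for user input
    my $s_comment  = encode_entities($us_comment);      # now safe to emit

    print $s_comment, "\n";      # reads as safe at a glance
    print $us_comment, "\n";     # the us_ prefix makes this line look wrong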