Michael Feathers recently wrote on the tech-debt Yahoo group:
I think there's a couple of different ways that we can approach all of this. One is to see technical debt as a metaphor, a somewhat fuzzy way of articulating a problem, and hope that it is a powerful enough metaphor to change people's behavior.
And I replied:
That is where I am at; every time someone says "no, it's not debt, it is (fill in slightly better metaphor here)" I recognize they have a point, but I believe tech debt is compelling because it is a metaphor sales guys and VPs and accountants understand.
It's not the "best" metaphor, but it is an extremely simple and straightforward one.
As for metrics, I have strong opinions, and I keep meaning to make a blog post on this that I can turn into a position paper, but it's not happening, so here is my ultra-fast light and quick version:
Tech Debt is simply shortcuts in things that are not measured.
Let me say that again: another way to think of tech debt is as shortcuts taken in the things that are not measured.
Most organizations are capable of measuring the iron triangle: Time, Features, Money (staff). So, when asked for the impossible, technical folks short the thing that is not measured: Quality.
Thanks to Agile methods and bug tracking systems, we have increased the visibility of quality. It's more obvious and harder to short. So to cut corners, we short something else: Maybe it's documentation, maybe it's tests, maybe it's skipping refactoring to get the #$$#%&& thing out the door. It doesn't matter, we skip what isn't measured.
Another term for this is a perverse incentive, or "be careful what you measure, because you are going to get it."
In my experience, an attempt to measure and evaluate tech debt (Crap4J, cyclomatic complexity) will result in game-playing behavior that will destroy the intent; I even have a somewhat popular article on the topic.
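To make the gaming concrete, here is a hypothetical sketch (the function names and logic are my invention, not from the article) of how a per-function cyclomatic-complexity target can be satisfied without making anything simpler: split one branchy function into several tiny helpers, and each function's score drops while the system's total branching stays exactly the same.

```python
def discount(order_total, is_member, coupon):
    """One function holding all the logic: three decision points,
    so cyclomatic complexity 4 -- the version a metric would flag."""
    if order_total > 100:
        rate = 0.10
    elif is_member:
        rate = 0.05
    else:
        rate = 0.0
    if coupon:
        rate += 0.05
    return order_total * (1 - rate)


# "Gamed" version: identical behavior, but the branches are scattered
# across helpers so no single function exceeds the complexity threshold.
def _base_rate(order_total, is_member):
    if order_total > 100:
        return 0.10
    if is_member:
        return 0.05
    return 0.0


def _coupon_rate(coupon):
    return 0.05 if coupon else 0.0


def discount_gamed(order_total, is_member, coupon):
    """Complexity 1 per the metric, yet the total decision logic
    in the system is unchanged."""
    rate = _base_rate(order_total, is_member) + _coupon_rate(coupon)
    return order_total * (1 - rate)
```

The two versions return the same results for every input; only the number the tool reports has improved, which is precisely the perverse incentive at work.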
So far, balanced scorecard is probably the best approach to metrics I know of - where you have a number of metrics that, hopefully, counter-balance each other.
Still, I believe that either (A) they can be easily gamed, (B) you'll spend a lot of energy gathering the metrics, or (C) the metrics will indicate that you (management) don't trust the people you are measuring.
So I recommend three alternatives. First, tech debt metrics can be gathered by technical people, used solely for their own benefit, and not digested or evaluated by management. Second, use the metrics to learn what is actually going on - to improve our mental model of the process - instead of to evaluate. Third, use them occasionally, instead of weekly.
So, overall I have little confidence in tech debt metrics, but I am willing to listen. I would be extremely interested in empirical research or experiments, and have an idea or two about funding if someone wants to try.
What do you think?
Schedule and Events
March 26-29, 2012, Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference for the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At Liberty; available. Contact me by email: Matt.Heusser@gmail.com