I got to use this term twice in the past couple of weeks, so I'm going to call it a real thing now. Here goes:
Heusser's First Law of Software Engineering
A conceptual model of a bunch of guesses multiplied and divided by each other is generally worth about the same as the web page it is printed on.
Of course, you're probably going to ask for an example.
UPDATE: I don't mean to be too critical. Asking your customers to evaluate and rank the importance of the software before you build it - to set a vision - to enable people to make tradeoffs that align with that vision - is a good thing. It's when you take these better-than-nothing guesses and dress them up to feel like science - to feel like proof - that I object.
I am especially leery when people drag out the summation symbol (that big capital sigma that looks like an E), summing f(i) from 1 to n and dividing by n, dressed up with impressive-looking graphics.
I look at the symbol and think "Hey, dude, why not just say 'the average of'?" - especially when the text doesn't even bother to say "this symbol is the average of the values."
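Spelled out - my rendering, not any particular author's - the impressive symbol boils down to:

\[
\frac{1}{n} \sum_{i=1}^{n} f(i) \;=\; \text{the average of } f(1), f(2), \ldots, f(n)
\]

Add up the values, divide by how many there are. That is the entire trick.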
When I reach that point, I begin to suspect that the authors are preying on the math-illiterate.
Hence, Heusser's first law of software engineering.
Thursday, August 28, 2008
Bruce Lee!
Someone on the Software Testing discussion list just posted a link to quotes from Bruce Lee, the famous American-born martial artist and actor.
As you read the quotes, it's pretty clear that Lee strove toward self-mastery. Now, I am not into Eastern philosophy, and I am certainly no martial artist, but I *am* into excellence - hey, I founded the conference on it - and when I read Lee's philosophy, I am struck by how similar it is to my approach to testing and development.
Here's the quote page - I hope you enjoy it.
Tuesday, August 26, 2008
Tech Debt - The IT Manager's Dilemma
Chris Sterling has a Thoughtful Blog Post about the IT Manager's (nearly inevitable) decision to take on tech debt in order to hit a project date.
Here's the comment I left on his blog in reply:
It depends. I agree, in general, that IT managers are behaving in ways that seem rational for the system they are participating in - if by rational we mean that they are figuring out the things that get them rewarded and doing those things.
That is moral relativism; you can use the same logic to excuse the American Slave-Holding Southerner in 1860.
So while I may not like the behavior, and I might not even think it's right, I admit that it's nearly impossible for an IT line manager to rise above it.
I don't look to IT managers to solve this problem. It is not the IT manager who actually /takes/ the shortcut of "bad" technical debt; he simply exhorts, begs, and pleads with the tech staff to go faster.
It is the tech staff that makes the hacks, and thus the tech staff that needs to change behavior.
How do they do that? It's pretty simple(1). Give estimates that are /responsible/. When asked to compress, negotiate scope, not quality. Constantly improve our craft. Periodically reflect on the work we are doing. Mentor others and seek mentors. Most importantly, never, ever fall back on the Nazi prison guard's excuse when taking on tech debt: "I was just following orders."
I'm not saying put your company out of business because you need to take a "principled stand"; I'm saying technical folks need to take responsibility for our tech debt and not blame management(2).
Now, I don't want to eliminate all tech debt. I don't even think that is possible. But if we can reduce it by a sizable fraction - say, cut the average (bad) tech debt of a shop in half - we will significantly increase the velocity of software development, thus increasing the financial stability of our companies, and our own sense of health and well-being.
The void of bad code is a pretty big hole - an empty bucket. If what we do can be a sizable splash into that bucket, well, I would be pleased.
Regards,
--matt heusser
Notes:
(1) - I said simple, not easy. Personally, I am interested in mental constructs designed to make the personal choice to "do the right thing" less painful and more rewarding. I shared one in the workshop: Limit moral hazard in the workspace by getting the dev closer to the customer.
(2) - If you look carefully at my comments during "the weaker brother" at the tech debt workshop, that is one thing I consistently did - took personal responsibility for my tech debt choices, instead of blaming the management bull-whip. We need more of it.
Thursday, August 21, 2008
Post-Graduate Program in Software Testing?
Up the road from my house is a cosmetology school. These people study for six months in order to cut hair(*).
For a long, long time I have considered the benefits of some form of intense school for software testing, but it has ... problems.
First, I would want to attract a certain level of talent, so we would strongly prefer students with a bachelor's degree. But the talented people with four-year degrees, who have at least a CS minor, have very little incentive to enroll -- after all, they can just go get a job somewhere.
For the most part, the school's expenses could be funded by recruiters' fees for its graduates; but how do the students cover living expenses for six months? If you want them to be able to get student loans, the school needs to be accredited, and you really don't get accredited for an MS degree. So you'd have to be affiliated with a university or forgo accreditation. (If it's a university, then the instructors need PhD degrees. And the intersection of qualified-PhD-professor and world-class-tester is pretty much Paul Jorgensen and Cem Kaner.)
I would lean toward forgoing accreditation. While a four-year degree should be a strong plus, I don't think it would be a requirement. The whole idea is to lean toward the vocational/technical school model. Then again, hairdressers have a state-based licensing program, which creates a training monopoly.
Anyway, it's an interesting intellectual exercise, even if nothing comes of it.
Then Mike Kelly emails me: It's being done in India right now. And I know a company that teaches testing to Russian immigrants in Silicon Valley.
But both of those are essentially a trade: The school promises to make its students look attractive to Western businesses. These businesses will pay an outlandish salary rate compared to what you could make back home in India or Russia, and the training seems to be relatively quick and technical, e.g., "how to use a specific tool." Out of those seminal schools, could a responsible, integrity-filled US- or Western-Europe-based business model emerge?
I honestly don't know.
Regards,
--matt
(*) - ISTQB requires about 24 hours of in-class work to certify a tester. A typical school for hairdressers requires about 1,600. Comparing and contrasting that with any popular software test certification is an exercise for the reader.
Wednesday, August 20, 2008
Matt's cool developer tool
At the Tech Debt workshop last week, Ron and Chet reiterated Kent Beck's four rules of good software:
1) Runs all tests - and they all pass - with every check-in
2) Contains no duplication
3) Expresses all business intents
4) Minimum Amount of Code
The rules are listed in order of importance. For example, if you could get less code at the price of losing intent - don't do it.
Getting support for number one automatically is pretty common, and as simple as hooking up a continuous integration server.
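For the curious, "hooking up" number one amounts to a polling loop. Here's a deliberately bare-bones sketch in Perl - svnversion, prove, and a sleep standing in for a real CI server (CruiseControl et al. add reporting, history, and email):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Bare-bones continuous integration: poll the repository, and when
    # the revision changes, run the whole test suite and report.
    my $last = '';
    while (1) {
        system('svn update > /dev/null') and warn "svn update failed\n";
        chomp( my $rev = `svnversion .` );
        if ( $rev ne $last ) {
            my $pass = ( system('prove -r t/') == 0 );
            print "revision $rev: ", ( $pass ? 'PASS' : 'FAIL' ), "\n";
            $last = $rev;
        }
        sleep 60;    # a real CI server would also mail the team on FAIL
    }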
So how can we do the rest?
To explain how to get number two, I need to talk about how compression algorithms work. The simplest compression algorithm works like this:
1) Find all the duplicate text (for example, the overuse of the term "inevitable" in a bad novel)
2) Replace that duplicate text with a symbol that does not otherwise appear in the text, such as "|$"
3) Create a "header" in the file that lists all of the symbols, followed by a unique terminator
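To make the three steps concrete, here's a toy of the idea in Perl - my example, with the repeated phrase and the symbol hard-coded where a real compressor would discover them:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Toy dictionary compression: swap one repeated phrase for a symbol,
    # and emit a header mapping the symbol back to the phrase.
    my $text   = "inevitable doom, inevitable taxes, inevitable deadlines";
    my $phrase = "inevitable";   # step 1: the duplicate (found by hand here)
    my $symbol = '|$';           # step 2: must not otherwise appear in the text

    ( my $body = $text ) =~ s/\Q$phrase\E/$symbol/g;
    print "$symbol=$phrase;\n";  # step 3: the header, with ';' as terminator
    print "$body\n";

    # Output:
    # |$=inevitable;
    # |$ doom, |$ taxes, |$ deadlines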
My idea is simple:
Leverage an object-oriented compression algorithm to identify all the duplication in the code. Use some kind of threshold for the duplication - such as multiple lines - or else every time the same variable is referenced it will show up as a duplicate.
Create a results file that lists all of the duplicates, and what line of code they appear on.
The developer uses this to eliminate duplication in the code.
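Here's a minimal sketch of that in Perl - hypothetical, line-based, and crude (a real tool would normalize tokens, not just whitespace), but it shows the shape of the thing:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Naive duplicate-block finder: hash every run of $N consecutive
    # lines and report any run that appears more than once. $N is the
    # threshold that keeps single repeated identifiers from flagging.
    my $N = 3;
    my %seen;    # normalized chunk => list of "file:line" locations

    for my $file (@ARGV) {
        open my $fh, '<', $file or die "Can't read $file: $!";
        chomp( my @lines = <$fh> );
        for my $i ( 0 .. $#lines - $N + 1 ) {
            my $chunk = join "\n", @lines[ $i .. $i + $N - 1 ];
            $chunk =~ s/\s+/ /g;    # ignore whitespace differences
            push @{ $seen{$chunk} }, "$file:" . ( $i + 1 );
        }
    }

    for my $chunk ( keys %seen ) {
        my @where = @{ $seen{$chunk} };
        print "Duplicate block at: @where\n" if @where > 1;
    }

Run it as perl find_dups.pl lib/*.pm and it prints the location of every three-plus-line block that appears more than once.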
Step two is to integrate it into an IDE to suggest refactorings (like a good diff tool that also lets you select 'fixes') - or have some settings that do the refactorings for you. The problem with the second option is that the tool would have to be language-aware, but it wouldn't be too hard to do this for Java or .NET languages.
There you have it. I'm off to find a good OO compression parser in perl! :-)
UPDATE: I looked into these tools about a year ago, and all I found were tools for specific languages, not generic text analyzers. Simian seems to be able to work on any ASCII text file; I guess I am a day late and a dollar short.
Oh well. It would still make a neat open-source project as part of a portfolio.
Tuesday, August 19, 2008
Tech Debt Reaction
I am still decompressing from the tech debt workshop. It was an exhausting but awesome two days.
Right now, I've got the day job to do, plus lots of video editing, thank-you notes, and follow up work. But here's a quick few things that came up:
* Ignorance: I've come to separate bad code written through ignorance from bad code written when we knew better but felt pressure. The former is fixed by better hiring (and maybe salary) practices. The latter is what I am interested in when I speak of tech debt.
* Bug Fixin': One of the strong arguments of the workshop was "always, always" fix bugs as soon as you discover them. I'm not convinced this is true 100% of the time, but it's true often enough that the cavalier attitude toward bugs I often take, well, bugs me a little.
* Moral Hazard: This happens when the person doing the work (dev) is separated from the end customer by too many layers. In this case, it's possible for the tech staff to be rewarded in the short term for velocity at the expense of goodness. The solution is to bring the customer into the conversation. At Socialtext, the customer proxy is the Product Owner, and I believe we do a decent job of this, but I've been spending more and more time on customer-facing calls - and I think this is a good thing.
* Impedance Mismatch: Chris McMahon started this conversation on his blog and with a lightning talk. His simple example is a log file parser. Hire an intern to do the work and he will miss key things; hire a genius senior dev who feels the work is beneath him and you'll get something over-engineered and gold-plated that tomorrow's junior dev might not be able to read. Two solutions: Increase your hiring rigor, or, more likely, scale the work to fit the ability of the person doing it by breaking the work up into much smaller tasks for the intern.
* Liquid Assets: Tech debt is a pretty negative metaphor; it starts from the assumption that the dev team did a crap job and asks how we fix it. But what if the dev team didn't do a crap job - what if we built a system that was well-organized, with clearly defined objects that had specific purposes? Then we could reuse those objects, and future development could (theoretically) get faster, not slower, over time. For what it's worth, I have experienced this when trying to build test frameworks in perl (or "real" code in C++) at companies that do real domain modeling of objects; you simply instantiate the object and use it.
* Affordances: Building on the liquid assets idea, this is where the team identifies its repeated activity and builds technology to eliminate the repetition - such as vi extensions or maybe wikirad. One team said they spent every Friday working on test infrastructure in order to go faster. I don't know about limiting it to just test infrastructure; I wonder if any kind of dev automation might be a good investment.
Brian Marick also has some follow-ups here and here.
The other interesting thing about the workshop is the financial breakdown. You may recall that this was a zero-profit conference, funded by the Agile Alliance, the Software Division of the American Society for Quality, and the Association for Software Testing. The bottom line is that after food, video cameras, supplies, and helping to cover the expenses of a couple of people who came in from out of town, we still have money left over. (We didn't even have to ask AST for any funds - yet - but we will pull that cord if we need help getting the video editing done.)
Which means I have to send an email to someone at the Agile Alliance and ask them where to send a check for $120.00.
I wonder how often they get an email like that?
Monday, August 18, 2008
Craft In Software Development
My latest post to the Test-Driven Development Yahoo Group:
--- In testdrivendevelopment@yahoogroups.com, "Casey Charlton" wrote:
However, I assert the problem is that development is largely a creative skill, not a technical one. And creative skills are nearly impossible to quantify - you know when you like a piece of artwork, but you cannot say why in a way that means anything to anyone else. I cannot prove I am worth my daily rate, other than by people trusting me.
I think I get what you are saying here, and I agree.
However, my undergraduate work was in mathematics, which is similar to CS in that it is extremely abstract, involves complex variables, and, to the uninitiated, looks like complete gibberish. :-)
Still, in math, first and foremost, we were taught Occam's Razor - that the simplest solution is probably the correct one. We were taught a sense of aesthetics - to the point that you could look at a proof and say "that just looks wrong" - and, nine times out of ten, it would be wrong.
Aesthetics is strong guidance in mathematics; not only could you find the error, you are likely to be able to tell exactly where the proof went off the rails by looking for when the ugliness began.
In Computer Science, we lack similar concepts. In fact, when I was in CS School, the big idea was the grand, over-arching framework that was going to solve all our problems(*).
So, while I agree with you that it may not be possible to quantify goodness of software, I believe it is possible to condition, or to teach, behavior that leads toward 'better' code. One place to start is with teaching aesthetics in the undergraduate curriculum; another is to foster a sense of craftsmanship in software development.
This fall, I start teaching computer science part-time at Calvin College, and I am considering running a peer workshop in 2009 on craftsmanship (specifically, apprentice/journeyman/master) in software development.
What are you doing? :-)
Regards,
Matt Heusser
(*) - In math, you'd look at the grand framework that delivered no business value and say "That just looks wrong." :-)
Wednesday, August 13, 2008
We're in the running ...
DISCLAIMER: I try not to put commercial stuff up here. Yes, I work for a company that makes a wiki product. Yes, there is a trial version that is free for up to five users. Most readers know that, and I'm not going to hit you over the head with it.
When I was at Priority Health, I was a software engineer, and we had a lot of exception processes - so I had the code email me on failure and success. As a result, I got a lot of email. Socialtext has defined a use of a wiki to eliminate email bloat, and it's up for nomination as a FastCompany magazine "Bright Idea." I think it's a good idea, and I'm asking you to vote for it. You can register and vote here.
Again, I try very hard to moderate my posts - I'm not in sales. Every once in a while, someone will say "Matt, I have no budget and I'm not buying anything. What can I do to help?" Well, please, register for the fastcompany challenge thing, vote, and, if you are really motivated, email a friend.
Oh - and thanks.
Tuesday, August 12, 2008
Tech Debt on my brain
Ken Schwaber, co-creator of Scrum and author of several Scrum books, has an interesting talk "Canary in the Coal Mine" about the impact of a degrading codebase on project velocity. (Hint: Everything takes more time)
The Tech Debt peer workshop starts tomorrow. Yay!!!
Monday, August 11, 2008
Redundancy?
There's been an interesting little discussion on the Test-Driven-Development list about redundant tests that goes something like this:
A) I have a unit test called "UpdatesDatabase" in my database-connector object that tests to make sure I can update the database.
B) I have the same test in my "Model"; all the model does is call the connector object, but I have a test for it.
C) I have the same test in my "Controller"
D) I have the same test in my GUI/View
E) My customer does the same thing as an acceptance test
It's not one test, it's dozens of tests in each layer, each repeated five times. Isn't this redundant?
My short answer is both - yes, it's redundant, and, at the same time, that is not necessarily bad.
In any large, working system, at any one time, at least one subsystem is failing, and another is compensating(*).
If this was not true, we would not need tests, right?
So, first off, if your automated tests get to the point where they could be automatically generated by a code-generator, you aren't thinking, and risk spending a lot of time on things that might not have much value. If you've got more than two copies of essentially the same test, you may be able to eliminate some of those tests by making a pointed decision about risk.
At the same time, if you get feedback like "It just HAS TO WORK" from management, well, recognize that systems fail, and the way to prevent failure is through redundancy and failover. One way to do that is through "redundant" tests at multiple levels; another is, yes, an independent test group.
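To make that concrete, here's a sketch of "redundant on purpose" - hypothetical Perl, with made-up Connector and Model classes, not anyone's real architecture:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Test::More tests => 2;

    # Fake in-memory "database" connector.
    package Connector;
    sub new    { bless { rows => {} }, shift }
    sub update { my ( $self, $id, $value ) = @_; $self->{rows}{$id} = $value }
    sub fetch  { my ( $self, $id ) = @_; $self->{rows}{$id} }

    # The model does nothing but delegate to the connector.
    package Model;
    sub new  { my ( $class, $conn ) = @_; bless { conn => $conn }, $class }
    sub save { my ( $self, $id, $value ) = @_; $self->{conn}->update( $id, $value ) }
    sub load { my ( $self, $id ) = @_; $self->{conn}->fetch($id) }

    package main;

    # "Redundant" test one: the connector updates the database.
    my $conn = Connector->new;
    $conn->update( 42, 'hello' );
    is( $conn->fetch(42), 'hello', 'connector UpdatesDatabase' );

    # "Redundant" test two: the model gets its own copy of the same test.
    # If the delegation wiring ever breaks, this one fails on its own.
    my $model = Model->new( Connector->new );
    $model->save( 42, 'hello' );
    is( $model->load(42), 'hello', 'model UpdatesDatabase' );

If the model ever stops being a simple pass-through, that second test starts earning its keep.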
regards,
UPDATE: Yes, it's a complex architecture, probably win32, not web, and it could certainly be a heckofalot tighter. I suggest we keep that as a separate discussion.
--matt
(*) - John Gall discusses this in "Systemantics"; if you want the Cliff's Notes version, you can download an MP3 of Peter Coffee discussing this at Agile 2006.
Thursday, August 07, 2008
The Unit Test Unit
Remember the brouhaha when I asked how to define 'unit test'? Well, it's published. :-)
Chris McMahon and I co-wrote "The Unit Test Unit"(*), which updates Myers's 1979 definition for today and includes some more current terminology; here's a link.
Of course, your comments are welcome. Feel free to use our definitions if you think they are helpful.
--matt
(*) - Yes, my editor picked the title. It grows on you, really.
Friday, August 01, 2008
Testing Service Oriented Architectures
I sent an article off to CIO last week on Testing Service Oriented Architectures.
They printed it.
I guess that means they liked it?
Testing is overrated
Words are funny things. Take, for example, the word 'testing.' At first the word seems pretty obvious - checking the software to find out whether it works, right?
Yet in certain parts of the Agile community, you see a very different definition:
- It's not a test unless it is automated
- It's not a test unless it is a unit test, written by a developer
The underlying attitude seems to be We don't need no stinkin' QA people.
And, sadly, I have to admit, I understand where these guys are coming from. There are various schools of software testing, and, as I have pointed out in the past - if you are a strong developer whose only exposure is to the oblivious school of software testing, you might very well feel that you don't need no stinkin' QA people. You might even be justified in it, for your particular shop.
Still, to limit testing to the developer-facing kind is, well, myopic. You leave a lot of interesting failure possibilities out. In other words, it's like eating only a bowl of Sugar Smacks for breakfast - unbalanced.
I've been thinking of writing a strong piece on this for a couple of years now that expresses my whole strategy. It looks like Luke Francl beat me to it; check out his essay on the subject here(*).
Don't get me wrong. I think TDD is wonderful, and that having developers write tests is a great way to increase the quality of the code at every instant in time. Also, testers can plow through much more functionality when it basically works in the first place - and TDD is a great way to get this.
No, testing isn't over-rated, but developer-testing alone -- might be.
UPDATE: At the top, I mention the magic-word re-definition of 'test.' You can see it in the Luke Francl article. If you read really carefully, what he says is, essentially, "Testing isn't enough; we also need to do testing." What he means is: "(Developer-facing, automated, code-driven) testing isn't enough; we also need to do (non-developer-facing, traditional) testing."
I am really not a fan of using the word testing to mean a small sub-set of it.