Four years ago -- well before the crisis of fall 2008 or the housing meltdown -- I wrote a little piece called Against Systems that got a bit of press attention.
The bottom line was that if you design a point-based system, it will likely have flaws, and if you let the system enforce the rules (say, by making it a computer program, or an algorithm), human beings will tend to exploit those flaws.
Wouldn't you know it, but this month's Inc. Magazine has an article, "Rewriting the rules of credit", where economist Amar Bhide answers the following question:
Lending used to be a subjective matter: Why did we wind up with a system of stringent rules?
With this answer:
First, there was an ethos that developed in academia that said that all risks can be quantified. What economists did was say the stuff that we cannot quantify is really on the margin. And what's essential to risk, we can pretend to reduce to one or two numbers. Once you do that, then you can create a machine. If you're required to think of risk in a broad, holistic kind of way, it's much more time-consuming.
Implicitly and explicitly, the government embraced this view of risk. Almost unwittingly [Fannie Mae and Freddie Mac] created the largest mechanist model of lending in the world simply by saying we will underwrite the risk of mortgages if they meet XYZ criteria. If you followed the model for a loan, the government would take it.
The interesting part is that not all lending can be equally mechanized and scaled up. And therein lies the rub. It means if I'm a bank, and I want to expand, I'm going to favor the activity where I can put the pedal to the metal fastest.
Further:
And small-business lending does not fit that model?
Correct. It was and remains an activity that requires a banker to go and talk to the borrower. Analysts can pretend that all housing loans are the same, but with small business, the pretending completely defies belief. So small business gets the short end of the stick.
Now think about that 'pretending' that all loans are the same: It meant no human being was looking at the whole balance sheet for holes. Often, it meant that no human was physically examining pay stubs.
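A toy sketch of what that mechanized 'pretending' looks like -- the thresholds and field names below are invented for illustration, not Fannie Mae's or Freddie Mac's actual underwriting criteria:

```python
# A sketch of mechanized underwriting: approve a loan when every numeric
# rule passes. All thresholds and field names here are made up for
# illustration -- they are not any real lender's criteria.

def approve(loan):
    """Approve if every threshold rule passes; nothing else is examined."""
    return (loan["credit_score"] >= 620
            and loan["loan_to_value"] <= 0.80
            and loan["debt_to_income"] <= 0.45)

# A borrower who lands exactly on every threshold sails through, even
# though no rule ever asks whether the stated income was verified.
gamed = {"credit_score": 620, "loan_to_value": 0.80,
         "debt_to_income": 0.45, "income_verified": False}
print(approve(gamed))  # True -- the machine never looks at income_verified
```

Once the rules are this explicit, gaming them is just a matter of arithmetic: hit each threshold exactly, and nothing in the machine asks whether the whole picture makes sense.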
We all know how that works out.
So what happens when we rely on an impersonal, mechanistic process to look at our risk - both for our process and for our product?
I hope you can see where I'm going with this.
It's a great interview, and should be up on Inc.com in a few days; I am a subscriber and get the "content" early.
I'll link to it when it becomes available.
In the meantime, I hope you'll join me in embracing that 'broad, holistic' view of risk that Amar is talking about.
Otherwise, we're not testers, but just check-ers. Ya know?
Monday, January 17, 2011
2011 Quality Goals
Things have been a little quiet at Creative Chaos lately -- most of my blogging has been over at the Software Test Professionals Community Blog. That said, every now and again I get inspired to do something more personal, outside of my STP work, and I am trying to keep up this blog.
One of those things is participating in the ASQ Influential voices program.
So check it out - this month, in a little 21-second video clip, the CEO of ASQ asked us what our Quality Goals are for 2011.
Which reminds me of an interesting question: Just what is a 'quality goal', anyway?
Are they personal goals? Career goals? Goals for my department, or company? Perhaps they are goals for the entire quality profession, or society?
Suddenly a simple question turns out to be not so simple to answer.
I do, however, have a few answers:
Personal Goals
I currently weigh two hundred and one pounds. That is about thirty pounds over a healthy weight, and fifty pounds over my ideal weight. At the same time, muscle weighs more than fat, so it is possible that I get skinnier but do not lose weight.
So here's my goal: I would like to exercise at least three times a week, eat smaller portions, drink water at restaurants, skip dessert, and go out to eat less often. My goal, by the end of the year, is to be at least fifteen pounds down, but more importantly, it's to be living a healthy, sustainable lifestyle.
Also, as much as I enjoy software development and testing, I want to keep coaching soccer, teaching religious education, and maintain my relationship with my family. If I improve my career, publish three books, and make much more money, but hurt my relationship with my family, I have failed.
Career Goals
First and foremost, I want to keep my gig at Socialtext. If I have to abandon my professional development activities to keep that gig, I will and should. I do want to do professional development stuff, it just can't get in the way of my family, my health, or my gig at Socialtext. In other words, my professional development is subordinate to the other goals.
Now that I've got that out of the way: I have a fair bit going on. The first order of the day is keeping the balls in the air: I'd like to continue blogging and podcasting for STP, I intend to finish up the year of "influential voices" with ASQ, continue my interview column in STQA Magazine, and yes, finish up the editing job for the How To Reduce The Cost Of Testing anthology. Oh, I'll be speaking at the conference of the Association for Software Testing as well.
Whew. That's a lot of running to stay in place.
Now think about these projects -- except for the conference and the book, they are generally recurring. That is, they generate drag on my life that never ends.
With the time left, I hope to start a second anthology project when the first finishes, speak at a second conference, and perhaps write a half-dozen additional articles or so in the year to come. (Time will tell. I'd like to do a dozen, but I fear that isn't realistic.)
Most important, I would like to grow and deepen some relationships -- and an exhausting writing schedule tends to weaken relationships, not enhance them. Hmm. OK. So if I didn't end up writing anything outside my existing commitments in 2011, but got to know people in a real and meaningful way, that would be better. (Who do I want to get to know? Mostly folks in the Rebel Alliance, the Writing about Testing list, and the Miagi-Do school of software testing.)
Beyond these projects, I'm looking to improve my time management. At this point, a 5% improvement would probably be a win; I'm not sure how far I can continue to stretch things.
Company Goals
The company I work for at my day job, Socialtext, has numerical targets for projects, sales, and staffing, which we discussed last week in Palo Alto at our company face-to-face. I may be able to share some of that here in the future, but I hope you understand if I'm a little reluctant to discuss it without checking first.
Suffice to say, we have tough but realistic goals that are explicit.
Goals for the Quality Profession and Society
As I mentioned earlier, I think the "How to Reduce the Cost of Testing" book has potential, as would a follow-up collection of essays. My general goal here is to continue to champion a human-centered view of software testing as risk management.
What would be some leading indicators that this is working?
First off, we could see an increase in membership on the Rebel Alliance list, and more jobs filled by those sorts of informal methods. I would like to see more short-term contract engagements for Boutique Testing flying around, and, when possible, I take pleasure in connecting gigs to seekers. Likewise, I'd like to see Miagi-Do prosper, but I don't know how much time I will have to dedicate to it.
And, of course, I hope the attendance numbers for CAST and STPCon go up in the year to come. Personally, I would like to see 30% growth; I think 20% is reasonable.
Among those conferences, though, I would really like to see more people from Microsoft, Yahoo, and Google. At the same time, I'd like to see a growing realization among management that the solutions those big companies have don't always fit the problems that smaller companies have. So discernment within the quality profession is increasingly important to me.
How you measure that, I'm not sure. But I'll know it when I see it.
Whew. That's a fair bit of goals. Enough for now, I think.
Meta
Having gone through this exercise, which is a good one and I recommend, I can't help but notice a few things. First, that not all of my goals can be numerically quantified - for example, if I lost weight in an unsustainable way and bounced back, it would undo the reason I am trying to accomplish the goal. Second, these are prioritized. Family comes first; accomplishing all the others and failing at family is #fail.
Third, most of these are expressed as commitments, not goals. My actual, personal goals are much higher, and I am not expressing them in writing. Personally, I find that expressing goals in writing often generates the same sort of "brain candy" as actually accomplishing the goal - so I am holding back. A fair bit of behavioral science supports this premise.
Finally, notice what I am really doing here - a bunch of stuff that I find personally enjoyable and that pays off quickly, instead of "building" toward one specific thing.
My personal development turns out to be exploratory in nature, and it seems to be working for me.
Speaking of which, all of these goals are the best idea I have right now. If something does come along that is that much better, well, I feel comfortable failing to "accomplish" my goals in order to do something better.
Except for the stuff with the legal contract with my signature on it. That, I'm going to do, and do well, because inside of all of these goals is a hidden one: I want to be a certain kind of person -- and that includes keeping my promises.
Which reminds me -- I might have a 'nice to have' section of my goals for things that I am not promising, but would like to do, like re-writing the Excelon Development website. Hmm.
I have some thinking to do. More to come.
Thursday, December 16, 2010
Budgets, Badges, and Badgers - III
Earlier in the week I introduced the Malcolm Baldrige Quality Award, and my doubts about it -- yesterday I posted a quick summary of my opinion.
It is a serious subject and deserves a serious answer -- I do believe it is time to get specific.
What is the Baldrige award?
According to its homepage, the mission of the Baldrige program is to "improve the competitiveness and performance of U.S. organizations." Mr. Borawski defined it by saying that:
The Baldrige Program serves to:
1) Identify and recognize role model organizations
2) Establish criteria for evaluating improvement efforts
3) Disseminate and share best practices
To put this into my own words, I suppose the best, most competitive companies have ideas that can be used by other companies. So we should find those organizations, hold them up, and share their ideas. If everyone were to share these ideas, why, we would see increased productivity, which means more goods and services created, which means an improvement in quality of life in our communities and increased competitiveness abroad -- everybody wins. Sounds good to me, eh?
Except ... wait.
Exactly who is deciding what 'best' means?
On Wall Street, we have one way of deciding who's best: The wisdom of the crowd. People buy shares in companies they like, and sell stock if they do not like them. This creates 'winners' and 'losers.'
Likewise, on Main Street we have another kind of voting: The pocketbook. People purchase services from companies they like, and can complain about or boycott companies they don't like. If enough people stop buying your stuff, you go out of business; if lots of people buy your stuff (and you don't mess up along the way), you can become Wal*Mart. (Or at least Target, maybe?)
I call these sorts of systems "market based" because they allow people to vote with their wallet. This means a company needs to sell goods or services people want at a price they can afford -- or the company goes out of business.
The Baldrige Award replaces this measure with its own wisdom. Now, for public service organizations (a police station, say) and non-profit organizations, you might have to do something like that.
But that's not the point, Matt -- sharing best practices is!
The ASQ (and most other Baldrige defenders) are quick to point out that the program is not about the award; it is about the sharing of best practices.
Ahh, there is that term: "best practices." I have to tell you honestly, that term creeps me out.
"Best practices" is not an engineering term.
Engineers of any stripe, software or mechanical, do not talk about best practices. They talk about tradeoffs, about losing something less important to our group at this point in time in order to get something more important to our group at this point in time.
The groups and the points in time might change, so no practice is ever "best."
In fact, I belong to a group called the context-driven school of software testing that censures the term best practices.
By censure, I mean outlaw. If you use the term at one of our conferences, you'll likely be told to use another term. (More likely, your submission won't be accepted, and you'll get an email explaining why.)
When I hear the term "best practices", what comes to mind is marketing, sales, hype, and sloppy thinking.
Sure, the Baldrige program might share practices - but are the cash-handling practices of a bank going to apply to a gas station? How about to a one-person small engine repair shop?
It's likely that they will not - that implementing those practices would cause more harm than good.
What are the criteria for the program?
The criteria for the business side of the Baldrige award ("performance excellence") are a seventy-seven-page PDF that provides a framework for evaluating a business. The evaluation terms include things like "Measurement, Analysis, and Knowledge Management", "Workforce Focus", "Customer Focus", and "Results."
Now this is where I have to get a little personal and base my opinions on my experiences, just a little bit. You see, the Baldrige program has a relatively small budget - so to evaluate companies it relies on volunteer "examiners." Of all the aspects of the award, I probably like the volunteer/examiner program the best.
Ideally, I should become an examiner and take the training myself - but I've met enough examiners and talked to them to have some idea of what the program entails. Suffice it to say that the program evaluates companies according to a value system - to see if the company's work is stable, predictable, and measured enough that it can experiment with a change and know numerically if the results are sufficient.
The examples I have seen were around hospital wait times, and time-to-execute on certain recurring operations, like perhaps a blood draw. Yet there are significant challenges in measuring knowledge work, which is an increasingly large part of the American economy. Even if the work is repeatable, I might question whether it is valuable -- for example, Barnes and Noble had a great, stable, repeatable system to sell books in 1995 ... right up until Amazon.com came along with a disruptive innovation and took away their business model.
That idea -- that disruptive innovation is more and more dangerous to a business that becomes more and more highly specialized -- isn't mine alone; it is a hallmark of modern risk management.
So sure, running your company "by the numbers" is one ideal of business management, but it's not the only one, and it bothers me that our government would institutionalize it. (This is probably the one area I know least about the program, but I am open to learning more, and I calls 'em like I see 'em.)
Should our government be doing this?
I may not be a constitutional scholar, but I've read the thing, and I see that the government has certain powers enumerated in the Constitution, while those not enumerated are reserved to the states. I do realize that recently, as a nation, we have not paid a whole lot of attention to the document, especially its intent. Further, I realize that the Federal Government is granted the right to regulate interstate commerce, and that power is used to justify several large government organizations like OSHA and NIST, compared to which Baldrige is a drop in the bucket.
Further, our Federal government is one of the world's largest employers; I think we employ something like two million people, and that's before adding government contractors like Lockheed Martin.
On the business side, I can't see a reason the Federal government would view spreading performance excellence "best practices" as within its role. The appeal to 'national interest' seems to go against the historical reason we exist as a country -- we exist because we wanted the government out of our business. In addition, it smacks of centralized planning to me -- something the Russians tried after the Second World War -- to "decide" the "right way" to do things and to spread that out to every company, instead of letting the free market decide.
It didn't work out that great for the Russians.
We need less of this, not more. According to my value system, we need less scripted behavior and more thinking -- the group we need to hold up as exemplars is most likely the liberal arts tradition.
But that's my opinion. I don't need government money to compete; give me a microphone called the internet and let people respond if they want to.
Conclusions
By now you realize that I'm not keen on the Baldrige award. Of course we should de-fund it. More than that, we should ask how a program like that ever got funded in the first place!
But that raises an interesting question. If Quality is what Paul Borawski calls "the set of concepts, techniques, and tools that connect good intention with realized and sustainable outcomes", how is it possible that we have such different understandings of the role and benefits of the Baldrige award? Wouldn't you hope we came to the same conclusion, not conclusions that were wildly different?
This is a contradiction and it's probably wise to check our assumptions.
The simplest explanation I can think of is that Paul and I have different values; that he believes that more centralized planning (or "sharing" if you have a lighter touch) combined with management by the numbers will lead to better outcomes for the United States, or even the world.
I've made my case against this worldview.
I would be pleased to see a strong reply; I'm interested in the discussion, or, possibly, an explanation of what I am missing -- the benefits that the Baldrige program adds that I am failing to take into account.
Either way, as a tiny little niche industry, I suspect we 'quality' people have a fair bit of work cut out for ourselves.
It's an exciting time to be a tester.
It is a serious subject and deserves a serious answer -- I do believe it is time to get specific.
What is the Baldridge award?
According to it's homepage, the mission of the Baldridge program is to "improve the competitiveness and performance of U.S. organizations." Mr. Borawski defined it by saying that:
The Baldrige Program serves to:
1) Identify and recognize role model organizations
2) Establish criteria for evaluating improvement efforts
3) Disseminate and share best practices
To put this into my own words, I suppose the best, most competitive companies have ideas that can be used by other companies. So we should find those organizations, hold them up, and share their ideas. If everyone were to share these ideas, why, we would see increased productivity, which means more goods and services created, which means an improvement in quality of life in our communities and increased competitiveness abroad -- everybody wins. Sounds good to me, eh?
Except ... wait.
Exactly who is deciding what 'best' means?
In Wall Street, we have one way of deciding who's best: The wisdom of the crowd. People buy shares in companies they like, and sell stock if they do not like them. This creates 'winners' and 'losers.'
Likewise, on main street we have another kind of voting: The pocketbook. People purchase services from companies they like, and can complain about or boycott companies they don't like. If enough people stop buying your stuff, you go out of business; if lots of people buy your stuff (and you don't mess up along the way), you can become Wal*Mart. (Or at least Target, maybe?)
I call these sorts of systems "market based" because they allow people to vote with their wallet. This means a company needs to sell goods or services people want at a price they can afford -- or the company goes out of business.
The Baldridge Award replaces this measure with it's own wisdom. Now, for public service organizations (a police station) and non-profit organizations, you might have to do something like that.
But that's not the point Matt, sharing of best practices is!
The ASQ (and most other Baldridge defenders) are quick to point out that the program is not about the award, it is about sharing of best practice.
ahh, there is that word. "Best Practices." I have to tell you honestly, that term creeps me out.
Best practices is not an engineering term.
Engineers of any stripe, software or mechanical, do not talk about best practices. They talk about tradeoffs, about losing something less important to our group at this point in time in order to get something more important to our group at this point in time.
The groups and the points in time might change, so no practice is ever "best."
In fact, I belong to a group called the context-driven school of software testing that censures the term best practices.
By censure, I mean outlaw. If you use the term at one of our conferences, you'll likely be told to use another term. (More likely, your submission won't be accepted, and you'll get an email explaining why.)
When I hear the term "best practices", what comes to mind is marketing, sales, hype, and sloppy thinking.
Sure, the baldridge program might share practices - but are the cash-handling practices for a bank going to apply for a gas station? How about for a one-person small engine repair shop?
It's likely that they will not - that implementing the practices can cause more harm than good.
What is the criteria for the program?
The criteria for the business side of the Baldridge award ("performance excellence") is a seventy-seven page PDF that provides a framework for evaluating a business. The evaluation terms include things like "Measurement, Analysis, and Knowledge Management", "Workforce Focus", "Customer Focus", and "Results."
Now this is where I have to get a little personal and base my opinions on my experiences, just a little bit. You see the Baldridge program does have a relatively small budget - so to evaluate companies it relies on volunteer "examiners." Of all the aspects of the award, I probably like the volunteer/examiner program the best.
Ideally, I should become an examiner and take the training myself - but I've met enough examiners and talked to them to have some idea of what the program entails. Suffice to say that the program evaluates companies according to a value system - to see if the companies work is stable, predictable, and measured enough that it can experiment with a change and known numerically if the results are sufficient.
The examples I have seen were around hospital wait time, and time-to-execute on certain recurring operations, like perhaps a blood draw. Yet there are significant challenges with measuring knowledge work, which is an increasingly large part of the American economy. Even if the work is repeatable, I might question if it is valuable -- for example, Barnes and Nobles had a great, stable, repeatable system to produce books in 1995 ... right up until Amazon.com came along with a disruptive innovation and took away their business model.
That idea of disruptive innovation being more and more dangerous to a business that becomes more and more highly specialized -- isn't mine alone -- it is a hallmark of modern risk management.
So sure, running your company "by the numbers" is one ideal of business management, but it's not the only one, and it bothers me that our government would institutionalize it. (This is probably the one area I know least about the program, but I am open to learning more, and I calls 'em like I see 'em.)
Should our government be doing this?
I may not be a constitutional scholar, but I've read the thing and I see that the government has certain powers elaborated in the constitution and those not elaborated are delegated to the states. I do realize that recently, as a nation, we have not paid a whole lot of attention to the document, especially it's intent. Further, I realize that the Federal Government is granted the right to regulate interstate commerce, and that power is used to justify several large government organizations like OSHA, and NIST, compared to which Baldridge is a drop in the bucket.
Further, our Federal government is one of the world's largest employers; I think we employ something like two million people, and that's before adding government contractors like Lockheed Martin.
On the business side, I can't see a reason the Federal government would view spreading performance-excellence "best practice" as within its role. The appeal to 'national interest' seems to go against the historical reason we exist as a country -- we exist because we wanted the government out of our business. In addition, it smacks of centralized planning to me -- something the Russians tried after the Second World War -- to "decide" the "right way" to do things and spread it to every company, instead of letting the free market decide.
It didn't work out that great for the Russians.
We need less of this, not more. According to my value system, we need less scripted behavior and more thinking -- the group we need to hold up as exemplars is most likely the liberal arts tradition.
But that's my opinion. I don't need government money to compete; give me a microphone called the internet and let people respond if they want to.
Conclusions
By now you realize that I'm not keen on the Baldrige award. Of course we should de-fund it. More than that, we should ask how a program like that ever got funded in the first place!
But that brings up an interesting question. If Quality is what Paul Borawski calls "the set of concepts, techniques, and tools that connect good intention with realized and sustainable outcomes," how is it possible that we have such different understandings of the role and benefits of the Baldrige award? Wouldn't you hope we came to the same conclusion, not conclusions that were wildly different?
This is a contradiction and it's probably wise to check our assumptions.
The simplest explanation I can think of is that Paul and I have different values; that he believes that more centralized planning (or "sharing" if you have a lighter touch) combined with management by the numbers will lead to better outcomes for the United States, or even the world.
I've made my case against this worldview.
I would be pleased to see a strong reply; I'm interested in the discussion, or, possibly, an explanation of what I am missing -- the benefits that the Baldrige program adds that I am failing to take into account.
Either way, as a tiny little niche industry, I suspect we 'quality' people have a fair bit of work cut out for ourselves.
It's an exciting time to be a tester.
Wednesday, December 15, 2010
Budgets, Badges, and Badgers - II
Last time I introduced the Malcolm Baldrige Quality Award, and my doubts about it.
I wrote a serious, detailed response that I will post tomorrow. In the meantime, though, I would like to give the five-minute version. It goes something like this:
I have a number of concerns about the Baldrige award, but chief among them is the worldview it seems to be advancing: one in which the ideal business has a defined process and can be managed 'by the numbers.'
In my experience, this kind of business is especially vulnerable *both* to black swan problems *and* to disruptive innovations. It pursues a form of maturity that I do not agree with.
I'll debate the details tomorrow - for now, Barry Schwartz's presentation "Practical Wisdom" says it all for me:
For the record: I'm with Barry.
Tuesday, December 14, 2010
Budgets, Badges, and Badgers - I
I don't talk about politics much on this blog, but there are some interesting things going on right now with the USA Federal budget that seem relevant.
Consider, for example, our massive annual debt. Every politician seems to agree this is a problem -- but have you noticed that few of them have any detailed ideas on how to cut it? If you push hard, they'll come up with a statement such as going over the budget "line by line," but nobody wants to get specific.
Here's why: Every line item on the federal budget has a special interest group supporting it; that is why the line item exists. If you threaten to cut that item, you've just made an enemy of that special interest group.
Threaten to cut Medicare or Social Security, and the baby boomers and senior citizens won't vote for you. Cut Medicaid and you'll lose the disabled and lower-income voters -- same with Head Start or Welfare. U.S. unemployment is hovering around ten percent; add family members supported by unemployment, and you've just ticked off a large group of people.
In other words, if you want to cut anything, you can't get elected. So we come up with silly ideas like a federal pay freeze that will save a billion or two, but combine it with stimulus spending that adds up to hundreds of billions a year. (To help visualize, here's a short video explaining the last spending "cut.")
Philosophers call this "the tragedy of the commons." With each group lobbying for its own individual best interest, we slowly destroy the system as a whole.
And I mean every special interest. Did you know that we in 'quality' have our own line item?
It's called the Malcolm Baldrige Award, a federally chartered award to recognize performance excellence among public, private, and non-profit organizations.
A little googling shows me that the Baldrige award program costs our Federal Government about twelve million dollars annually. As a taxpayer aware of the tragedy of the commons, I'd be inclined to sacrifice the award right off the bat.
Then I read this blog post by the executive director of the American Society for Quality, taking the opposite position. It made me pause and reflect.
What criteria should we use to judge the Baldrige award?
A few things occur to me. First of all, we know the cost, but what is the value? In order to make an informed decision, we would want to subtract the cost from the value -- to find out if the award is a good investment for the American people. We would want to find out if the award is good for society. If that comes out positive, I'd want to ask if the award is within the role of government -- is it the kind of thing the government should do, and, if yes, is it the kind of thing allowed by the Federal Republic defined in our Constitution?
All that said: Let's take a look.
More to come.
Thursday, December 02, 2010
Test Management Certification
So I'm trying to figure out my 2011 (and beyond!) professional development plan.
I've got a lot of ideas -- I like to try a lot of things at the same time and see what sticks.
In 2009, I started a formal, zero-profit, non-commercial school for testing known as Miagi-Do, and that has gone well. So well, in fact, that in a recent email thread on test certifications, someone wrote:
I confess that all I know about Miagi-Do is that all the people who have mentioned they are Miagi-Do rated in some way are people I respect highly. This leads me to believe it is a good program.
That was nice.
That got me to thinking about certifications, and risk.
Think about the main arguments for test certification: that it reduces the risk to the company in the hiring decision, flattens expectations, maybe reduces some of the communication friction because people use the same words and know what those words mean. Mostly, though, I think it is about risk.
Test certifications create /some form/ of differentiation, allowing an HR department without discernment to winnow two hundred resumes down to twenty with relative ease.
But that's the problem: The department lacks discernment.
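To make that concrete, here's a toy sketch -- hypothetical data and invented field names, not any real HR system -- of what a checkbox filter does: it winnows quickly, and it is blind to the very skill the checkbox is supposed to stand in for.

```python
# Toy illustration of a discernment-free resume filter (invented data).
# "certified" is the checkbox; "strong_tester" is the skill it proxies for.

resumes = [
    {"name": "Candidate A", "certified": True,  "strong_tester": False},
    {"name": "Candidate B", "certified": False, "strong_tester": True},
    {"name": "Candidate C", "certified": True,  "strong_tester": True},
]

# The entire "screening" step: one cheap membership test.
shortlist = [r for r in resumes if r["certified"]]

print([r["name"] for r in shortlist])  # ['Candidate A', 'Candidate C']
# Candidate B -- a strong tester without the checkbox -- never gets a call.
```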
So what if, instead of a tester certificate, we came up with a test management certificate?
It wouldn't have to be limited to test managers, of course. A development manager could earn it to demonstrate his expertise in the discipline.
So I took the idea to the Rebel Alliance List (an informal group of software testers) and we kicked the idea around a bit.
Certifications have problems.
What does a certificate mean?
The word "certified" implies to me that some authority has decided something about you. So a certified test manager would mean that this authority (whoever it is) claims the person has the skills, tools, and abilities to be successful in a certain role.
For some very specific jobs that are well defined historically, like plumbing, bricklaying, or electrical work, it seems reasonable to me to have a certification.
But in testing, I've seen far too many people be successful in one environment, jump ship, and find that the very things that made them successful in one environment made them fail in the next.
So the best thing we could do is say something like "If your company has this sort of values, and if it is doing this sort of testing, we think this person has the abilities to be successful."
Nothing is ever guaranteed, of course, but I do think that within our community we could find the skills to do an evaluation to make such a statement that stands up to scrutiny.
But that kind of statement is too complex for the lazy HR person who wants to check a box.
Which, as my friend Joe Harter pointed out, is a problem -- a test management certification might enable someone to be lazy, but at best that is treating a symptom, not a root cause. (I said "at best"; Joe's wording was ... more choice.)
Conclusions
After looking into the issue seriously, I don't think a test management certification is something I can reasonably pursue in 2011. It is appealing, and I won't rule it out for the future, but it's not the top of my stack for next year, and I don't think it should be. Moreover, if you do pursue a certification, you might want to ask the people offering the cert if they have wrestled with the questions above -- and what answers they came up with.
If you get a reply that is a sort of sheepish grin, handwaving, or "mature organizations don't have those sorts of problems", well, you can probably figure out what I think of the cert.
Yet there is another, less often discussed, benefit of certification: It offers a concrete development plan, combined with some sort of sense of accomplishment. Those are good things, and important things, and I don't want to downplay them.
What I recommend instead, though, is that you write your own plan. One place to start is with reading, and piles of it. If you're here, you're in the right place, and I could drop a suggested reading list as a blog post anytime.
Why I'm thinking of that, though, is mostly because of an interview I did recently with Jurgen Appelo on software management, in preparation for his upcoming book on Management 3.0.
Or to paraphrase James Bach -- "If you can't find a certification with integrity, go certify yourself."
More to come.
Monday, November 22, 2010
One way to transition to agile
There's been a little bit of discussion in the blog-o-sphere lately about how to "sell" agile, or how to convince senior managers to "adopt" agile, how to get buy-in and so on.
I pushed back against this; couldn't you just do it? Does Senior Management even know what the software team is doing, anyway?
The answer to that was that Agile is an investment; a typical team might take six to eight months to build infrastructure (CI, Test Driven Development, deployment tools, etc) and culture -- eight months before the team is again productive. (Reference)
Now there are a couple of different ways to transition to Agile; a team might, for example, make a series of small, incremental changes, each of which pays for itself quickly. But I thought it might be nice to share my favorite "sell the CFO" transition-to-Agile story:
A long time ago ...
Senior Manager: "We need you to be the technical project manager on the Super-wiz ERP upgrade, Dave, so we can sell (new product) by (date). If we aren't in the market by (date), (competitor) will eat our lunch. Due to government regulation, we need to file a plan by (date1) and start selling on (date2), or we miss a ONE-YEAR market window, by which time (competitor) will have sewn up the market. Dave, you are the man. Only you can do it."
Dave: "I need a war room and for the entire team to be physically co-located, 100% of the time."
Senior Manager: "Well, I don't know about that."
Dave: "If you want me to have a chance to hit your date, I need a war room."
Senior Manager: "I'm sure you can do it. We have confidence in you."
Dave: "If you want me to have a chance to hit your date, I need a war room."
Senior Manager: "Dave, politically speaking, it's impossible. I could probably get you all the technical folks in one room, but then we'd have to find the room. No, you'll have to make it work."
Dave: "If you can't find a war room, then I'm not the person to manage this project. Perhaps you can find someone else willing to take that problem on. I am not."
Senior Manager: "But you're the best! There is no one else who can do this."
Dave: "I need a war room."
Dave got his war room.
The Moral of the Story
One time that organizations are willing to drop "the way we always do it" and try something new is immediately before an oncoming crisis. If you step into the void and offer to take responsibility, given certain reasonable changes, you've got a real shot at effecting long-term change.
Another way to get a chance is to deal with an organization that is profitable enough that it can experiment.
The classic example of the intersection of those two problems is the creation of the IBM Personal Computer.
The Scrum literature is full of examples of this sort of game-changing project; the classic example is probably in "Wicked Problems, Righteous Solutions" by DeGrace and Stahl.
Sunday, November 21, 2010
The Drake Equation of Software Testing
In the 1960s, a scientist named Frank Drake came up with a formula to predict the probability of life on other planets -- specifically, the chance they would evolve to the point that we could contact them. That equation came to be known as the Drake Equation.
The Drake equation is roughly this:
N = R * f(p) * n(e) * f(l) * f(i) * f(c) * L
Where:
N = the number of civilizations in our galaxy with which communication might be possible;
R = the average rate of star formation per year in our galaxy
f(p) = the fraction of those stars that have planets
n(e) = the average number of planets that can potentially support life per star that has planets
f(l) = the fraction of the above that actually go on to develop life at some point
f(i) = the fraction of the above that actually go on to develop intelligent life
f(c) = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space
L = the length of time such civilizations release detectable signals into space.
This sounds impressive. I mean, if we could just determine those other variables, we can determine the chance of life on other planets, right?
But ... Wait
It turns out that Drake's equation really says that one unknown number that can only be guessed can be calculated as a function of seven other numbers ... that we don't really know either and can only guess. (And if you try, you run into Kwilinski's law: "Numbers that are 'proven' by multiplying and dividing a bunch of guesses together are worthless.")
Now think about this: as an actual meaningful number, Drake's formula is pretty useless. If any of the guesstimates is off by a wide margin (or you can't really predict it at all), then your answer is a non-answer. Worse than no answer, a wrong answer wastes your time and pushes you toward bad decisions.
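To see how fast the error compounds, here's a back-of-the-envelope sketch in Python -- every input is an invented guess, chosen only for illustration. Shrink each of the seven guesses by a factor of three (any one of those revisions is individually defensible) and the product moves by 3^7, or 2,187.

```python
# Sensitivity of a product of guesses: modest per-term errors compound.

def drake(R, fp, ne, fl, fi, fc, L):
    # N = R * f(p) * n(e) * f(l) * f(i) * f(c) * L
    return R * fp * ne * fl * fi * fc * L

# One set of invented guesses.
high = drake(R=7, fp=0.5, ne=2, fl=0.5, fi=0.5, fc=0.5, L=10_000)

# The same guesses, each divided by 3 -- every revision defensible on its own.
low = drake(R=7 / 3, fp=0.5 / 3, ne=2 / 3, fl=0.5 / 3,
            fi=0.5 / 3, fc=0.5 / 3, L=10_000 / 3)

print(high)        # 8750.0
print(low)         # about 4.0
print(high / low)  # about 2187 -- that is, 3**7
```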
Yet what if we didn't try to come up with a 'solid' number, but instead used Drake's equation as a modeling tool -- to help us better understand the problem? To help us figure out what questions to ask? To guide our research?
Suddenly, Drake's equation has some merit.
Back to software testing
Over the next few months I plan on doing some work in the area of the economics of software development -- testing specifically, but also other aspects. To do that work, I intend to throw out some illustrative numbers to help model the problem.
I don't claim that those numbers are "right," nor that any numbers are "right"; the economic value of a software project will depend on what the project is, who the customer is, what the technology stack is, the value of the staff, the time in history ... illustrative numbers are overly simplistic.
So I'm going to abstain from "proving" final answers using those numbers, instead using them as illustrations, to tell a story -- that it could be conceptually possible for a certain technique to work if things turned out like the example.
With that foundation model in place, we can make different decisions about how we do our work, and see how it impacts the model.
I want to be very clear here: I'm going to throw up ideas early in order to get feedback early. The ideas I throw out may be cutting edge -- they will certainly be wrong, because all models are wrong. But they might just have a chance to positively impact our field.
I figured it's worth a try.
Plenty more to come, both here and on the STP Test Community Blog.
Thursday, November 18, 2010
Types of testers and the future - I
Ok, ok, it's not literally called boutique tester. But listen to this:
You: someone who can make guesses about how a website or application is going to fail, prove that you're right, and then communicate it clearly and effectively to the folks who need to fix it. Ideally, you also have the ability to predict the things about our products that will confuse or dismay a new customer.
As an example of what we do, imagine we have four months to write, test, and ship two applications on a brand-new hardware platform (with no specimens of said hardware in the building) while still updating and maintaining our released products on both platforms. Could you keep up without going totally crazy in the process?
If so, we'd like to hear from you — we're looking to expand our QA department by adding another Software Test Pilot.
Here's the ad, along with the company's "Jobs" site. The position is in Seattle, Washington, and I suspect the position is everything it claims to be.
About the future of software testing
For the past couple of years I have been engaged in a sort of shadow-boxing match of ideologies with some folks. Predominantly in the United States it has been over the idea of the tester/developer -- that "the tester of the future" /will/ be writing test automation code, not actually testing, and the customer-facing tester will "go away."
Now certainly, I grant that testers will become 'more' technical over the next decade or so, but then again, our entire society is becoming more technical.
I still think it will take a variety of skill sets to find defects on different types of projects. We might have more tester/dev-ers in certain places, but I think "going away" is a bit strong.
In fact, I believe there are some simple economic conditions that make a boutique tester a valid choice for a company like The Omni Group.
More to come. Or maybe more on test estimation to come. Either way, I'm writin' again. :-)
Tuesday, November 09, 2010
CAST 2011 - August 8, 2011, Seattle Washington
If you've been waiting with breathless anticipation for announcements on CAST 2011, wait no more: It's August 8-10 at the Lynnwood Convention Center in Seattle, Washington.
Note: I am not running the conference, and this is not some sort of official announcement. I have been, however, monitoring the intarwebs closely, waiting for such an announcement, and wanted to be the first to link to it once it came out. :-)
You heard it here first, folks.
Alternatively, maybe you didn't, in which case, I admire your testing-web-search-foo.
So perhaps I should say "you probably heard it here first?"
I am unaware of a Call for Proposals. As soon as I know of a public one, I'll link to it.
See you in Seattle in about nine months?