Schedule and Events
Thursday, November 30, 2006
He suggested the alternative of becoming CEO of something else. A programming language. A platform. A tool. Become known as a contributor or thought leader in a specific space, and if you live in a reasonably-sized city, then at least having a job - even a good one - is pretty much guaranteed. At that point, your options really start to open up - such as consulting, writing, teaching, starting your own company, or just retiring early.
I think this plays well with the end of the blockbuster, because there are now a lot of niche fields within software development - even within software testing. Requirements Engineering.
So how do you find your niche? Chad suggested technologies - either ones so new that the market is small (think Ruby in 2003) and the big companies can't compete, or ones so old that the market is small (COBOL, PowerBuilder) and the big companies can't compete. Chad suggested that if you bet on a new technology and it takes off (Java in 1996), you could become an "expert", which is the slingshot to fame and fortune.
Now, that's a perfectly reasonable strategy, but it's not mine. It feels ... mechanical to me, and I'm not in love with technologies. So I suggest a different route:
If you find that you enjoy a particular subject, that you've read all the books on the subject, and, at lunch, invariably, you are the local expert - then you're already on the path.
Don't find a technology - find a passion. When you've read all the books and find insights between them; when you disagree with some of the authors ... when you feel you have something to contribute ... when you find yourself in a tiny little community of people who really care about the subject ... suddenly you've found a niche.
Once you've found a niche, the next question is how to gain "recognition" for expertise in your niche.
On its face, that's pretty selfish.
The good news is that in order to be recognized for being good, like Linus Torvalds or that other guy, you have to contribute - you have to give something away.
Sure, you can be recognized for having written a book or five, or having a PhD, or a bunch of other things, but I submit that those kinds of things ("marketing") are separate and distinct from being good. They may coincide; they often do, but there are a lot of people who want to be famous in the world of software testing, and a few who want to do good work and be recognized.
If you've ever read a book that was bone-dry, full of platitudes, obvious, and included a bit of impressive hand-waving pseudo-science, it was probably written by someone who wanted to have written a book, instead of wanting to contribute to the community.
So, you've found your niche, but you aren't a developer whose niche is open source, and you don't want to be a self-appointed blowhard. How do you contribute?
I've got a bunch of ideas, but they can wait until tomorrow. In the meantime, what do you think?
(*) - Jerry Weinberg, "Rethinking System Analysis and Design", and Eli Goldratt, "The Goal"
Wednesday, November 29, 2006
Really, I am unprofessional. I've been told by friends and people I respect that my website is terrible. Unprofessional. Ugly. How dare I have a website without CSS?
None of those people have a website
My blog is hosted at blogspot. Gosh. Awful. I should have my own virtual host (I do) and manage my own subdomain under mod_perl so that I could have blog.xndev.com. After all, that's the right way to do it.
None of those people have a blog
I'm doing things in the wrong order. First, I need to write a book, then I need to do public speaking. Why, by then, the companies will come to you, Matt! Stop wasting time on your blog; nobody reads it anyway.
None of those people have written a book, or presented at a major conference
What's going on here?
Two hundred years ago, Voltaire said "The Perfect is the Enemy of the Good"
Twenty years ago, Richard Gabriel wrote "Worse is Better"
Right about now, Matt Heusser is writing "Good enough is the enemy of done"
Or, to put it another way - I value working software over comprehensive design; I like to deliver working software often, and improve it incrementally.
I do intend to improve my website. Eventually, I would like to get Creative Chaos hosted somewhere else. The fundamental difference is one of approach: Instead of a Diamond-Like Jewel - that is never finished - I intend to release systems that are good enough and improve them incrementally.
If you are not convinced, that's fine. To each his own, but I wanted to get this down. Still, think about this: When the 37signals guys released the first version of Basecamp, the radically successful web-based project management tool, they didn't have any billing system in place.
They didn't care, because the only thing you could sign up for was a 30-day trial membership, which was free.
They figured that gave them 30 days to write the billing and payment system.
How ... unprofessional.
Wait, aren't they like, multi-millionaires now?
I mean everyone. Science Fiction Fans watched it; thrill seekers and horror fans watched it (it had a very dark "joker"); kids who played with action figures watched it - though they probably should not have. Mom watched it (romance and drama), Dad watched it, teenagers watched it - Batman had mass commercial appeal.
Batman came out in 1989 - just before the onrush of the Internet and the influx of cable TV stations - back when, for the most part, MTV played music videos.
Back then, people simply had fewer options. Today, thanks to the Internet, cable TV, and satellite, each and every niche can find exactly what they want, and not have to settle for "good enough."
That means market segmentation - instead of going for everyone, the Disney Channel can aim for children aged 4-15, and be very profitable; and Nick Jr. can under-cut them for the 2-6 market.
At the same time, as people demand better and better special effects, blockbusters will cost more and more to produce - but return less and less in results. Sci-Fi fans will watch the sci-fi channel; drama fans have the soap opera channel; thriller fans will watch the thriller channel - and the list goes on.
This means there are fewer opportunities for a $100 million opening weekend (as Godzilla proved in 1998 and Casino Royale is proving right now), but there are more opportunities for the tiny niche provider.
Then again, less money is just fine for a 20-person cable channel.
While I don't know the cost of a cable channel, I do know the cost of a blog, and I know that there are plenty of people who are self-employed thanks to blogging, and a few who are self-employed podcasters. (But most of those have mass appeal, such as credit card sites or reviews of digital cameras. Bummer for me, eh?)
What this means for the world of software testing is that the day of the 900-pound gorilla "we provide everything" software testing company is probably coming to an end. The big body shops will continue to want to place 20 people on-site for an 18-month contract when we just have one interesting problem to solve, and the big tool providers will continue to make big testing suites when we only want to do one thing and do it well.
For the small company, this is the smell of opportunity.
I haven't seen many product companies adopt this model -- yet. Last year at STAREast I met David Gilbert, a software tester who owns a tiny little company called Sirius-SQA and makes a product that enables exploratory testing called TestExplorer.
He is trying to do one thing and do it well, and I wish him the best.
I have seen some individual consultants adopt this approach - Scott and Mark, the performance testing guys, Johanna Rothman does software management, Ron Jeffries does agile coaching.
Each of these niches is so small that the 900-pound gorilla companies can't compete - it just is not worth the time and investment to develop expertise in a market for three people when you are trying to place three thousand.
Even people in the Ruby/Rails community are talking about this strategy; last night I heard Chad Fowler recommend it at XPWestMichigan. Chad wrote "My Job Went to India" and, more recently "Rails Recipes." He's also a pretty smart guy; as soon as the video is up on the web, I'll link to it here.
More on this tomorrow ...
Tuesday, November 28, 2006
If you'd like a short introduction to some deeper issues in software testing, you could read my blog for six months, or, well, check out his post. Seriously, it's good.
A couple of my favorite quotes:
Testers should not try to design all tests for reuse as regression tests. After they’ve been run a few times, a regression suite’s tests have one thing in common: the program has passed them all. In terms of information value, they might have offered new data and insights long ago, but now they’re just a bunch of tired old tests in a convenient-to-reuse heap. Sometimes (think of build verification testing), it’s useful to have a cheap heap of reusable tests. But we need other tests that help us understand the design, assess the implications of a weakness, or explore an issue by machine that would be much harder to explore by hand. These often provide their value the first time they are run—reusability is irrelevant and should not influence the design or decision to develop these tests.
Forcing people to make tests regression-runnable is, in my experience, often a thinly-veiled excuse for the cult of "document everything." At the same time, if you can decrease the cost of documentation, you can run more tests - and more tests means better software - and the best CYA is to not have the bug ship to the customers. For years, I have kept hearing things like "If you don't document it, it didn't happen"; my typical reply is "If you do document it, and stick it in a drawer, you just wasted your time."
Dr. Kaner also wrote:
The focus of system testing should shift to reflect the strengths of programmers’ tests. Many testing books (including TCS 2) treat domain testing (boundary / equivalence analysis) as the primary system testing technique. To the extent that it teaches us to do risk-optimized stratified sampling whenever we deal with a large space of tests, domain testing offers powerful guidance. But the specific technique—checking single variables and combinations at their edge values—is often handled well in unit and low-level integration tests. These are much more efficient than system tests. If the programmers are actually testing this way, then system testers should focus on other risks and other techniques. When other people are doing an honest and serious job of testing in their way, a system test group so jealous of its independence that it refuses to consider what has been done by others is bound to waste time repeating simple tests and thereby miss opportunities to try more complex tests focused on harder-to-assess risks.
We almost got into this a few weeks ago in Indiana, but I didn't take the bait. I probably should have; we could have learned something from each other. I would put it slightly differently:
If your developers are doing automated tests, and you find that a test technique (such as bounds testing) isn't finding any bugs, it's because the devs are covering it. So you should probably shift your focus away from a technique that isn't yielding results to focus on things that provide a better return. Maybe not entirely, but a shift is called for.
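To make the point concrete, here is a toy sketch of the kind of edge-value (domain/boundary) test that, per Kaner's argument, programmers often cover well at the unit level. The function `clamp_percentage` is invented for illustration; it is not from the original post.

```python
# Hypothetical example: boundary (domain) testing done as cheap unit
# tests. If the developers already run checks like these, a system
# tester re-running the same boundaries at the GUI adds little value.

def clamp_percentage(value: int) -> int:
    """Clamp an integer into the 0-100 range (illustrative function)."""
    return max(0, min(100, value))

def test_boundaries() -> None:
    # Classic edge values: just below, on, and just above each bound.
    assert clamp_percentage(-1) == 0     # just below the lower bound
    assert clamp_percentage(0) == 0      # lower bound
    assert clamp_percentage(1) == 1      # just above the lower bound
    assert clamp_percentage(99) == 99    # just below the upper bound
    assert clamp_percentage(100) == 100  # upper bound
    assert clamp_percentage(101) == 100  # just above the upper bound

test_boundaries()
```

If suites like this pass on every build, a bounds-focused system test is unlikely to find anything new - which is exactly the signal to shift effort elsewhere.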
Come to think of it, if you are using any test technique and not getting results, it's probably because the software seems to work in that way, so try something else.
Fifteen years ago, as a cadet in the Civil Air Patrol, I sat through a class where they explained this principle. In a missing aircraft search, when you have a radar hit, the probability of discovery decreases the further you get from the radar hit. The Mission Coordinator can calculate the probability of discovery (POD), and when you've searched the close areas enough that the POD is up, you start to believe that the plane isn't close, and so you move the search parties further out.
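Here is a toy model of that idea - the numbers and the falloff formula are invented for illustration, not real CAP search-planning math. The point is only that repeated sweeps of the same nearby cells hit diminishing returns, which is the cue to move the searchers out:

```python
import math

# Illustrative numbers only -- not actual Civil Air Patrol doctrine.
# Model: single-sweep probability of discovery (POD) decays with
# distance from the last radar hit; repeated sweeps of the same cell
# accumulate as 1 - (1 - pod)^sweeps.

def single_sweep_pod(distance_miles: float, base_pod: float = 0.8,
                     falloff: float = 0.1) -> float:
    """Per-sweep POD, decaying exponentially with distance from the hit."""
    return base_pod * math.exp(-falloff * distance_miles)

def cumulative_pod(distance_miles: float, sweeps: int) -> float:
    """Chance that at least one of `sweeps` passes finds the aircraft."""
    p = single_sweep_pod(distance_miles)
    return 1 - (1 - p) ** sweeps

# After a few sweeps close in, the cumulative POD near the hit is
# already high -- the next sweep buys more in a cell farther out.
for miles in (1, 5, 10, 20):
    print(f"{miles:2d} mi: 1 sweep {cumulative_pod(miles, 1):.2f}, "
          f"3 sweeps {cumulative_pod(miles, 3):.2f}")
```

Swap "search cell" for "test technique" and "sweep" for "test run" and the analogy to shifting test effort falls out.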
The application to testing is an exercise for the reader. :-)
In the meantime, check out Cem's post ... more later.
Sunday, November 26, 2006
And yes, I know that's a lame explanation. I still haven't told you what the Blue Man Group does, or, the obvious reason to post - how that should or could affect the way we do presentations on software development.
Here's the first obvious part - Blue Man combines watching with doing - or at least, they create the illusion of it. It makes the entire experience more memorable, and, as I see it, a big part of the problem with the "drive by training" we do in software today is that it is entirely disconnected from what we do on Monday. Disconnected from who we are. That might be a good way to make money, but it's a lousy way to help an industry improve.
So, here's the second thing - Blue Man Group is going to be on SCRUBS on Thursday, November 30th. So before I bother to try to explain what they do, check out the TV show. If the TV show doesn't reflect the group, well, there are a few Blue Man links available on video.
Also, I think the post on the requirements problem is pretty good, so if you haven't read it, please, scroll down ...
Tuesday, November 21, 2006
Ok, let's do some critical thinking on that. Why are requirements hard?
In my honest opinion, the skill set to do requirements is a combination of writing skills, an understanding of the problem domain, and an understanding of technology. To paraphrase Jerry Weinberg, it's not that you have to analyze requirements (break into component pieces) - it's that you have to synthesize them - get them to play nice together.
I would like to talk about that - and do it in an interesting way.
So here is the first Excelon Development podcast (10MB) and also a handout to help follow along.
Since most of the success literature approaches requirements from a customer viewpoint, the podcast talks about requirements from a developer's viewpoint.
The fact that different people use the same document for different purposes is an entirely different problem; perhaps that's a follow-up.
Still, "all too often" is not the same as "always."
Scott Ambler has an interesting article on architecture in this month's Agile Journal. His take seems to be that if your team is big enough then you must do some architecture. In that case, the challenge is to keep the architecture light and grounded in reality.
Of course, Dr. Kaner is a lawyer. :-)
The great thing about the article (which I have printed off to explore in depth) is that it is comprehensive. Perhaps when the issue comes up again, I can find a way to politely ask "Have you read the Kaner paper on the subject?" Of course, he has published others.
This brings up an interesting question: So far, I've been listing interesting resources here as I find them. It would be neat to have some sort of categorization scheme to make them available quickly; something like Brett Pettichord's Software Testing Hotlist.
Sunday, November 19, 2006
Now, again, don't get me wrong(*) - Metrics can be very helpful, but they need to be explained in context. For example, let's say that the data points to fewer and fewer defects found per week, or even per day. Why, that's a "converging trend line on defects open" - the product is ready to ship!
Or, it could just be the month of December, when everybody took vacation.
Or maybe a tester quit, and the lead and supervisor are spending a lot of time interviewing and no time testing - without context, the uninformed reader begins to make up explanations for the behavior of the data. That can be very, very bad.
What bugs me the most about metrics, and, well, I'll be brutally honest here - is the purpose they seem to serve.
As they are presented in the textbooks (and I have read a lot of them), metrics make things easier. After all, once we define "good" and put performance metrics in place, then all the decision maker has to do is breathe easy (when the numbers keep going higher) or make a stink (when they don't).
Now, read that paragraph again. I submit that in some cases, it's not really about making the job easier - it's about creating a situation where people don't have to think. After all, they can just manage to the numbers.
When I think of the context of software development, every time I can think of a situation where someone was clinging to an idea because the alternative was scary and involved thinking for themselves, it has gone badly. CMM, UML, RUP, CASE, record/playback testing, Agile ... take your pick - when the motivation was to solve all our problems and avoid thought ...
Well, people got what they deserved.
Doesn't anybody read Fred Brooks anymore? :-)
(*) - (the 3 sentences = 1000 words principle kicks in)
Thursday, November 16, 2006
http://www.glsec.org/pages/Presentations_2006. There is also Audio for Carl Erickson's keynote and most of Tim Lister's Keynote.
Both keynotes were good; Lister's audio starts off with the slide with the bull on it.
Wednesday, November 15, 2006
To start, Matt Heusser wrote:
Yesterday, I got into a discussion with an Agile tester (Capital-A). Well, I wouldn't call it a discussion, really. He said something and I got upset. I mean really upset; the blood rushed to my face, my breathing got faster, my fight-or-flight reaction kicked in, I may have said something I should regret - I don't remember because fight-or-flight was in such fast gear.
What did he say? He declared a given testing practice, a practice that I have used and will continue to use, was "Wrong."
To which Ron Jeffries Replied:
I would like to note, respectfully, that my view isn't that declaring the practice wrong ended the conversation. My view is that you ended the conversation by losing control of yourself. To rephrase what you wish he had said, we can come up with something you might wish that you had said:
A context-driven tester, being told that some practice he uses is "Wrong", might say something like "In my experience, on the projects I have worked on, the benefits that I have seen from that practice have always far outweighed the risks."
Except that, most of the time, they don't. Most of the time, the idea that this specific practice is "wrong" is perfectly fine. It's just that the practice has boundary cases where a tester with experience and good judgement can see that the benefits _do_ outweigh the risks.
Well, of course, you'd have to say whatever you believed. My point was only that you had the option of not breaking off the conversation just because the other guy was saying something distressing. And, of course, knowing you, I'm not surprised that you went back later.
I'm not anti-agile, in fact, I like agile, even with a capital-A. I am instead concerned about context-imperialism; where you take a solution that maps well in a specific context and try to apply it to all contexts.
Yes, I understand that concern. It's important to recognize that we all have some context within which we're comfortable, and outside which we are more inclined to panic. When I was starting with XP, I was very afraid to move far beyond the basic practices, because I didn't know what might happen, and within them I felt safe. Now I have a wider range.
As for losing control - perhaps my post was a little over the top. What I actually did was mumble something about context-driven-testing and extricate myself pretty fast. After I had regained composure, I had a discussion very similar to the one you described, (I started out with "Yesterday, when you said this practice was wrong, I think you really meant ...") It went very well. Could I learn to be more thick skinned? Certainly.
Yes. I'm not surprised that you went back, and I'm proud of you for doing so. (Not that I have any right to be. Still, I am.)
As for being thick-skinned, I'm not sure I'm recommending that. I think it's important to feel ... and then to act in the way most suited to get us a productive outcome.
I'm reminded of what A.E. van Vogt called the "cortical-thalamic pause", a pause that we take between when the thalamus triggers fight or flight, and when the cortex has time to come up with a more sensible response. (I think he picked up the term from some thinker of that time, but I'm not on line to goog it out.)
Whatever we call it, if we can take the moment, either standing there or coming back after walking away, it seems to me that we're more likely to get "what we want" out of the transaction.
I still believe that context-imperialism can be dangerous; it can lead to ignoring very good options while having unrealistic confidence in others.
No doubt. The more aware we are, the better we'll do.
Feel free to copy both these to the group. I think it's interesting stuff, and I bet that others will as well.
If it is more than you need, it is waste. -- Andy Seidl
Friday, November 10, 2006
Two of the more interesting booklets to me were the booklet on Skills and the booklet on Knowledge.
First of all, I was happy that the DOL differentiates the two concepts. Many of the testing certification programs simply prove that the reader has memorized terminology or definitions - that they have knowledge. The multiple-choice exams behind the certifications rarely (if ever) test for skill.
The skills survey was especially fun. First, the survey asked how important the given skill was to software testing on a scale of 1 to 5. If you scored anything above 1 (not important), the follow-up question asked "What level of skill (1-7) is needed to perform this job?"
There were descriptions by some levels. Check out my examples below, and think about what skill level you think should be needed to perform the job Software Quality Engineer/Software Tester:
2 - Determine whether a subordinate has a good excuse for being late
4 - Evaluate customer complaints and determine appropriate responses
6 - Write a legal brief challenging a federal law
2 - Proofread and correct a letter
4 - Monitor a meeting's progress and revise the agenda to ensure that important topics are discussed
6 - Review corporate productivity and develop a plan to increase productivity
Complex Problem Solving
2 - Lay out tools to complete a job
4 - Redesign a floor layout to take advantage of new manufacturing techniques
6 - Develop and implement a plan to provide emergency relief for a major metropolitan area
2 - Write a program in BASIC to sort objects in a database
4 - Write a statistical analysis program to analyze demographic data
6 - Write expert system programs to analyze ground radar geological data for probable existence of mineral deposits
Quality Control Analysis
2 - Inspect a draft memorandum for clerical errors
4 - Measure new part requisitions for tolerance to specifications
6 - Develop procedures to test a prototype of a new computer system
2 - Determine why a coworker has been overly optimistic about how long it would take to complete a task
4 - Identify the major reasons why a client might be unhappy with a product
6 - Evaluate the long-term performance problem of a new computer system
2 - Greet tourists and explain tourist attractions
4 - Interview applicants to obtain personal and work history
6 - Argue a legal case before the Supreme Court
A couple of things struck me. First of all, the descriptions for #6 were all people at the very top of the field; arguing cases before the Supreme Court or developing personnel and promotion systems for the United States Army. Oh, and software testing. Apparently, that is the most challenging of the critical analysis skills - or, at least, it was the perception of the authors of this survey.
That got me to wonder what kind of jobs would require level seven skills to perform. Writing at level 6 is "write a novel"; I suppose level 7 would be family-name recognition, if not generational recognition. (Mark Twain)
Second, about half-way through the test, I began to check myself. Generally, it only took about level three or four skills to get and keep a job as a software tester, but I think it takes level 5 or 6 to do it well.
All of the great software testers I can think of offhand are level 6 in at least one of the attributes on the list. Wait, software testing is level 6 on a couple of the skills. Doh! ..
This year, O*NET asked me to participate as an "Occupation Expert." (Mostly because I am a member of the Software Division of the American Society for Quality and have a few years of applicable experience. )
This means that in trade for the title of "Occupation Expert" about software testing (James Bach, eat your heart out), I spent about an hour filling out surveys to help define the role.
Oh, I also got a small honorarium, a certificate, and, um ... a clock. The big thing was the title.
The surveys consisted of one book specific to that job, followed by workbooks on activities, skills, knowledge, work context, and my background. You can see the surveys online.
The very act of filling out the forms forces you to think about what is required to be a software tester, and that made me consider what it takes to be a good one. More about that next time ...
Wednesday, November 08, 2006
James Bach is an independent consultant and a leading member of the context-driven school. He just put out a post on the subject last week. You can read it here.
Bach ended the post with this question:
Any decent context-driven thinker can cite at least three scenarios where taking the context-driven attitude would be a bad idea. Can you think of them?
I'll give my answer, but I think there's more to it than that.
1) Some companies care more about being stable, predictable, and repeatable than they care about being _good_. Some have even made a reputation offering products and services that may not be great, but are the same, every time. Most Brand-Name American Fast Food comes to mind. No matter where you eat it, a Big Mac is the SAME - every time. Trying to examine the situation and determine the best thing to do is not what the manager wants you to do; he wants you to follow the operations book.
(#2 is not politically correct, but it is my life experience)
2) While the American Front Line Combat Soldier is one of the best trained in the world, there are times when the soldier needs to obey all orders immediately and to the best of his ability, because there simply is no time to think. Two things that come to mind are when the squad leader points and yells 'Run!' (which probably means incoming mortars or artillery) or the invasion of Normandy.
Actually, those situations are rare; the American fighting soldier is trained to think and act quickly, to be situationally aware. That said, there will always be a time when an NCO makes a statement with authority and it needs to be obeyed unquestionably or people may die. (This is where trust comes in)
3) Sometimes authors and speakers need to use a bit of hyperbole when making a point. For example 'Any decent context-driven thinker can cite at least three' sounds a lot more emotionally appealing than 'It has been my experience that any decent' or 'All of the decent context-driven thinkers who I have met have' We see the point. (Ok, that one was for grins.)
4) If someone else is falling into a trap of one of the other schools, engaging them in a discussion may fail. What you can do at that point is to take a logical position from a different context and adopt it into the current problem. The other person may immediately recognize that you are making a mistake, criticize you for it - and have an 'aha' moment.
5) Sometimes I like to do context-oblivious manual labor to relax my brain.
6) There are some things to which we are context-specific, like 'I am breathing oxygen', that, most of the time are not worth investing our time in thinking about. Of course, throw me in a lake and bind my legs, and my thinking will change.
There's more to it than that
James is doing something really interesting here. This is, in some way, a sort of meta-testing issue; to analyze the ideas he sets out and find some exceptions. It's very much like how Alistair Cockburn describes Shu-Ha-Ri, which I have to admit, I originally thought was pseudo-philosophy-babble but am beginning to re-examine. Personally, I'm still more comfortable with the three stages of knowledge or the three levels of audience.
Tuesday, November 07, 2006
"I start with three sentences. Then I imagine all the ways that those three sentences could be misunderstood, and I build up defenses and arguments against those misunderstandings. Once I'm done, I've got a 1000-word essay or an hour of material, take your pick."
So, in a previous post I said that heavyweight methods can stifle creativity. I suggested a creative, chaotic process, much like W. W. Royce does at the end of his paper "Managing the Development of Large Software Systems" - viewable here. (Ironically enough, it is the first page of Royce's paper that is credited with inventing the concept of the waterfall model. That's a good place to start - but for goodness sake, don't stop there!)
I left out a bunch of assumptions, like your team is staffed with great people. If you don't have a team of experienced and good people, when you eliminate the binders and templates, the team won't know what to do. Inexperienced you can do something about; break the problem down into smaller chunks and give them guidance. With a team that isn't good ... Wow. Completely different problem.
Here's my take on great people: Great people are all methodologists with a lower-case "m." They have a wide and deep set of methods, and the good judgment and discernment to choose the right method to use in the moment - something that no capital-M methodologist sitting in an office 1,000 miles away is going to be able to do.
Although I've published an article or two on methodology, how I think about methodology is slowly changing. Written today, my "Methodology" would have an emphasis on hiring great people, developing talent and teamwork that I find completely absent from your typical big thick binder. Yes, with XP, Crystal, and a few of the Agile Methods, it's implied, but it is still rarely explicit.
Still, I know of at least one organization that manages this way:
They seem to do ok at it. hmm.
Monday, November 06, 2006
He recently went to Google's Mountain View(*) campus to present to the staff on "how to become a testing expert."
You can view the video on on google directly - here.
You can download his slides as well.
(*) - I assumed it was Kirkland by the size of the staff and the fact that Harry Robinson introduced him, and I was wrong. Harry corrected me by email. :-)
I tried to post this to your blog, but I get an error message.
Peter Drucker provides deep wisdom for managers in a short book called "The Effective Executive."
An executive is someone who is responsible for the value of his own time. Most knowledge workers are (or should be) executives under Drucker's definition.
Drucker made the point forcefully that no executive will do everything well, and that many managers fail because they try to do too much "acceptably" rather than doing fewer things very well.
It has been decades since I had a job so simple that I could do all of it well, or so limited that I could find time to do all of it well. I have had to prioritize. And that includes not spending time getting better at things that I won't do.
I don't want to work on my weaknesses. I want to work on my ability to meet my strategic plan, whatever that is. Sometimes that requires fixing weaknesses, sometimes building on strength, often it requires developing a new strength in an area that wasn't seen previously as relevant.
I think it's interesting that my original post only offered two options: Work on your strengths, or work on your weaknesses. This is just two choices, a dilemma. Dr. Kaner is offering a third option; work on the things that map onto your goals (or strategy).
Jerry Weinberg calls this the "Rule of Three"; that when you feel limited to one or two options, you haven't thought things through enough.
Thanks for the insight, Dr. Kaner. Does anybody have a fourth?
Here's the deal: Software development is a creative process that you learn about as you do. That means that things change as you do them; more important than "getting it right the first time" is periodic assessment and course adjustment. That means that feedback is king.
For example, there are different ways to make soup: You can follow the directions (a prescription) or you can hire a world-class chef. If you follow the directions, what you get may be good, it may be bad, but it won't be great.
The chef isn't going to follow directions. He's going to start with something basic and flavor to taste. The college-edumacated people would call this an empirical process (one directed by feedback) instead of a controlled process.
The result of this is that if you want really great software, a predictive methodology isn't going to help much. It might be better to just put some bounds around the software (for example, a defined release process) and recognize that, as with art, heavyweight methods actually stifle and hurt.
We seem to understand this idea for design; Tom Kelley has a book on it called The Art of Innovation. What bothers me is that so few people understand that, short of pressing the F5 key, all software development is a design activity. (Or, at least, if they understand it, they act as if it is not true.)
So, that's where Creative Chaos comes from. Because I specialize in software testing, I could have called it "Destructive Order", but that would just be too confusing. :-)
Since then, I gave it as a talk at the West Michigan DotNet User's Group and proposed it to the SDBestPractices Conference. The talk was accepted, but I couldn't make it; Carl Erickson went in my stead, developing his own material along the same lines, which he then presented himself.
My colleague Jon Kohl is going to present a talk called "Don't Drink the Cool Aid! Avoiding Process Pitfalls" in Calgary on February 4th. Read about it here.
... and I'm currently working with a few other people on a writing project that will extend the perils and pitfalls idea.
More to come.
If you aren't excited about the nUnit test automation frameworks, Test::Tutorial is an excellent alternative, if Perl-centric:
I saw Schwern and Chromatic give this talk at OSCon in 2003 - oddly enough, it's where I met Danny Faught.
Schwern also gave this talk at YAPC 2002, and the audio is available on-line:
If you are not a Perl programmer but would like to implement the test functions Mike talks about in your language, I put a post about that on my (old) blog a few years back:
(Yes, a few years ago I had a blog that I stopped maintaining ... use.perl.org just doesn't seem to be the right home for this stuff.)
Saturday, November 04, 2006
James Bach recently got one, and wrote this:
If you are a skilled tester, then you know that a question like 'Can you explain to me how to perform testing of a datawarehouse and also provide me with a test plan?' cannot be answered. It's as if you asked me 'What is the mathematical equation that solves the problem I am thinking of that has something to do with data warehousing?' Nobody can answer that.
I could tell you about issues related to testing data warehouses, but I have no confidence that you would understand what I am talking about or be able to act reasonably on that information. I'm not going to hand you a test plan and anyone who tries to give you a test plan is irresponsible. Man, I think you need to learn how to test. Then you won't feel the need to ask silly questions.
Granted, I took it out of context, but this is really great stuff. It's not just funny, it's insightful - and vice versa. Read the whole article here.
Friday, November 03, 2006
This month Esther Derby has an interesting article about software management. It's the most down-to-earth, easy-to-read thing I've found from a DoD publication in years. :-) If you are on the fence about buying her book, you could read the article. Then buy the book. It's good. Really.
At one point we started talking about personal growth. John pointed out that, as a manager, he always worked with his teams to make a personal development plan, and I strongly agreed. Moreover, he would identify the weaknesses each team member had, and ask them to develop plans for each weakness.
This is a pretty standard HR practice in theory, but many groups don't have time to get around to it, what with the busy business of production crowding in. So I was glad that John was doing something. He seemed to have the best interests of his team at heart.
Still, I took a second, paused, and said "Let's pretend I am on your team. I submit that my weaknesses are weak because I find them boring and not fun to work on. Instead of working on my weaknesses, I propose focusing on my strengths - and becoming one of the best people in the world in my specialty."
John replied that my idea was "not good enough." We debated a bit; I said that to pull it off, at least someone else on the team should be strong in my weak areas, and I should have a scheme to compensate for my weaknesses. John admitted that under those conditions, he would be willing to see less progress - but he still wants progress.
What do you think?
Thursday, November 02, 2006
The boss does not have time for this. While you spend all your time on this single project, this single problem – or a small few – a manager has ten times that many. He only has time to hear the sound bite. If that’s not bad enough, your director is worse. And the executive who is sponsoring the project? Forget about it.
You won’t even get the chance to be in the room with the executive because there are too many people clawing for his time. If you do, you only have one opportunity, and that is to give the sound bite. That’s just the way things are.
There are some people who understand this. The term “weasels” may come to mind, but for whatever reason they understand the sound bite principle, and they know how to use it. You call them weasels because they say things that fit into a sound bite, like “It’s the other guy’s fault” or “It’s the vendor’s fault”. Maybe they just say, “Everything is fine”; that is the message they want to be associated with, so they let someone else carry the bad news.
Why is this important to you? Because if you try to explain the system of effects, people’s eyes glaze over. They don’t get it. They don’t want to get it, and you are some techno-geek that needs to go back to developer row, never to be promoted again. That’s what happens, I’m sorry. If you want to advance, you have to be able to understand and master the sound bite principle. It’s that simple.
Yet as a mathematician, I know that you can’t boil down Newton’s proof of integration into two sentences. You just cannot do it. In fact, most systems problems don’t fit into a sound bite either. Some things just need to be proven.
So keep this in mind: If and when you get the opportunity to be influential on your project, all you are going to get is the sound bite. If you don’t think about what that sound bite is, someone will influence it for you. You will get out five, six, maybe ten sentences. Someone will pick your three weakest words in a row, string them together, and say, “I can’t believe you are suggesting that.” You won’t get a chance to respond. I may be exaggerating slightly for effect, but that really is how it happens in the political games, especially if you are saying something that challenges people.
Your next step
So if you are a technical contributor, if the other person’s time is valuable, if you are not going to get a lot of other opportunities, then what you’ve got to think about before the meeting is simply this: What is my sound bite going to be?
If there is one thing I can ask this executive, one thing I can ask this leader to do for me on this project, one thing that will bring it closer to completion, what is that going to be? If there is one compromise we can make on this project to make it successful, what is that compromise going to be?
Now, don’t let your message be “The date is impossible”, throwing up your hands and giving up. If you have a message like that to send, it will make the boss feel out of control and defensive. Instead, let the boss figure that one out for himself. You can provide information, you can provide data, but leave the decision making to the manager. Your sound bite time is best used to change the status quo or the organization in a way that makes your project, your team lead, or your director more successful; complaining about the impossible date doesn’t do that.
Crafting your message
Before you go into that meeting, think about the sound bite you are going to give. There could be some really good sound bites out there.
- You could talk about the expected error percentage. This sets the expectation that there are going to be errors - not big ones, but there will be some - and opens the question of what we do about them. (Or make them realize the true cost of bullet-proof software)
- List the one task or feature that is the long pole in the tent; the single thing that could be offloaded or dumped to save the project.
- Ask for the number one feature; the one that needs to be done first. People don’t like to hear “Oh, we are going to be late”, but if you ask for one thing to turn in early to QA – that’s a positive. Ask for a second feature, a third, and a fourth.
At this point, the audience is hearing something positive. They want more than the sound bite; and when you deliver it, the customer will get all his high-priority features on time.
Think about your sound bite. In testing, it’s typically about risk management: What’s the status of the project, what features should we test first, what do you want us to do.
Allow the manager his sense of control
At the end of the sound bite, ask for direction. Not in a challenging or obnoxious way. Not “I can’t do everything, do you want the bugs fixed, the new features, or do we hit the date – you only get one.” Instead ask, “What do you want me to work on next?”
Let the customer decide, and maybe next time you’ll get more than a sound bite. That is what we are trying to do – make these little nuggets so valuable that we’ll be called into the office again and again. Eventually, you get the ear of the king and you get to give a little bit more than a sound bite. Over time, you learn how to relate with that individual so you can explain the system in a way they can receive it.
That way they can receive it is probably not going to be Newton’s proof of integration. It’s probably not going to be “If A Then B.” You are going to have to work with that individual to see how they communicate and then communicate in their language.
At that point, you’ve developed the relationship to the point that you can communicate with them, and you can move beyond the sound bite into a conversation. In the conversation, you can go into more depth and start talking about system effects, which was the real goal to start with.
Is this article ready for prime time? How should it change? And what publication should it be sent to? Does it scream "Better Software" or "Business 2.0"?
I covet your comments.
A) Attendance Update
B) Financial Update
C) Sponsor Feedback
D) Go Around the table
E) Discuss Evaluation Results
A) We had 80 people the first day (tutorial day) and 140 the second. Those numbers include about 17 speakers on the second day, nine sponsor representatives on the second day, eight volunteers, and six representatives from Calvin College on both days. (So paying customers were around 70% of the attendees, more or less)
B) We did end up with a little bit of money in the bank, but this is clearly a non-profit venture. The only reason we made it work was because volunteers were unpaid. Still, now we have a small cushion for next year.
We had a 50% refund advertised on the website; we need to make it clear that the refund ends 72 hours before the conference begins.
A process for capturing the waiting list would probably have been helpful. Also, one of the tutorials was very specific about how the tables should be organized and this artificially decreased the number of attendees; we sold out without knowing it. We got lucky because a few people cancelled at the last minute and that talk wasn't very popular with the Calvin staff.
At one point we were 5K in the red. This was scary, and we made a number of compromises (cheap food, cheap snacks, small conference brochure) to minimize risk. It's good to have a "big brother" so you don't have to make such compromises. We'll have that next year.
C) It might be helpful next year to provide a packet for sponsors that gives them some insight into how to sell - have a raffle, make it clear that an attendee list is not part of the deal, help them evaluate whether sponsoring is good for them, what they can get out of it, and so on. We could try to do more for our sponsors next year, including pushing the sponsorship of specific _events_, like lunch or a speaker.
D) It was nice to have a small space; this made the number of sponsors feel like more. Then again, physically 'case the joint' and plan the sponsor space. To get to the food, people should have to walk past the sponsors.
It's nice to have mints at the tables and water at the tables when possible.
We need a role of 'tutorial tender', much like the track chairs serve on the track days. (The speaker coordinator) Find out if tutorial speakers would like to be introduced.
Schedule Volunteer dinner early - Schedule meetings further in advance
More formally define committee vs. volunteer
Consider recognizing implicit sponsors (Companies that let employees do GLSEC work during company time)
Consider giving tickets to XPWM sponsors next year
Use the survey feature in registration software to capture all attendees, emails, and company names
Order books by authors sooner (Try harder to make the book signing work)
Give Aways are fun - Encourage Sponsors to do more - Encourage Publishers
We need to firm up how we deal with sponsors who want to add more people beyond the one freebie attendee (This should be in our packet)
We need to be very clear about the expectations for attendance from partners
Laptops for tutorials - If laptops are required, we need to make that ridiculously clear. Use follow-up features in registration software if possible, or, better yet, have the tutorial liaison person do it.
When picking a date, try to coordinate with other conferences. I think fall is a pretty good niche for regional conferences. In theory we compete with PNSQC, but the overlap between our conferences is minor because they are both so local.
People have lives. So if we want to have evening events, they need to be advertised on the website months in advance.
Overall, we think middle of the week is better than the end. Easier to book, cheaper for travel, you can stay in town for the weekend if you want.
Make sure you can print nametags on-the-fly, just in case of typos and late registration.
More tangible, non-monetary rewards for out-of-town speakers and volunteers. Take people to a hockey game?
We like a small, high-quality conference. That probably means that to get high-quality speakers we'll have to either increase rates (slightly) or get more sponsor support.
Lock in keynotes and tutorials early. In general, the team should determine who they want to invite for keynotes and tutorials vs. a call for papers.
Suggest experience reports in the call for papers.
Give guidance to speakers about how to fit into 45 minutes, or limit talks to 30 minutes and push them hard. Get serious about giving presenters advice on how to give good talks.
Get serious about the peer review process - ask for an outline of what the person will do.
Try harder to get slides up on the website by 8:00AM the day after the conference.
The biggest take-home from one of our volunteers was Tim Lister's Risk Management talk. Consider bringing him back to do his Risk Management Tutorial next year.
The hickory room felt small.
E) We didn't get to this. More later ...
Dave comes from an extreme programming background, and was talking about automated unit tests. He admitted that automated unit tests are generally not sufficient to effectively test a product, but also suggested that they are a great place to start.
MaryAnn and Kristy came from more of a user-and-use-case driven perspective. They suggested documenting how the customers will use the software, and testing to verify the standard results make sense - recognizing that complete testing is impossible. (No, really, it is.)
I advocated rapid software testing and exploratory testing.
It was great fun, but after a few minutes I realized that we weren't learning a whole lot from each other. We each had interesting things to say, but we had made up our minds and weren't changing them - at least not much.
Enter the epiphany.
Let's step back for a moment. Developer-facing tests are a way to find information about the product under development. A very-rapid feedback way, they tell the developer if the software does what he or she expects.
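To make "developer-facing test" concrete, here is a minimal sketch using Python's built-in unittest module. The parse_price function and its expected behavior are invented for illustration; they are not from the panel:

```python
import unittest


# Hypothetical function under test - invented for illustration only.
def parse_price(text):
    """Turn a string like '$1,234.50' into the float 1234.5."""
    return float(text.replace("$", "").replace(",", ""))


# A developer-facing test: it encodes what the *developer* expects the
# code to do, and gives very rapid feedback when that expectation breaks.
class ParsePriceTest(unittest.TestCase):
    def test_plain_number(self):
        self.assertEqual(parse_price("19.99"), 19.99)

    def test_dollar_sign_and_commas(self):
        self.assertEqual(parse_price("$1,234.50"), 1234.50)
```

Note what the test does not cover: whether the customer ever wanted dollar signs handled that way, or what should happen with "ten dollars" as input. That gap is exactly where the use-case-driven and exploratory approaches below come in.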
Sometimes, what the developer expects is different than what the customer expects. This is a different kind of defect. The use-case driven testing is a way of testing that is often better at uncovering this kind of defect than developer-facing tests.
Negative testing, "what happens if I make a typo", quick tests, and way-out-there yet-legitimate-value testing ... those things are often best done through exploratory and rapid methods.
All three of those are ways to learn about the product under test. All three of them have strengths and weaknesses. How much of each I use will vary based on the product, the customer, the project, the team, and so on. I reserve the right to use more or less of those types of testing (or other methods like security or performance testing) based on what makes sense in the moment.
It's not a question of yes or no, "the right" way to view testing versus "the wrong" way to do testing. As a testing craftsperson (artist?) I have a palette to choose from, and I mix primary colors to make more interesting ones.
This can lead me to some interesting conclusions, for example, that differentiating white-and-black box testing isn't always a great idea.
Just like the four points of view in our discussion after lunch, diversity in test strategy can be helpful. When presented with a different point of view about testing, I am often tempted to shout it down. Next time, I'll try harder to listen.
Wednesday, November 01, 2006
Test Driven Development: Introduced
A 1-to-1.5 day tutorial and workshop
1 hour – Introduction to loops, variables, and subroutines
(Optional, over lunch?)
½ Day – Introduction to perl with exercises
½ Day - Introduction to TDD
½ Day – Hands-on TDD
Take-Away: Learning Perl (Wall)
‘Agile’ Testing: Demystified
Alt Title: Introduction to Agile Testing
Alt Title 2: Introduction to Agile Testing Practices
A ½ to 1 day tutorial/workshop
The Agile Manifesto has swept the software development community, but what does it mean for software testing? No, really, how can we apply it? In this tutorial Matt Heusser will explain the Agile Manifesto in three ways - with words, examples, and participative effort - then step back to discuss and do some agile software testing.
This workshop intends to answer the following questions:
What the heck is this agile thing?
Would this agile thing be helpful to my organization?
If yes, how much?
How can I influence my organization to be more agile?
After all, “I am just a
How does the tester role fit into this agile thing?
What are the limits of developer-facing testing, and how is black box testing different? How and should black-box testing “compete” against developer-facing testing?
What should I do on Monday?
A 1-to-2 day tutorial
½ day – Software Management
½ day – Hiring the best
½ day – Solving the requirements problem
½ day – Software Scheduling Secrets
Take-Away: Behind Closed Doors: Secrets of Great Management
Test Case Design
A 1 day tutorial
Take-Away: A practitioner’s guide to software test design (Copeland)
Black Box Software Testing
A 1-2 day course that discusses challenges and approaches to software testing
What is software testing?
-> Applied Critical thinking
-> Can happen throughout the lifecycle
-> White Vs. Black-Box
Challenges of Software Testing
Risk-based testing approaches and Rapid Testing Approaches
Requirements-based testing approaches
Automated System Tests
Automated Acceptance Tests
How To Break Software: A Classic Approach
A 1-2 day course that introduces critical and creative thinking
- Career Paths
- Test Cases for a salt shaker
A history of innovation
- Da Vinci, Gauss, Newton, Asimov, Feynman, Weinberg, Von Neumann, Turing, Goedel, Escher
Logic and Rational Thought
- Truth Tables, predicate logic, logical fallacy
- Aristotle, Augustine, Aquinas, Ayn Rand, …
- Classical, Baroque, Enlightenment, Romance, Modern, Post-Modern
- Relativism, Epistemology
Exercise: Define where CMM, Waterfall, XP, and RUP fall on the continuum
- Plato’s Cave
Exercise: Find examples in the world of software testing
- Deduction Vs Induction
- Classic Proofs (Limits Problems, Pattern Recognition, Fibonacci)
- Relationship of Acceleration to Velocity to Distance
- Newton’s Proof of Integration
- Group Problem Solving
The world system
- 90% of everything
- Zen&The Art of MM? (Drucker, Deming, Juran)
- Applied thinking about the world system
- Test Cases for a stapler
Economics for Software Testers
Alt Title: “The State of the Yard-Stick”
Alt Title II: “My next thirty years”
Falling into software testing is easy. The next question “where do I go next?” is a little bit harder. In this fast-paced talk, Matt Heusser covers three hundred years of economic development - from Adam Smith to offshore testing. Including economic models and real data, Matt will use existing trends to discuss where software testing could be heading, and how to profit from it.
The one-eyed man in the land of the blind:
Negotiation skills for software testers
Like Santa Claus and the Easter Bunny, the “Software Tester who knows how to negotiate” may seem like a fairy tale. The sad reality is that negotiation is everywhere, from scope to schedule to salary, yet few software testers practice this art … or even understand it. In this talk, Matt Heusser discusses what a negotiation is, introduces goals and strategy in negotiation, then goes on to cover tactics and skills. Finally, Matt provides opportunities and examples for participants to practice the skills in an environment without career-limiting consequences.
‘Agile’ Testing: Demystified
Alt Title: Introduction to Agile Testing
Alt Title 2: Introduction to Agile Testing Practices
A 1 to 2 hour presentation
This talk intends to answer the following questions:
What the heck is this agile thing?
Would this agile thing be helpful to my organization?
If yes, how much?
How can I influence my organization to be more agile?
After all, “I am just a
How does the tester role fit into this agile thing?
What should I do on Monday?
Individuals and Interactions
… over processes and tools, reads the agile manifesto. Yet XP, Scrum, Crystal, ANT, Cruise Control – typical things to talk about in a discussion of agile practices – are all processes and tools! It turns out that focusing on individuals and interactions can often be hard to do and harder to define. Matt Heusser suggests that focusing on process at the expense of people creates opportunity cost, which could mean that real, working features are cut or late because of the wrong focus. He goes on to provide specific examples of how to shift your focus, practical exercises to help you stay there, and draws a picture of a few alternate ways that an organization might embrace individuals and interactions without giving up accountability.
A few of my favorite things:
30 Years of the best innovations in software engineering
One of the main causes of struggle in the technology industry is its perpetual youth - one generation is burning out while a new one is just starting. This means that each new group needs to learn the mistakes of the past, most often by doing them! In this brief tutorial Matt Heusser covers the landscape of thirty years of professional software engineering – where we came from, what we’ve learned, and how we can apply it. Matt suggests that each team needs to customize its development processes with an eye on the past, in a way that is, in his words, “Often right, occasionally wrong, but never boring.”
My paints and brush:
A testing artisan’s toolkit
Many software testers expect the employer to provide the paints and brushes for our artistic work. When the result is of limited quality, we complain about the lack of time, not the paint-by-number set we were handed. Matt Heusser suggests that testing craftspeople need to take our tools seriously – to the point of owning tools that we personally carry from job to job. In this talk he covers a few of his favorites, including electronics, simple supplies, and software; much of it free, and all of it combined cheaper than a typical conference registration. Finally, Matt covers a few of the consequences of viewing ourselves as independent craftspeople with our own tool set, and makes some recommendations for Monday.
Zen (zn) n. (Dictionary.Com)
A school of Mahayana Buddhism that asserts that enlightenment can be attained through meditation, self-contemplation, and intuition rather than through faith and devotion …
Giving talks is hard. Giving talks can be scary. This presentation won’t make you a master of ceremonies or a professional comedian, but it might just help you increase your effectiveness as a speaker and build a little confidence along the way.
Context-Driven Software Engineering
Matt Heusser suggests that focusing on individuals and interactions over processes and tools means just that. In this talk, he will discuss techniques to build skills and flex the process in the moment; to determine the best thing to do right now instead of consulting a manual. Matt will discuss the concept of Context-Driven Software Testing and introduce its twin in development – Context-Driven SE.
Keeping your gumption
The true variance in software development isn’t the hourly rate; it’s the amount of work that you can get done within that hour. Inspired by and based on Zen and the Art of Motorcycle Maintenance, this talk discusses real, practical techniques to keep going despite adversity – and how to avoid that adversity in the first place.
Effective Bug Reporting
If you’re sick and tired of “It works on my machine”, “Unable to reproduce”, “Referred back to the testing group for research” and “This is not a bug”, you’ll want to attend this session. Matt Heusser discusses how to write bug reports that get noticed, communicate better with developers and managers, and improve your written communication skills – all at the same time. Along the way he’ll uncover a few reporting tips, tricks, and tools you can start using tomorrow.
Why Agile Principles Work
Agile principles aren’t “magic” and can’t create something from nothing. In this talk Matthew Heusser, a classically-trained mathematician, will weave together examples from physics, operations research, control theory, general business, math and software to help move the “why” of agile development away from alchemy and toward science and logical thought.
[Indirectly inspired by my talk “Magic Pixie Dust” which was a keynote at the Indiana QA Conference in 2005.]
Lightning Talks are fifteen five-minute talks in a ninety-minute time period. Come hear a series of ideas all in a row - without the bluster, opening-joke, intro/body/conclusion, or Q&A session that doesn't really answer your questions. With Lightning Talks, the speakers have just enough time to make one point, make it well, and get off the stage. Leave with a half dozen new ideas to implement on Monday ... and if a speaker is bad, don't worry, he'll be off stage quickly.
Rethinking Process Improvement
Weaving together examples from general business, operations research, control theory and systems thinking, Matt Heusser suggests two areas of innovation: Process and Product. According to Heusser, companies can choose which, and how much, of these two to focus on, and he provides success and failure examples of both. Finally, Matt provides a new way to think about process improvement, grounded in the context of your company, its products and competitive environment.