Schedule and Events



March 26-29, 2012 - Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference of the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At liberty; available. Contact me by email: Matt.Heusser@gmail.com

Friday, February 23, 2007

Project Scheduling Formulas

James Bach is running an on-line class for his Rapid Software Testing course, which I am finding quite enjoyable. Even more fun than the lectures (which are good) is what I'm learning from the offline forum software he is using.

Recently, someone made a post about project scheduling formulas; I enjoyed it so much that I wrote up a rather long reply, which I wanted to share here:

Hello John. I have heard that formula as well, very recently, from people in the PMI/PMP world - people I respect who consistently have good results.

However, I don't think it's magic. If we take a minute to deconstruct this ...

If A = most optimistic estimate
B = reasonable estimate
C = pessimistic estimate

Then you give an estimate of (A + 4B + C) / 6

A is what many projects start with. In "Waltzing With Bears," DeMarco and Lister assert that technical folks are really good at coming up with the "Nano Percent Date" - the date by which the project could be done if everything goes perfectly - which has a nano-percent chance of actually happening.

Just moving the conversation past A is helpful.

I've heard different descriptions of C, but it usually comes out to about twice B, which is usually about twice A. In other words:

C = B*2
B = A*2

Estimate =

(A + 4*(2A) + 2*(2A)) / 6 =

(A + 8A + 4A) / 6 =

13A / 6 =

A * 2.166

:-)

In other words, this isn't much more than "Take your original estimate and double it", but it's dressed up in pseudo-science. :-)

----> Now, my less sarcastic answer:

Actually, it's a little more valuable than that. You'll notice that you've got A, 4B, and C - which total six "things" - divided by six. That's a weighted average - weighted strongly towards B. However, C is the gotcha - C is where you weigh in what happens if the entire development team is killed in a hurricane and the team needs to be rebuilt from scratch. That number is usually large enough to drag the average toward the large side. That's what gives us the extra 16.6% past just doubling. (Remember, we came out to 2.166.)
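If it helps to see the arithmetic spelled out, here is a minimal Python sketch of that weighted average - the function name and the sample numbers are mine, not anything official from the PMI world:

```python
def pert_estimate(optimistic, realistic, pessimistic):
    """Weighted average of three-point estimates: (A + 4B + C) / 6."""
    return (optimistic + 4 * realistic + pessimistic) / 6.0

# With the rule-of-thumb ratios B = 2A and C = 2B = 4A:
a = 10  # days - the "nano-percent" estimate
print(pert_estimate(a, 2 * a, 4 * a))  # 21.66..., i.e. roughly A * 2.166
```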

Also, there are some projects where C just isn't that large, because you are doing something that is well-defined with commonplace technologies. Those types of projects often have low return-on-investment, but also low risk. In that case, using A, B, and C might be more helpful than just multiplying by a number.

When I run projects, I have two dates - a commitment date back to the business and a goal date for the team. To calculate the difference, I try to take 'most realistic estimate' in one corner and add a buffer for risk - the larger the risk, the bigger the buffer.

So my estimate math is:

Estimate = Realistic * X

Where X is something between 1 (zero risk, I'll have your essay written today, no problem) and 2. X is my risk factor; it's different on every project.

If X is greater than two, I usually suggest an architectural spike to make realistic more realistic, or some change in scope.
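In code form, my estimate math looks roughly like this - just a sketch, and the 2.0 cutoff is my personal rule of thumb, not a standard:

```python
def commitment_estimate(realistic_days, risk_factor):
    """Goal date comes from the realistic estimate; the commitment
    back to the business adds a buffer proportional to risk."""
    if risk_factor > 2.0:
        # Too much uncertainty to schedule honestly; do an
        # architectural spike or change the scope first.
        raise ValueError("risk too high - spike or change scope instead")
    return realistic_days * risk_factor

print(commitment_estimate(20, 1.5))  # 30.0 days committed, against a 20-day team goal
```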

Now, I am not accusing you of this, but I have seen people use scheduling formulas without understanding them, and the results, though tragic, are predictable. Actually, to be honest, I find asking the "why" of the formula far more enjoyable than plugging in numbers ...

Wednesday, February 21, 2007

Conference Presentation Idea ...

Secrets of Enterprise Development
What can we learn about project scheduling from Mr. Scott? Security from Mr. Worf? Architecture from Data? Engineering from Geordi? Analysis from Spock? Leadership from Kirk? Borrowing from human psychology and using Star Trek as a backdrop, Matt Heusser will cover the mythology of software development, and what happens when you involve real people, with all the faults and foibles they actually have.

No, seriously, this might make a decent talk. Gene Roddenberry, the creator of Star Trek, often admitted that the show was half social commentary. The other shows at the time, like Gunsmoke and Bonanza, were historical. In those other shows, you couldn't show a woman as a cowboy or an Asian as a business owner - nor could you show the dark side of racism. But in Star Trek, to show racism, you could make the other race an alien, or use a racially diverse cast to show how essentially human and similar we all are when compared to the pandorian wampus-beast.

Also, I think the term "Enterprise Development" is vaguely lame. It usually means "software development in a big, dumb, slow company that is producing software as a cost center, not an investment or profit center." So, in the talk, I could address some of the weaknesses of "enterprisy" development with humor, which is about the only way to do it.

The big problem is that this isn't a conference talk - it's a lightning talk. It's five minutes of real material, and then some serious fluffage. So I need real examples from Trek beyond Scotty and the Kobayashi Maru.

So, I have a few, but I don't want to spoil them. What are your ideas?

Monday, February 19, 2007

Teaching 'Agile' Testing

Elisabeth Hendrickson recently posted a request for feedback on teaching agile testing.

I liked the question so much that I wrote up a long reply; so long, in fact, that it goes better as a blog post than a blog reply.

The tone is even more informal than a typical blog entry for me, but I hope you find it interesting. My goal was to get the ideas down, so I go pretty quickly. If you'd like to hear more detail ("What is the minefield problem in test automation", or "How can I automate a use case", or such) - just comment.

So, without further ado ...

I am currently finishing up a couple of courses for software developers about testing. I suppose you could call it 'Agile'; I like the term "light-weight methods" or feedback driven or whatever.

I also struggled with how to cover the material. Several of the students (and some of the management) wanted me to dive right into a specific framework. "Teach blahUnit" came the request.

Whatever, dude. You can't automate what you can't do manually.

So yes, we started with equivalence classes, bounds, and traditional requirements-y techniques - stuff you could get from Dr. Kaner's BBST course. I also covered scenario testing and use-case-driven "acceptance" testing.
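If you want a concrete picture of the equivalence-class-and-bounds style, here is a tiny, made-up example - the discount rule and its numbers are invented, and I'm using Python's unittest just because it's handy:

```python
import unittest

def discount(age):
    """Hypothetical rule: children (0-12) and seniors (65+) get 50%; everyone else gets 0%."""
    if age < 0 or age > 120:
        raise ValueError("age out of range")
    if age <= 12 or age >= 65:
        return 0.5
    return 0.0

class DiscountBoundsTest(unittest.TestCase):
    def test_classes_and_boundaries(self):
        # One representative per equivalence class, plus the values at each boundary.
        for age, expected in [(0, 0.5), (12, 0.5), (13, 0.0), (40, 0.0),
                              (64, 0.0), (65, 0.5), (120, 0.5)]:
            self.assertEqual(discount(age), expected)
        # Invalid classes just outside the boundaries.
        self.assertRaises(ValueError, discount, -1)
        self.assertRaises(ValueError, discount, 121)

if __name__ == "__main__":
    unittest.main()
```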

Then we did an exercise. I split the class into three groups - the first did entirely document-driven, requirements-based testing. They had to write scripts for the test cases before those were executed, and group one could only execute those tests.

The second group also did scripted, document-driven testing on the same app, but I gave them both the requirements and then demoed the UI. This way, the group could develop the scripted tests with the user interface in mind.

The third group had the requirements and the demo, but did exploratory testing.

After the exercise, I asked every team member to count how many bugs they found - down to root cause. I averaged this per team. I also asked the teams to evaluate how much fun they had, on a scale from 1 to 10, with 1 being "I'd rather have teeth pulled", 5 being "Well, at least I'm getting paid", and 10 being "I want to do this for a living".

Without exception (and I've done this twice now), the first group hated it and found few bugs, the second group found it merely distasteful and found more bugs, and the third group slightly enjoyed it and found the most bugs.

After that, I explain the minefield problem of test automation, the use-case-driven view, the ripple effect, and the value of test automation to increase confidence in the ripple. Finally, I cover high-volume test automation.

We try to figure out which of the three kinds of test automation make sense where, then explore those with the frameworks that make sense for that team.

Finally, we swing back around to try to form a comprehensive view of exploratory testing, acceptance testing, and test automation.

My take on it is that you can't automate what you can't do manually, and if you automate what you do crappily, you will get bad tests that are cheap to run – but expensive to write.

So I'd make it a two-day class, cover a valid testing worldview that is compatible with agile on the first day, and then do all the 'agile' stuff (xUnit, continuous integration, TDD, FitNesse-y tools, and so on) on the second day.

I'm on the fence about interaction-based testing. Like a lot of other things (Agile, Lean, TDD) it's easy to misunderstand, think you are doing it right, but actually waste a lot of time with little benefit while getting code bloat. Specifically, one of the original papers on interaction-based testing had an example that, I believe, sent people writing the wrong kind of tests. Then again, on certain systems, done right, it can increase quality, readability, and time to market. (Just like Agile, Lean, and TDD.)

For database systems, I teach stubs (which fake out data), not mocks (which fake out behavior).
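To make that stub-versus-mock distinction concrete, here is a toy sketch using Python's standard unittest.mock - the overdue-report function and all the names in it are invented for illustration:

```python
from unittest.mock import Mock

def overdue_report(db, mailer):
    """Hypothetical code under test: reads data, then triggers behavior."""
    overdue = [row for row in db.fetch_accounts() if row["days_late"] > 30]
    for row in overdue:
        mailer.send_reminder(row["email"])
    return len(overdue)

# Stub: fakes out *data* - we only care about what it returns.
stub_db = Mock()
stub_db.fetch_accounts.return_value = [
    {"email": "a@example.com", "days_late": 45},
    {"email": "b@example.com", "days_late": 5},
]

# Mock: fakes out *behavior* - afterwards we verify the interaction happened.
mock_mailer = Mock()

assert overdue_report(stub_db, mock_mailer) == 1
mock_mailer.send_reminder.assert_called_once_with("a@example.com")
```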

But that's just me talkin'. In my old age, I find less and less interest in tools and more in skills. It sounds like your class covers agile skills more, so please, Elisabeth, tell us more.

Thursday, February 15, 2007

SideBar

...(Administration) covers the surface of society with a network of small complicated rules, minute and uniform, through which the most original minds and the most energetic characters cannot penetrate, to rise above the crowd. The will of man is not shattered, but softened, bent, and guided; men are seldom forced by it to act, but they are constantly restrained from acting: such a power does not destroy, but it prevents existence; it does not tyrannize, but it compresses, enervates, extinguishes, and stupefies a people, till each nation is reduced to be nothing better than a flock of timid and industrious animals, of which the government is the shepherd.
- Alexis de Tocqueville, Democracy in America

Substitute "Administration" with "process" and "government" with "management", and you have the sickness in "enterprise" development today.

Lessons from Math - II

What makes a good mathematician?

I suspect that a common response would be a strong intellect, but another common response would be a great deal of self-discipline. After all, a mathematician has to spurn all worldly pursuits in favor of a drudging existence, sitting at a table, laboring with numbers. Why, the mathematician must be ever mindful of the goal - the solution - or else he would never have the discipline to finish all that labor.

Rubbish. Poppycock. Bull-Pucky.

The military defines self-discipline as doing something not because you want to, but because it needs to be done. If that's true, then a good mathematician doesn't need it. In fact, self-discipline could decrease his effectiveness.

Good Mathematicians just love to solve problems. They don't have to force themselves to solve the problem, and they don't do it because it "needs to be done to pay the rent."

No, good mathematicians love math. There are a variety of reasons they love math, which I may explore later, but, for the most part, they consider math 'fun.'

In the pursuit of that, mathematicians have been known to forgo sleep, showers, food, laundry, changing clothes, paying the rent ... all those important 'self-discipline'-y things.

Now, a personality like that is so rare and odd that the mainstream world just can't understand it - so people invent stories that explain the behavior ... sort of.

So what does a good mathematician need?

1) Yes, intellect, but more importantly, an ability to create and manipulate abstract concepts in your head that are not tangible. A mechanical engineer can at least draw pictures of what he is building; a pure mathematician just gets symbols.

2) Problem Solving ability and curiosity. They have to wonder 'what are the odds' at blackjack, or roulette, or of getting cancer, or the next federal election.

3) A good mathematician must love learning. They must love it. Because learning gives us the tools to figure out the answers to the questions.

Occasionally, someone from the testing community will throw me a challenge. A few months ago, James Bach threw me one that required either an understanding of an exponential limits problem, or the ability to estimate the answer with a Monte Carlo simulation.
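I won't spoil Bach's actual challenge, but for those who haven't seen the technique, a Monte Carlo estimate looks something like this - applied here to a stand-in problem (the classic shared-birthday question), in Python:

```python
import random

def shared_birthday_probability(people, trials=100_000):
    """Estimate P(at least two of `people` share a birthday) by simulation."""
    hits = 0
    for _ in range(trials):
        birthdays = [random.randrange(365) for _ in range(people)]
        if len(set(birthdays)) < len(birthdays):  # a duplicate means a shared birthday
            hits += 1
    return hits / trials

print(shared_birthday_probability(23))  # roughly 0.507, the classic result
```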

People without curiosity will simply give up too easily. Those with curiosity, but without an addiction to learning, would not know how to figure out the problem, and would eventually give up as well. Those without problem solving ability, well ... they are just hopeless.

How is testing different from math?

To start with, let's differentiate math from Computer Science. In Things A Computer Scientist Rarely Talks About, Don Knuth said that the Mathematician and the Computer Scientist both need the ability to work symbolically and at various levels of abstraction. They both need the ability to walk up and down the levels of abstraction very quickly; something that traditional building architects and construction workers might not need to do as much. The Mathematician seeks one single unifying law - one theory - that pulls the entire solution together.

The Computer Scientist, on the other hand, just writes a SWITCH or CASE statement and moves on. In other words, the Computer Scientist has to be better at dealing with inconsistency.

What makes testing different?

The tester has to model the software, but he never really knows if the model is right. The tester's job is to figure out how close the customer's model is to what the software actually does. He derives this from very different input sources - the team members, the spec, the requirements, the design, and, of course, the code itself.

The tester never has enough information and never has enough time. He has to choose some small sub-set of the functionality to run to get some approximate understanding of the quality.

At least, that's my first blush. What do you think?

More to come.

Wednesday, February 14, 2007

Lessons from Math - I

Yesterday, I suggested a topic of "Testing Lessons from Mathematics."

Yet in my recent work, I'm becoming much more holistic about software.

If I find a particularly interesting point that applies to requirements, do I throw it out? Probably not. So I've re-titled the series: I will call it "Lessons from Math" and we'll see where it leads.

Math, like most other disciplines, has its own lingo; please forgive me for introducing new words, but I'm afraid I will have to in order to advance the discussion.

Today I would like to talk about two math lingo terms: Prima Facie Evidence and Axiomatic Evidence.

Prima facie evidence means a statement that is true on its face. Something like 1+1=2 is taken in math as prima facie. The idea that when we are building software for someone else, that person should describe what we are building before we build it is prima facie - it just seems obvious.

An axiom is a little bit different. Axioms are the basis for formal systems. It may not be obvious that these things are true, but we assume they are true for the purpose of working in the system. For example, Euclid assumes that two parallel lines never intersect. You may remember from high school that Euclid doesn't actually ever prove this, and neither does anyone else. Yet there is an incredibly large amount of higher geometry based on this, and it seems to work for constructing bridges and so on.

Now, Wikipedia and I disagree a bit on these two terms. According to Wikipedia, prima facie is evidence in a legal sense, while an axiom is a logical statement that is self-evident.

Now, here's the trick:

In higher math, you do a lot of proofs. Lots. There are several ways to solve them, but the simplest is proof by deduction. To do this, you express an idea, then simplify and simplify that idea until you reach an axiom, and stop. Basically, with proof by deduction, you say "If this axiom holds true, then my assertion must be true."

Example: X^2 + 4X = -4
If that is true, then this is also true: X^2 + 4X + 4 = 0
If that is true, then this is also true: (X+2)(X+2) = 0
We know that one of those (X+2)'s must be zero, because the only way a product of two numbers can be zero is if one of them is zero
So X + 2 = 0
So X = -2
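(If you'd rather let the computer double-check that derivation, a throwaway script with the quadratic formula will do it:)

```python
import math

# x^2 + 4x + 4 = 0, i.e. a=1, b=4, c=4
a, b, c = 1, 4, 4
disc = b * b - 4 * a * c            # 0, so there is one repeated root
root = (-b + math.sqrt(disc)) / (2 * a)
print(root)                         # -2.0, matching the proof above
```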

Yet each step in the process required a transformation that is an axiom. "You can add the same number to both sides of an equation and it is still true", for example, is an axiom.

It turns out that you can find lots and lots of interesting things by assuming the axiom is not true and working backwards. If you find a contradiction, that reinforces the idea that the axiom is true.

But, sometimes, you DON'T find a contradiction. For example, what would the world be like if parallel lines did intersect?

There is an entire school of math based on this called non-Euclidean geometry. It isn't even a rat-hole; it turns out that parallel lines DO intersect on the surface of a sphere - at the poles. And this is really helpful for mapping, say, the world we live in.

So what am I saying here? There is a difference between ideas that are prima facie (flippin' obvious) and axiomatic - things that we have to assume are true in order to build.

A great deal of the software testing world believes that if we sacrifice everything to the world of repeatability, we will be successful. In the factory school and the CMMI, repeatability is held up as the goal.

That's not prima facie. It's axiomatic. And, while we can build very large castles in the sky by assuming repeatability is the key goal, there is another way to grow: Ask ourselves what we could do if repeatability was not the goal.

Non-Euclidean geometry is cool, sure. But I'm more interested in demoting repeatability in testing from the express goal to which we sacrifice all effort.

I submit that the real goal in software testing is to provide accurate information to decision makers about the software under evaluation - as quickly as possible.

In that case, repeatability is different - it is something that might be helpful some of the time.

That allows us to branch out with different techniques that are not repeatable, but just might be reliable.

More to come.

--heusser
(*) - Thanks, Sean, for starting me down this road.

Monday, February 12, 2007

Testing Lessons from Mathematics - Intro

You may not know it, but there are a whole bunch of things that are lumped into Math, somewhat haphazardly. Statistics, for example, could fit well in the business school or the psychology department. Game Theory is another one that mystified me; in my mind, it fit better in the economics department. John Nash, who made foundational contributions to game theory and was the subject of "A Beautiful Mind", actually earned the Nobel Prize in Economics.

None of that makes much sense unless you describe Math in this way: "Mathematics is what mathematicians do." Then, pretty much all of what I saw as a Math student at Salisbury University seems to mesh together in a vaguely holistic way. From Dr. Austin's Number Theory to Dr. Shannon's Scientific Computing, they all sort of ... fit.

Like Math, software testing is hard to define on the spectrum. For example, you can't go out and earn a Bachelor's Degree in the subject. If you could, what department would you earn it in? Computer Science? Business? Prob and Stats? As I see it, there is little agreement on where to place software testing, or what conceptual framework to put it in. Here are a few of the worldviews I run into:

Software Testing is a branch of Computer Science. Perhaps typified by Boris Beizer, this view of testing involves taking the requirements and developing control-flow graphs, then creating test cases for each flow. I have heard this time and time again in "intro to testing" courses, but I find it nearly impossible to actually test this way. Beyond a simple dialog with a half-dozen controls, the finite state machine is just too big. Then again, this might work just fine for testing embedded systems.

Software Testing is a branch of Cognitive Psychology. According to Wikipedia, Cognitive Psychology is "The school of psychology that examines internal mental processes such as problem solving, memory, and language." In this view, software testing is actually a very complex problem - You generally have an infinite number of combinations to test in a limited period of time. There are a huge number of challenges in testing, but finding the important things to test and assessing the status of the software under test are two of the larger ones. Believe it or not, Dr. Cem Kaner, author of several books on software testing and a leader in the context-driven school actually has his PhD in Experimental Psychology.

There's more to it than that, of course. The Psychology department of your local college also studies design of experiments, culture, how tools are used, how people learn ... all of which can apply to software testing.

Software Testing is applied philosophy. Testing asks questions like "Do you believe the software works? To what extent do you believe it? What does 'works' mean anyway?" Epistemology is the philosophy of belief and knowledge. James Bach calls his company, Satisfice, "Epistemology for the rest of us."

Software Testing is a branch of scientific management. While I have never heard anyone say this out loud, there is a significant group of testers commonly referred to as the "factory school." Under this view, untested software is a work product that needs to go through a quality control step, to be re-inspected and re-worked until it is found fit for use. The factory analogy suggests that testing techniques, applied correctly, can be predictable as long as they are applied in a consistent and standardized way.

Bret Pettichord, along with others, has divided these into at least four schools of software testing. The ideas above align roughly with those schools. The Cognitive and Philosophy groups probably combine to form the context-driven school. The computer science view aligns with the analytical school. Scientific management is a drop-in for the factory school. I don't have a 'view' that fits with the quality school that really adds anything, so I've left it out.

This raises the question: OK, Matt, what is software testing a branch of?

Actually, I thought of several short, pseudo-insightful answers to this question. Here are three:

Software Testing is a branch of fun-ness. This is the weirdo school of software testing. It derives from toddlers, who often enjoy breaking things more than they enjoy building them.

Software Testing is actually a branch of physics. Believe it or not, Wikipedia defines physics as "the science concerned with the discovery and understanding of the fundamental laws which govern matter, energy, space, and time. Physics deals with the elementary constituents of the universe and their interactions, as well as the analysis of systems best understood in terms of these fundamental principles." That actually seems to work. I specifically remember my high school physics teacher, Mr. Johnson, saying something like "The great thing about physics is not that you learn how to calculate the speed of a bullet in flight; it's that you teach your brain to solve complex problems." When I think about how we solved those problems - creating a model, developing a strategy from that model, then applying the strategy and looking for feedback - it actually makes a little bit of sense.

Software Testing is applied systems thinking. This one actually seems to work, except that systems thinking isn't really a "thing"; it is a way to solve problems across disciplines. This leaves us with this definition ...


Software Testing is what software testers do

On first blush, I find this definition amazingly unsatisfying and vapid. I hate it, and would be careful to use it only at the end of the conversation - after we have covered a lot of painful ground and built some shared understanding. I would not use it to create some fake view of agreement on what software testing is; instead, I would use it to point out our lack of agreement. Here's my tiny little insight to add to it:

Because the world of testing is so immature, we can pull our ideas from anywhere. The value of our idea, then, is not judged by where it came from, but what it adds to our knowledge and ability.

About ten years ago I completed a traditional programme of instruction in mathematics, with a concentration in computer science. In other words, I am a classically trained mathematician. (I can prove this, because I spelled programme with an "e" at the end. It's more impressive that way.)

Does the world of math provide insight into testing? Maybe. At least, I certainly hope so; that is what this series will try to explore.

Friday, February 09, 2007

"Getting the Requirements Right Up Front"

Brian Marick posted this to the Agile-Testing Yahoo Group today. I thought it was interesting ...


1. It's the job of the team to be able to respond well to requirements they didn't anticipate. That will allow the business to adapt more quickly to a business environment that won't stand still.

2. Something like 50 years of trying has demonstrated that mere mortals are not very good at designing a system that can accept unanticipated requirements.

3. The alternative that seems to work - the alternative the Agile methods take - is that both the team and the code have to be trained, over time, to accept the unexpected with aplomb.

4. The way you get trained is through practice. Therefore, taking a lot of time to get the requirements right is actually a disservice to the project - it will make them less effective when it really matters.

5. For this reason, a lot of teams make a point of not anticipating the requirements they know are coming. By treating them as unexpected, they speed up their own training.


You can read the entire post here.

Rethinking Process Improvement - IV

Yesterday I suggested that a lot of process improvement is trying to eliminate the overlap between roles. For example, when people talk about making job descriptions "better", that is often what they mean.

Each team draws back and clearly defines itself and the services it offers.
Here's one example of what that might look like:



… So, who’s doing the work in the middle? (No one)

What does this mean on projects? (Things fall through the cracks)

Can we really call that 'improvement'?

Thursday, February 08, 2007

Blogger Etiquette?

Someone mirrored one of my posts from earlier this week -

http://managerspeaks.blogspot.com/2007/02/rethinking-process-improvement.html

At the bottom, he wrote "link", linking to Creative Chaos. Otherwise, you would think the material was his.

Now, when I quote someone, I quote part of the article (not its entirety), I put it in italics, and I make a descriptive link, like "From James Bach's presentation on bug metrics"

I suppose it's neat if people are copying me, but I am a little alarmed at the way this was done - it's not too different from the Google link trollers that spider websites to create content. What do you think?

Rethinking Process Improvement - III



If software development is an assembly line, then unclear roles are a real problem (see illustration). You don't know who is supposed to tighten the nut. It might be tightened twice, it might not be tightened at all, but one thing is certain: the variation in the tightening will slow the line down. So one of the ideas in traditional process improvement is to clarify roles and responsibilities, job descriptions, and so on. (Or, in other words, to "decrease the variability in the process", a line right out of the PI literature.)

I am a bit dubious of that position, but we'll get to the why tomorrow. In the meantime, what do you think?

Wednesday, February 07, 2007

A Box Of A Different Color

Better Software magazine published my article "A Box of a Different Color" this month.

If you like this blog, you would probably enjoy Better Software. They have a free trial subscription, or you can get a free year's subscription when you attend an SQE conference such as STAREast.

If you just want a freebie, you can read the article here.

Tuesday, February 06, 2007

Rethinking Process Improvement - II



This is Figure 2 from Winston Royce's paper, "Managing the Development of Large Software Systems."

Let's look at each stage of the process - requirements, design, coding, testing ... it's an assembly line. At each step, someone produces a work product which is handed off to the next person in the line. Only the project manager owns the entire process. In fact, some of our very own job descriptions, like developer, or tester, or requirements analyst, assume that our entire role is to perform one box in the sequence.

Later in the paper, Royce went on to write:


"I believe in the concepts, but the implementation described is risky and invites failure. The problem is illustrated in Figure 4. The testing phase which occurs at the end of the development cycle is the first event for which timing, storage, I/o transfers, and so on are experienced as distinguished from analyzed. These phenomena are not precisely analyzable. … Yet if these phenomena fail to satisfy the various external constraints, then invariably a major redesign is required. A simple … patch or redo of some isolated code will not fix these kinds of difficulties. The require design changes are likely to be so disruptive that the software requirements upon which the design is based and which provides the rationale for everything are violate. Either the requirements must be modified, or a substantial change in the design is required. In effect, the development process has returned to the origin and one can expect up to a 100-percent overrun in schedule and costs."


Royce does not use the term "waterfall" in his original paper. His final recommendation involves writing the program twice: admitting that the first iteration will be problematic, accelerating it, finding the deficiencies in the requirements, and then starting over with the design. The term probably came out of the arrows in the diagram, which kind of look like a waterfall.

Yet too many people never got past page one of his paper. After all, this is familiar; it’s applying Fred Taylor to software. And Taylor’s a genius, right?

It turns out that Taylor's entire body of work is based on experiments with immigrant day laborers who had to move raw iron from the factory to box cars. The typical subject of that work had a third-grade education, which was typically in German.

Software development is not turning a wrench – it is intellectual work – knowledge work – that is different every time. When we do heavyweight handoffs, we lose information. We say "We really need to get the requirements right next time."

Now, one definition of insanity is doing the same thing over and over again and expecting a different result. For thirty years, Process Improvement (in capital letters) has been telling us to get the requirements right up front. And we’ve been failing, for thirty years. That is insane.

Monday, February 05, 2007

Solid presentation advice ...

Suggestions and Examples of What Not to Submit

1. Attendees are paying to take classes—they don’t want to hear a sales pitch, no matter how thickly veiled. Please do not submit classes that feature your product or describe problems that happen to be solved by your products.

Attendees and the conference organizers are equally skeptical of technical talks proposed by marketing people, Business Development Managers or CEOs—unless that person has the appropriate technical background and experience. Even the appearance of a sales pitch, such as when a talk is given by a marketer, will cause people to not attend that class.

2. Attendees don’t need to be taught why something is important (why security testing is important, why testing early in the SDLC is important, etc….). If they sign up for the class, they already know that. In your abstract, explain how the attendees will learn HOW to do something by taking your class.

3. Read the Course Catalog from the previous conference to learn how the classes are described. Try to match that, and have a colleague review your submission to make sure that your abstract makes sense. Experience has shown that an incoherent abstract is not a good sign for a coherent presentation.


- From the Software Test & Performance Call for Speakers

If you run a conference, especially a regional or non-profit one, you might want to slap something like this right onto your call for speakers. After running a regional software conference and participating in a few more, I've seen pretty much all of these problems. Heck, I've seen them in Lightning Talks.

Sometimes it can be very hard for a program chair to know who to select and who not to. The best thing we can do to help is to give them solid presentations, a strong outline, and no fluff.

Rethinking Process Improvement - I

Most of our ideas about process improvement come from a factory analogy - which was invented by Frederick W. Taylor at the beginning of the 20th century. His idea was that you separate the work into independent roles, and then each person owns one single step of the process. It is the role of management to own the process; people are simply cogs in a great big factory.

And, for 1911, it was pretty good. It made Ford a billionaire, and that’s nothing to sneeze at.

Now, think about the assembly line. More later ...

Thursday, February 01, 2007

Blue Man Group - I

First, some background. I submit that there are currently two very big extremes in the world of software conferences: Death by Powerpoint and Open Spaces.

The Death By PowerPoint Conferences involve going to see eight speakers a day read a list of bullet points off of slides. Now and again you'll see a metric. The outline of nearly every single presentation is usually something like this:

Opening Joke (Optional)
Intro - Thesis
Body
- Point A (Support 1, 2)
- Point B (Support 1, 2)
- Point C (Support 1, 2)
Conclusion
A, B, C, therefore - Thesis

It turns out that this is a terrible way to convey information. First, it's redundant - you end up making your point three times. Second, it's lossy - PowerPoint bullets are inherently terse. Third, it's a waste of your time. You would be better off reading a three-page paper that described the idea, or just reading the slides. At least reading the slides would only take ten minutes, instead of an entire hour.

Ugh.

The other extreme, Open Spaces, is an emergent ideal. You have a poster with rooms and times, people write down what they are interested in, and you show up and discuss. In Open Spaces, everyone takes on some responsibility to interact. This is better in that people are actually talking about things they are interested in, but it has a few challenges. The lecture model is familiar and easy for many people; Open Spaces make them uncomfortable. And while open spaces may work as a form of communal-learning, they ain't great for instructor-based training.

What could we learn about presentations from Blue Man Group?

Just about everything.

First off, Blue Man Group is three guys dressed in black, with every inch of exposed skin covered in some weird blue coating. They don't talk, but instead use gestures to convey meaning - successfully. Of the three, one is the leader, one is the curious one, and the third is the "big jerk." Despite the fact that they look identical, ten minutes into the act you can tell them apart by expression and behavior.

Then there is audience involvement. Not interaction - involvement. While you wait for the show to start, a flashing LED sign tells you the VIPs who are present, and asks you to wish various people "Happy Birthday." The staff hands out bits of headband-sized recycled paper, asking you to turn them into some kind of jewelry. (I wrapped mine around my arm; most people wrapped them around the head.)

During the show, the blue men turned all kinds of things (mostly PVC pipes) into percussion instruments, sprayed paint onto drums, splashed it everywhere and created art, had a backup band that played music, had various sets and skits that included video, and managed to make a point or two about the environment.

Whew. I haven't even scratched the surface.

The Blue Men brought an audience member on-stage, did Improvisational Comedy with her, then brought a second volunteer up, had him put on a poncho, tied him upside down, threw paint on him, then threw him against a piece of canvas to make art.

And it was all in good fun.

Ok. So what we have here is a simple message (like protect the environment) covered in art, music, video, and comedy. There was a simple script that the blue men could deviate from in the name of humor. Most of the events were scripted, but they came off as improv - like when someone came in late and the blue men shined a theatre light on them. That probably happens in every second show, but it sure felt improvisational.

So what should software training be?

-> Not open spaces and not a lecture model. Something in the middle that involves more senses than just hearing. We need to give examples, provide exercises, and let the audience steer the work. More than being told how to gather requirements, our audience needs to experience and feel it for themselves. This feeling of having done it reinforces training in a way that no metrics ever could.

In my book, training should be fun, meaningful, and leave you with some idea of what to do on Monday. Blue Man Group is theatre, it is entertainment - but I think it gave a better combination of meaning and what to do than many software talks.

If a SW talk isn't fun and meaningful, no one is going to remember anything else about it.

This doesn't feel done, but it's late. What do you think?

Non-Functional Testing?

There's an interesting discussion going on at the Association For Software Testing discussion list on non-functional testing. Basically, Danny Faught thinks the term is weak and is looking for alternatives. I happen to agree; here's my reply:

Danny Faught Wrote:
>Yes, "non-functional" is a rather bizarre term, though it seems to be gaining acceptance.

Yes, that's kind of a weird term. Personally, I have a problem with "requirements" in most commercial organizations. I have worked in organizations where the requirements process is followed to the letter, the documents are created, and someone comes over to the engineering group and asks "How long will this take to build?"

We reply "Well, as written, about two years. But if you take out bullet points C, F, and G, I can do it by myself in about a month." And, magically, there is a new version of the document with C, F, and G removed.(*)

If that was the case, could we ever really say that C, F, and G were "Required"?

Non-functional testing has the same problem. It is lingo that doesn't -quite- match up with common-sense English. Now, I have given up on trying to correct the "requirements" term; it's just too entrenched. Then again, eliminating "non-functional" ... we might have a shot at that.


I like Para-functional.


--Matthew Heusser
(*) - I am exaggerating for effect here.