Last week Sean and I had lunch with Elisabeth Hendrickson, where we kicked around our ideas on interface-based testing.
Working with Elisabeth was great. A lot of "gurus" like to take over the discussion, manipulating the meeting so that they come out the clear superior. The problem is that the meta-gaming can muck up the conversation - and the actual problem you are supposed to be solving gets lost.
At the very least, I expected the audience to beat up on our ideas - to give heavily critical peer review. Hey, we can take it.
Elisabeth did neither. She listened, engaged us in a dialog, and gave clear, insightful peer review without having to resort to word violence, and without fighting over position.
I am doubly impressed.
She also has "TestObsessed" T-Shirts that are pretty neat; you can buy them in a CafePress store.
This got me thinking about Excelon Development T-shirts. I'd use the XNDEV logo in the top-left, but that's not much printing for a shirt.
So I think I would like to put some pithy sayings on the back.
My current idea is something like this:
Excelon Development "Bug Stop 2007" Tour
March 23rd - IQAA, Indianapolis, IN
May 16th - STAREast, Orlando, FL
Aug 23rd - GTAC, New York City
Nov 7th - GLSEC, Grand Rapids, MI
But I dunno. Here are a few other ideas, offhand ...
"All your bugs are belong to us"
"Test Automation that doesn't suck"
"Making Promises -- and Keeping them -- what a concept"
"When you are tired of methodology mania and metrics madness, call us"
"Because People Over Process should mean something"
"Ask me about software testing"
Do you have a better idea?
If I use your idea and order shirts, you'll get a free shirt.
Thursday, May 24, 2007
Conferences - IV
First, a bit of housekeeping: in the United States, next Monday is Memorial Day, and I am about to take off on an extended holiday. So this may be my last post for a week.
Second, I've been doing a lot of research lately into the state of the practice of test automation, the state of the art, the state of the hype, and the difference between the three. (I'll get to the conference stuff, really.)
For example, one view of test automation is that it is a single big button that you push, which runs all the tests and then reports status. Either you run all the feature inputs and check expected results, or else you run a randomly generated set of tests against some oracle. For example - you are trying to test a statistics package, so you convert the inputs into formulas, plug them into MS Excel, and then check for correctness. (By "oracle," I mean a way of knowing what the right answer is - not the database, and not the Greek mystics.)
There is an interesting class of problems for which this kind of testing is very helpful - especially if, for example, you are F-Secure and you are using massive virtualization to test a thousand configurations on Linux at one time. Or if you have a simple Fahrenheit-to-Celsius conversion function, a good oracle, and a million inputs to test.
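To make the "oracle" idea concrete, here is a minimal sketch in Perl. Everything in it is invented for illustration: f_to_c() stands in for the code under test, and oracle() stands in for the independent source of right answers - in real life that might be Excel, or a trusted reference implementation.

    #!/usr/bin/perl
    # Sketch of random testing against an oracle. Both subs are
    # stand-ins: f_to_c() plays the code under test; oracle() plays
    # the independent way of knowing the right answer.
    use strict;
    use warnings;

    sub f_to_c { my ($f) = @_; return ( $f - 32 ) * 5 / 9; }   # code under test
    sub oracle { my ($f) = @_; return ( $f - 32 ) / 1.8; }     # trusted answer

    my $failures = 0;
    for ( 1 .. 1_000_000 ) {
        my $f        = rand(10_000) - 5_000;          # random input, wide range
        my $got      = f_to_c($f);
        my $expected = oracle($f);
        if ( abs( $got - $expected ) > 0.0001 ) {     # allow for float noise
            warn "FAIL: f_to_c($f) gave $got, expected $expected\n";
            $failures++;
        }
    }
    print $failures ? "$failures failures\n" : "one million inputs passed\n";

The shape of the loop is the whole point: generate an input, ask the oracle, compare. The checking is only ever as good as the oracle.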
Still, with that kind of testing, *all* the computer will check is the expected result. So if your web page has an expected result of a $100.00 total bill, you can verify you get the correct total ... but the computer will not check anything else. If the wrong navigation buttons are greyed out, or there are typos, or the HTML tables are broken, or there are other errors, then unless they are defined in the expected result, the computer won't catch them. In fact, Erwin Van Trier has gone so far as to recommend manual testing with no pre-defined expected result, because having one can stifle your thinking.
There are other kinds of test automation - or, at least, software that can help you test.
Here are a few:
- Digital voice recorders - I find that popping out of the testing world to document slows me down. With a voice recorder, you can talk about what you are testing and why, and what bugs you've found, so you stay "in the zone" without having to pop up a document. Then you can document afterward, because you've left a trail of breadcrumbs.
- Screen capture tools like Spector and Snagit. With Spector, you can turn on keyboard logging and video logging, so you know exactly how to reproduce the bug. With Snagit, you can make a movie of the bug occurring, creating a compelling story quickly and easily.
- Test Explorer. I first found out about this at the expo at STAREast in 2005 - see, told you this was about the benefits of conferences! Test Explorer is a tool that makes manual testing go faster by helping you record your charters, your sessions, and the bugs you find, as well as manual test cases. If you use session-based exploratory testing, it can even track your metrics for you - so you are accountable for how much time you've spent on which features.
- Little tools like Tasker, a Windows-based keystroke and mouse recorder. You can use Tasker to record a stress test (File->New) and then run the stress test a thousand times while you have lunch.
- Perl, Ruby, and other scripting languages that can process large amounts of data quickly. I find that I write a lot of one-off scripts with Perl to, for example, make sure that every line in a file is 655 characters long, that all the dates are between 1/1/2005 and 1/1/2008, that all member IDs are nine digits, that none are null, and so on. However, instead of having one big button, I usually have a half-dozen intermediate scripts and examine things along the way.
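Here is roughly what one of those one-off scripts looks like. It is only a sketch - the field positions (a nine-digit member ID in the first nine columns, a YYYYMMDD date right after it) are made up for illustration, since every feed file has its own layout:

    #!/usr/bin/perl
    # One-off record validator - the throwaway kind described above.
    # Field positions are hypothetical; adjust them to your file.
    use strict;
    use warnings;

    while ( my $line = <> ) {
        chomp $line;

        print "line $.: length is ", length($line), ", not 655\n"
            if length($line) != 655;

        next if length($line) < 17;    # too short to slice fields out of

        # assume a nine-digit member ID in columns 1-9
        my $id = substr( $line, 0, 9 );
        print "line $.: bad member ID '$id'\n"
            unless $id =~ /^\d{9}$/;

        # assume a YYYYMMDD date in columns 10-17
        my $date = substr( $line, 9, 8 );
        print "line $.: date '$date' out of range\n"
            unless $date =~ /^\d{8}$/
                && $date ge '20050101'
                && $date le '20080101';
    }

Run it with the data file as an argument, redirect the output to a file, and you have a quick audit trail of everything suspicious in the feed.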
You can also use tools like Tasker to do setup for expensive-to-set-up manual tests. Actually, that's another point I picked up from Jon Bach at the conference:
As testers, the things we do during the day can be classified into a few broad categories:
1) Testing
2) Bug Investigation (Found something, now I'm going to look around)
3) Setup
4) Documenting Bugs
5) Documenting Testing
6) Going to meetings
7) Reading someone else's documentation
---> For test automation, I'm in favor of two approaches. Yes, the obvious "big button o' tests," which can work in some situations - but also any tool that allows me to spend less time on numbers 2-7 so I can spend more time on number one.
Does that mean that a wiki is a test tool?
Absolutely.
But without conferences, I would not have heard of Test Explorer in 2005, or Wikis in 2003, or SpectorSoft probably ... ever ...
Wednesday, May 23, 2007
Conferences - III
So, I was going to blog about all the neat people I met at STAREast - Ben Simo, Dawn Hayes, Susan Herrick, Shrini Kulkarni, Iris Trout, and Dani Almog topping the list. I believe these people are what Cem Kaner refers to as the "Next Generation Test Architects."
Then I was going to blog about the vendors and the expo. Despite Dani's doubts, I think there were at least two vendors with new, interesting ideas - of course, if you've never been to one, then all the vendors had something to offer. (And not just trinkets for the kids, either ...)
But first, some breaking news!
Remember my promise to help you get to a conference? Well, I intended to wax philosophic about personal excellence and how to attract opportunities - but if you live on the eastern seaboard, it's here.
The Google Test Automation Conference is August 23rd & 24th. It's run by Google for reputation and excellence, not money, and it is free. That's right - if you are accepted, then all you have to cover are your travel expenses. It is two days, so your boss might actually let you off. More importantly, it is prestigious even to attend, and it's Google. Suddenly, convincing the boss to give you the time off just got a lot easier, right?
The interesting thing about a free conference is that you can have an application process even for attendees - and GTAC does. You can apply to attend GTAC here - applications are due by June 15.
I'll be there, giving a talk about, ironically enough, a particular form of developer-facing design verification called Interaction Based Testing.
If you can't make it but are interested, email me; I'm always looking for peer review.
If you can make it, then cool, let's chat at the conf!
If you live on the eastern seaboard and don't apply for this, then you void the guarantee. If you live elsewhere, keep reading - I'll get you to a conference, really ...
Tuesday, May 22, 2007
Tester/Dev Communications
The space between computer programmers and testers is shrinking - but there are still a lot of communication problems. For example, when I first met Harry Robinson, he was working at Microsoft, and he pointed out that Microsoft had a very hard time finding "true" developer/testers. Oh, they had plenty of good manual testers who were professionals and had read the literature. And they had plenty of programmers who wanted to write test frameworks - frameworks that allowed somebody else to define and automate tests. The problem was finding programmers who were excited about testing - who wanted to design good tests and then automate them against existing systems.
Harry was interested in me because I was a tester who likes to automate stuff, instead of an automator who likes to build frameworks.
I view this as a fundamental problem for dev/testers - software testing takes expertise. Taking a dev and saying "write test automation" is about as useless as telling a tester to write code.
And, truthfully, skilled testers aren't helping much. To explain this, I drew a picture:
First of all, complaining that programmers don't know how to test doesn't help them. Then we make it worse by adding a hurdle of test education: "Before you can talk to me about testing, go read Beizer, Myers, Jorgensen, Kaner, Bach, and Copeland" - all good books, but they don't help a developer write automation today.
For requirements, we finally figured out that you need two people in the room - both the customer and someone technical - to balance the cost and the business value. I submit that it's the same thing with test automation. Start by pairing a dev and a tester, and eventually grow the skill set so that both people can do test automation.
There are only a few people in the software world who are working on this. Brian Marick teaches courses in unit testing for developers, and Elisabeth Hendrickson works in that space as well. As for me, I'll be doing my part, speaking at the Google Test Automation Conference this August in New York.
We're getting close to bridging the gap between programming and test. Let's keep going.
UPDATE: Jon Kohl points me to this article he wrote in Better Software Magazine, expressing some of the same concepts.
Monday, May 21, 2007
Conferences - II
The thing about international conferences is that they are, well, international.
That means that the "offshore menace" we talk about at the water cooler is sitting at the table across from you.
This raises the question: how should I think about offshore development? On the one hand, we'd like to welcome offshore technical folks into the fold, to share and collaborate. On the other hand, on many projects we are competing directly against them for work - and the offshore folks have a huge economic advantage.
I've been struggling with this for some time. My Master's capstone was on outsourcing (not offshoring, actually), and I've published an article or two on the subject.
So here I am at STAREast, talking to Shrini Kulkarni, one of the two Indian bloggers I read regularly - and I am about to stand up and give a lightning talk on offshore development and testing.
Here goes ...
First of all, I think the economic disparity between offshore and local work is shrinking. The biggest software factory in India is Bangalore - a single city - with serious transportation issues. This means that the number of qualified people available to work jobs in India is bounded, but the demand keeps increasing. This drives wages up very quickly. Right now, from the spec sheets I see, you can pay about the same rate for a qualified tester in Nebraska, Idaho, or Iowa as you will for one in Bangalore. Also, American on-shore testing services tend to have fewer layers of management than large, CMMI-rated shops in India. Of course, you can pay more in the US by hiring IBM and pay less in India by hiring a small shop, but you get the generality.
Second, companies that outsource work take a huge productivity hit when they go past three time zones. Oh, you can use webcams and work evenings and such, but if your primary communication language is English, then you are subject to a great deal of vagueness and confusion. Keep in mind, English evolved in the Middle Ages to serve the needs of the populace; the educated folk had Latin. Unlike code, English just isn't well suited for communicating things precisely. What you need to do is periodically re-connect, assess, and close the communication gaps. With offshore work, this is very expensive.
Third, when outsourcing works, it sort of "scrapes off the bottom" - the work that is predictable, simple, easy, and doesn't add much value gets sourced to the lowest bidder. If you are doing technology work in the US, you don't want to be scraped off the bottom. And if you are leading and helping and collaborating with folks, you don't need to be afraid of the bottom-scrapers - because you are sitting at the top.
My conclusion is that outsourcing is good for good technologists (we will continue to gain expertise and reputation by helping the community), bad for bad technologists (who will be scraped off the bottom), but good for customers - who let the free market decide.
Because of the issues above, I think the two sides are about even. When I was ten, my Dad offered something like "may the best man win" as a value system. If US staffers are getting overpaid to do little work, heck, may the best man win. In the meantime, let's work together and collaborate, because iron sharpens iron.
I would also like to point out that the sheer price (in rupees) of a plane ticket from India to the States virtually guarantees that the internationals who attend conferences have something to say and share - these are people who are truly committed to the field. Let's work together.
Epilogue
Because of the physical distance, the only way to meet a Shrini or a Pradeep or a Harish Krishnakutty in person is to attend an international conference. And if you do, a lightning talk like the one above is literally just five minutes of the content that you'll get. As for how to get to the conference, again, read on ...
Sunday, May 20, 2007
New Article on DDJ.com ...
This week I published Are Tests Requirements? which is inspired by a discussion on the agile-testing list. As always, I appreciate any feedback.
Saturday, May 19, 2007
Conferences - I
I just got back from STAREast and had an absolute blast. Events like that are beneficial in so many ways. You can recharge your batteries, get new ideas, meet new people, and, often, realize that no, you are not insane - instead, the project constraints are. Just like the various 12-step programs, recognizing that is the first step out of the rabbit hole.
Now that it's over, I think back to all of the peers and colleagues I know who have never attended a software conference. Oh, there are lots of painful and valid reasons. The company won't pay for it - I can't get the time off - it's too expensive - the list goes on. Yet software development is hard. A sabbatical may be too much to wish for, but a few days a year to learn, rest and network is reasonable. We should all have that.
So, here is my suggestion. I'm going to start a series on software conferences, starting with all the neat things that happened at STAREast - why they are so important and why you should want to come. That's just to whet your appetite.
But then I'm going to explain the Matt Heusser, surefire, (almost) guaranteed way-to-get-yourself-to-a-conference-o-matic.
Boss too cheap? Doesn't matter.
Time off an issue? Not for this.
Travel budget frozen? No problem.
If you would like to change attending a conference from a wish to a reality, then, dear reader, read on ...
Monday, May 14, 2007
Limericks
Not only did I actually get a reader from Limerick, Ireland, last month, but May 12th was National Limerick Day.
In honor of that, Jeff Fry made a series of testing limericks - here's my favorite:
Certifications are always a-fruitin’
To make resumes highfalutin’.
If they don’t certify
What they try to imply -
Our craft, I’m afraid they’re pollutin’
Go ahead, read the whole list. It's not too bad, really ...
Sunday, May 13, 2007
Software Testing Skills - II
My last post was the quick, fast, off-the-cuff sort of answer I give to a mailing list on my lunch hour.
Paul Carvalho has actually taken time to do a more studied and considered introduction - here. Although he is coming from a different starting place, I think he's got some very good stuff there.
Also, Jon Kohl pointed out one thing missing from my list: improvisation. The guy has a very, very strong point.
I would be interested in knowing what you think is missing from the two lists. I would really enjoy putting together something like Paul has, but from my perspective. Perhaps if I started it as a series of blog posts ....
Friday, May 11, 2007
Recent post on the agile testing list
I just posted this on the Agile-Testing list; it is rough, uncouth, and just a start, but it was fun to write and I wanted to share.
If I could just polish the general ideas here, it might actually be the start of something ...
"Will be interested to know more how testing skills are different from approach/methods etc. Also, what are these skills? Can they be learned, if yes how? Will appreciate any literature on the subject."
Super cool.
I submit to you that testing is applied critical thinking and general systems thinking. So, this is my quick and dirty list of influences for how to sharpen your brain.
For critical thinking, I'd suggest the classics - Plato, Socrates, Aristotle, but they might bore you to tears. :-)
For something a bit more modern, consider "I, Robot" by Asimov, or maybe "Starship Troopers" by Heinlein. For extra credit, compare and contrast the movie to the book.
A bit more seriously, I got a *LOT* out of the course in "Logic and Rational Thought" that I took at Hood College in 1994.
The Course Description
The Text Book
The course included symbolic logic, truth tables, and decision trees - which are basically equivalence classes for smarties.
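To show what I mean - and this is a toy, with a made-up business rule - you can enumerate a truth table in a few lines of Perl and treat each distinct row as an equivalence class worth at least one test:

    #!/usr/bin/perl
    # Toy truth table: every combination of two boolean conditions.
    # Each row is an equivalence class you might cover with a test.
    use strict;
    use warnings;

    for my $logged_in ( 0, 1 ) {
        for my $is_admin ( 0, 1 ) {
            # hypothetical rule: show the admin panel only to
            # logged-in administrators
            my $show_panel = ( $logged_in && $is_admin ) ? 1 : 0;
            printf "logged_in=%d  is_admin=%d  =>  show_panel=%d\n",
                $logged_in, $is_admin, $show_panel;
        }
    }

Four rows, four classes - and the table makes it obvious which combinations you have, and have not, thought about.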
A few years later I took discrete math at Salisbury University, which covered similar material, with a bit of a math bent. Checking on Amazon, this book looks pretty good.
From the intro blurb:
"Discrete Mathematics and its Applications is a focused introduction to the primary themes in a discrete mathematics course, as introduced through extensive applications, expansive discussion, and detailed exercise sets. These themes include mathematical reasoning, combinatorial analysis, discrete structures, algorithmic thinking, and enhanced problem-solving skills through modeling."
For general systems thinking applied to IS, consider the works of Jerry Weinberg. "Quality Software Management: Volume I" and "Becoming a Technical Leader" are pretty good places to start.
Personally, I've gotten a good bit out of military science (http://www.xndev.com/articles/articles.htm - search for "Leading the Way"), and a little bit out of cognitive science and educational theory. (Read up on Richard Feynman.)
When I was a child, my Father taught me to play chess. I would strongly recommend any game that involves making choices, trade-offs, and optimizations. "Car Wars" by Steve Jackson Games is an oldie but a goodie; games by Avalon Hill are also good.
There's also anything by Orson Scott Card ("Treason" and the Ender Series come to mind) or "Who's to Say" by Norman Melchert.
Again, that is totally offhand. James Bach has a number of "testing challenges" that can strengthen your brain bone. Studying physics, math, and applied problem-solving (proofs) can help.
Hey Bolton, what did I miss?
--
Matthew Heusser,
Blog: http://xndev.blogspot.com
Thursday, May 10, 2007
Lightning Talk Mojo - II
Hopefully, in the past few posts, I convinced you that we do lightning talks every day - the questions are:
1) Do we do them well?
2) How can we improve?
My first advice is to give a lightning talk at a local user's group or conference. If you have never given a presentation before, it's a low-impact way to get into public speaking. If you do big talks at major conferences all the time, then lightning talks are a great technique to hone your points.
-> You can find a short article on giving a lightning talk here.
-> Mark Jason Dominus's Presentation Judo has some really good information on how to do a slide - and what not to do.
-> There's a video of Lightning Talks on Google Video Here.
The video is from an XPWestMichigan meeting; my talk starts about 28 minutes in. The video and audio aren't that great.
If you can actually hear what I am saying, I may offer an analysis of that lightning talk for next time --- but then again, that's pretty darn meta ...
Wednesday, May 09, 2007
Creative Chaos Link of the Day
The link of the day is: What Works in Software Development (Or - How To Be Lazy Without Really Trying) (HTBL)
HTBL is, essentially, an agile methodology for one person.
Mike Schwern is one of the Perl QA gurus; I took HTBL from him in 2003 at the Open Source Conference, along with Test::Tutorial. I use his definitions for 'test' and 'unit', which do not exactly have universal acceptance.
But they are good ...
Thursday, May 03, 2007
STAR East Lightning Talk LineUp!
I am pleased to announce the initial line-up for lightning talks at STAR East, 2007:
Scott Barber, Performance Testing in Five Minutes
James Bach, What I do when I see a product for the first time
Erwin Van Trier, Expect the right result
Matthew Heusser, Random Thoughts on Offshore Testing
Shrini Kulkarni, Traps in Test Effort Estimation
Marcia Knous, Don't Break The Web
Sivakumar Thekkenaduvath, Improving Test Automation Efficiency
Leland Smith, Selling Inspections
Michael Bolton, Rapid Test Estimation
---> We had two cancellations in the weeks leading up to the conference, so I will be stepping in. My thoughts on Offshoring will be the honest truth that so many are scared to discuss. My goal is to begin a genuine, non-buzzwordy discussion of the economics of outsourcing.
Of course, my opinion changes as my experience and knowledge change, and I research all the time. If you have strong opinions about outsourced testing, drop me a line, or look for me at the conference --- but do it before Thursday, May 17th!
Wednesday, May 02, 2007
Lightning Talk Mojo - I
Breathing readers of Creative Chaos know that I am facilitating Lightning Talks at STAREast 2007 - astute ones realize that I am a genuine fan of the concept.
I would like to tell you why.
This year, I've been doing a good bit of back and forth with the lightning talk speakers - encouraging them to turn off PowerPoint and really talk to the audience.
One speaker wrote back that doing a whole five-minute speech without PowerPoint assistance would be hard.
And he's right.
But that's bad.
PowerPoint is a crutch.
Here's the deep, dark secret of presentations:
Outside of the basic, introductory, everything-I-say-is-new type of talk, in a typical presentation, your audience will leave with just a few nuggets. Sometimes, you only have one nugget, and spend the entire hour beating the audience over the head with it.
Still, most of the time, the audience is listening for insight. The more insights you can sprinkle into your talk, the better.
So, let's assume that a good nugget takes about five minutes to explain. That means it should be possible to fit ten nuggets into a talk. Of course, different people need different things, so we'll assume that for any given audience member, only half of the "nuggets" will connect.
Now let's examine the typical 1-hour auto-content-generated slideware talk for a moment:
Start with time for ten nuggets
Intro - 10 minutes - Subtract two. Eight left.
Conclusion - 10 minutes - Subtract two. Six left.
Q&A - 15 minutes - Subtract three. THREE NUGGETS LEFT.
Divide by two, because only half of the ideas will be relevant to any audience member.
That means for a one-hour talk, you get to make about one and a half actual insights that can change behavior.
That is a lot of sitting around, waiting for something to happen, and not much happening.
How can we do better?
The Intro/Body/Conclusion/QA style is, well, redundant. If you study Toastmasters, they openly admit this - the whole point is to tell 'em three times, so your single point comes across.
That might work when you are briefing the boss, but at a technical conference, why prove one point when you could prove ten? Even if half your stuff doesn't apply, and the audience hates half that does apply, heck - you still get to make two and a half points an hour. :-)
I see at least two problems with this:
First, learning to make a point succinctly in five minutes is hard.
Second, the very cognitive format of PowerPoint, with its bullets and lists, tends to turn your stuff into marketing-ware. Trying to make one point with PowerPoint in one slide (or two) is, well ... hard.
My suggestions are -
1) Psychologists have discovered a method called "chunking" that people use to memorize extremely large pieces of material. Essentially, you take a big piece of data and split it into many small groups. For example, if you meet someone who has memorized pi to 1,000 digits, you will find that he probably hasn't memorized a thousand numbers at all. Instead, he has memorized a hundred-odd "chunks", where each chunk is five to ten numbers.
So is a one-hour talk a collection of ten "chunks"?
2) Look into other communication options like handouts, discussion, or writing on an easel. Read "The Cognitive Style of PowerPoint" by Tufte and "The Gettysburg PowerPoint Presentation" by Norvig.
3) If you really want to use PowerPoint, consider the "one big slide per point" approach. That means reading, listening to, and watching people who use this technique effectively - Tim Lister is a good one to follow.
4) Get really good at making a single point, making it well, and moving on. A good way to do that is to give lightning talks at conferences ...
Have I sold you on lightning talks yet?
If yes, your next question is probably "OK, so how do I get really good at chunking my talk?"
More to come.
Post-Script: Paul Graham has an article on similar themes about writing essays - you can find it here. Even if you don't agree with the guy, you'll enjoy the read, I promise.
Hosting Fix-ed!
Last week I switched hosting providers to GoDaddy. The only major mix-up I experienced was that I lost the link to Chapter 15 of Jim Brosseau's book proposal, which is now fixed and available on-line here. As always, we appreciate any comments you have.