--- In email@example.com, "adam_peter.knight"
>I was recently reading Lisa and Janet's book. In it
>it mentions that Janet's team "release every few iterations
>and might even have an entire iteration's worth of endgame
>activities to verify release readiness."
At Socialtext, we've released to production after /every/ iteration for something like 34 of the past 37 iterations. We use 2 week iterations. Let's join our process, already in progress:
Second Tuesday, Wednesday of iteration 1: Product Management works on iteration 2 stories; Dev develops iteration 1 stories; QA tests the iteration 1 stories that developers have finished and marked "in QA", via story-tests, newly written Selenium RC test cases, and exploratory methods.
Second Thursday, Friday of iteration 1: devs estimate stories for iteration 2, and PM assembles an iteration 2 story pool. Code is 'closed' to new stories: the master branch is cut to an iteration-2009-MM-DD branch, and devs finish existing stories on the branch and move them to QA. QA tests existing stories via story-tests and exploratory methods on master, then on that branch.
Last Weekend of iteration 1: Release-candidate testing begins (Selenium/automated test suite) on the iteration-2009-MM-DD branch.
First Monday of iteration 2: Devs/QA 'sign up' for stories for iteration 2, story kickoffs happen with the whole team, and QA continues release-candidate testing: exploratory coverage of features that aren't automated, slideshows, and exploratory testing in general.
First Tuesday of iteration 2: Devs develop stories on master for iteration 2, QA performs upgrade tests for iteration 1.
First Wednesday of iteration 2: (hopefully) the upgrade goes to the staging server, and devs develop stories. The first few stories may be in QA. QA begins story-testing, works on the Selenium automated-test execution/evaluation backlog, or may perform upgrade testing if the upgrade will go to appliances.
First Thursday of Iteration 2 ... Second Tuesday: Developers develop on master, QA tests on master.
Iteration 1 branch will probably go to prod around the second Wednesday of iteration 2.
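To make the cadence above concrete, here's a rough sketch of the two-week calendar as code. The day offsets are my reading of the schedule (counted from the first Monday of iteration 1), and the start date, labels, and branch-naming helper are all illustrative, not our actual tooling:

```python
from datetime import date, timedelta

# Sketch of the two-week cadence described above. Offsets are days
# from the first Monday of iteration 1 (Mon=0); labels are illustrative.
SCHEDULE = {
    8:  "PM plans next iteration; Dev/QA work current-iteration stories",
    10: "devs estimate next iteration; master cut to iteration branch",
    12: "release-candidate testing starts on the iteration branch",
    14: "story sign-up and kickoffs for the new iteration",
    15: "new stories on master; upgrade tests for last iteration",
    16: "upgrade goes to staging (hopefully)",
    23: "last iteration's branch goes to prod (approximately)",
}

def branch_name(cut_day: date) -> str:
    """Branch naming used above: iteration-YYYY-MM-DD."""
    return f"iteration-{cut_day:%Y-%m-%d}"

start = date(2009, 6, 1)          # a Monday; purely illustrative
cut = start + timedelta(days=10)  # second Thursday of iteration 1
print(branch_name(cut))           # iteration-2009-06-11
```

The point of laying it out this way: the branch cut on day 10 and the prod push around day 23 overlap, so there's always one iteration in development on master while the previous one hardens on its branch.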
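For flavor, a story-test in the Selenium RC style looks roughly like this. This is a sketch, not our actual suite: the real tests drive a Selenium server via the RC client, which I've stubbed out here so the skeleton is self-contained, and the class names, page path, and staging URL are all made up:

```python
import unittest

class StubSelenium:
    """Stand-in for the Selenium RC client (the old Python bindings'
    selenium.selenium class). A real run talks to a Selenium server;
    this stub just records navigation so the sketch runs anywhere."""
    def __init__(self, host, port, browser, base_url):
        self.base_url = base_url
        self.location = None
    def start(self):
        pass  # real client launches a browser session here
    def stop(self):
        pass  # real client tears the session down
    def open(self, path):
        self.location = self.base_url.rstrip("/") + path
    def get_location(self):
        return self.location

class DashboardStoryTest(unittest.TestCase):
    """Story-test skeleton: one test method per acceptance criterion,
    run against the iteration branch's release-candidate build."""
    def setUp(self):
        # Real suites pass e.g. "*firefox"; the URL is illustrative.
        self.sel = StubSelenium("localhost", 4444, "*firefox",
                                "http://staging.example.com")
        self.sel.start()
    def tearDown(self):
        self.sel.stop()
    def test_dashboard_loads(self):
        self.sel.open("/dashboard")
        self.assertTrue(self.sel.get_location().endswith("/dashboard"))
```

QA writes these as stories close during the iteration, then the accumulated suite becomes the automated part of release-candidate testing on the branch.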
Essentially, because we release to prod /every/ iteration, we pay a moderate regression-testing cost /every/ iteration, but that pain never "batches up" across releases. This is a pretty quick summary. For more details, you can consult my chapter of "Beautiful Testing" from O'Reilly. It will be available in October of 2009, but you can pre-order it from Amazon TODAY:
There are some other people you may have heard of who have contributed to the book, like that "Crispin" person. Something about a donkey, I can't remember exactly.
(PS: Now that I've beaten up on metrics, I have more actual /good/ example metrics, still to come!)