
Thursday, July 12, 2007

Test Automation - III

Charlie Audritsh asked:

"I take you to mean what I'd refer to as a regression test. A test of mostly the old functionality that maybe did not change much.

So yeah. I have to admit there's a low likelihood of finding bugs with this. What nags at me though about this idea is that I still feel like regression tests are pretty important. We want them *not* to find bugs after all. We want to know we did not break anything inadvertently, indirectly, by what we did change."


I believe our emphasis on regression testing is mostly historical accident, and here's why:

Big projects (and LANs) became popular about ten years before version control did. So, at the beginning of my career, when dinosaurs still roamed the earth, while I was working on a bug fix, another developer might be working on new functionality. If I saved my fix before she saved her changes, she would "step" on my changes and they would be lost.

Thus the software would "regress" - fall back to an earlier state where a bug was re-injected into the code. The longer you delayed integration, the more likely that was to happen.

Today we have version control with automatic merge, automated unit tests, and continuous integration servers. So this huge tar pit of regression just isn't as bad as it used to be. In fact, I distinctly remember reading a paper showing that only about five percent of the bugs introduced in the wild today are "true" regression bugs - bugs re-introduced by mistake.

Of course, there's the second type of regression, which is to make sure that everything that worked yesterday works today. I'm all for using every tool at our disposal to ensure that, but I find that automated customer acceptance tests (FitNesse with fixtures) are very expensive in terms of set-up time, yet don't offer much value. Sure, the customer can walk by at any time, press a button, and see a green light. Cool. But in terms of finding and fixing bugs?
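For context on what a fixture buys you, here is a minimal Python sketch of the table-driven style that FitNesse fixtures support. The `discount` rule and the table rows are hypothetical, invented for illustration; a real FitNesse setup would express the table as a wiki page and the glue code as a fixture class.

```python
# Hypothetical system under test: a pricing rule.
def discount(order_total):
    """Return the discount percentage for an order total."""
    return 10 if order_total >= 100 else 0

# A FitNesse-style decision table: inputs and expected outputs,
# the kind of table a customer could read and extend.
table = [
    # (order_total, expected_discount)
    (50, 0),
    (99, 0),
    (100, 10),
    (250, 10),
]

def run_table(rows):
    """Run every row; return (passed, failed) counts - the 'green light'."""
    passed = failed = 0
    for order_total, expected in rows:
        if discount(order_total) == expected:
            passed += 1
        else:
            failed += 1
    return passed, failed

print(run_table(table))  # (4, 0) while every row still passes
```

The glue code itself is trivial; the set-up cost in practice comes from wiring tables like this to a real application, keeping the fixtures compiling as the code moves, and maintaining the wiki pages.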

If there is never enough time to do all the testing we would like to on projects, then I believe we are obligated to do the testing that has the most value for the effort involved.

And when it comes to bugs, I believe this is critical, investigative work done by a human. Assuming the devs have good unit tests in place, re-running tests from last week (or creating a framework so you can) probably has a lot less value than critical investigation right now, in the moment.

... but don't get me wrong. For performance testing, for example, you need to use a tool, type some stuff in, and then evaluate the results. What I'm saying is that writing code on top of that - to get a single button that evaluates the results for you and shows a green or red bar - might not be the best use of your time.

I am very much in favor of model-driven testing, but with every release, the model needs to change to test the new functionality.
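As a rough illustration of the model-driven idea, here is a hedged Python sketch: a hypothetical two-state login widget is driven by a random walk over a finite-state model, and the test fails the moment the implementation diverges from the model. All names and states here are invented for illustration.

```python
import random

# Hypothetical two-state model of a login widget: for each state,
# which actions are legal and where each action leads.
MODEL = {
    "logged_out": {"login": "logged_in"},
    "logged_in": {"logout": "logged_out", "refresh": "logged_in"},
}

class LoginWidget:
    """Stand-in for the real system under test."""
    def __init__(self):
        self.state = "logged_out"

    def do(self, action):
        if action == "login" and self.state == "logged_out":
            self.state = "logged_in"
        elif action == "logout" and self.state == "logged_in":
            self.state = "logged_out"
        elif action == "refresh" and self.state == "logged_in":
            pass  # refresh keeps us logged in
        else:
            raise RuntimeError(f"illegal action {action!r} in {self.state!r}")

def random_walk(steps=100, seed=0):
    """Drive the widget with random legal actions; fail on any divergence."""
    rng = random.Random(seed)
    sut, model_state = LoginWidget(), "logged_out"
    for _ in range(steps):
        action = rng.choice(list(MODEL[model_state]))
        sut.do(action)
        model_state = MODEL[model_state][action]
        assert sut.state == model_state, "implementation diverged from model"
    return True
```

When the feature set changes, you change `MODEL`, and fresh walks through the new behavior come for free - which is exactly why the model has to change with every release.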

"Set it and Forget it" customer acceptance testing?

Not So Much ...

3 comments:

Ben Simo said...

I use model-based automated regression GUI testing. These tests often find bugs that made it past unit testing. These tests are much easier to maintain than traditional scripted automation.

I believe there is value in automated regression testing via the GUI, but it is rarely cost-effective with traditional methods. And when it is said to be cost-effective, I suspect it is because the application is not changing. If the application is not changing, we may not need to be testing it. Yet we may need to test to determine whether it has changed. :)

I've been told that keyword-driven automated GUI regression testing can be used to achieve 95% automation with 5% of a team's resources. I don't believe it. I can't think of a context in which this is likely to be true.

Shrini Kulkarni said...

Ben -

>>> These tests often find bugs that made it past unit testing.

What makes these model-based GUI regression tests (based on a formal finite-state-machine model, I assume) able to find bugs that unit tests miss? Is it because unit tests are very basic in nature - implemented more as "change detectors"?

>>> Yet, we may need to test to determine if it has changed. :)

I believe unit tests can be applied more effectively than GUI-level regression tests to check whether the application has changed.

A silly question, though - do we need to test an application to see if it has changed? Would the developer, the specs, or some other kind of formal communication help us know what has changed?

I can think of two types of changes: there are "intended" changes (per some spec/design doc and the developer) and there are "actual" changes.

Automation (unit or GUI level) seems good at detecting "intended" changes (to the extent possible), but the "actual" changes that happen may require skilled human testing.

So when thinking about automation and checking whether the code has changed, it is useful to ask what kind of change you are trying to detect.

The closer your automation code is to the product code (as with TDD-style unit tests), the more effective your "intended" change detection will be.

You can think of the closeness of automation code to product code as a zoom level. GUI tests are at 1x zoom, whereas TDD tests would be at, say, 30x-100x, depending on the complexity and architecture of the application.
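A hedged sketch of what a high-zoom, TDD-style "change detector" might look like in Python - the `shipping_cost` rule and its numbers are hypothetical, invented only to show how close-to-code tests pin behavior:

```python
# Hypothetical product code under test.
def shipping_cost(weight_kg):
    """Flat rate up to 2 kg, then a per-kg charge beyond that."""
    return 5.0 if weight_kg <= 2 else 5.0 + 1.5 * (weight_kg - 2)

# High-zoom, pytest-style change detectors: any intended change to
# the pricing rule makes one of these fail immediately, at the exact
# function that changed.
def test_flat_rate_band():
    assert shipping_cost(1) == 5.0
    assert shipping_cost(2) == 5.0  # boundary of the flat-rate band

def test_per_kg_band():
    assert shipping_cost(4) == 8.0  # 5.0 flat + 1.5 * 2 extra kg
```

A GUI test covering the same rule would sit many layers away from it, so a failure points at "something in checkout", not at `shipping_cost` - that distance is the zoom level.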


>> I've been told that keyword-driven automated GUI regression testing can be used to achieve 95% automation with 5% of a team's resources. I don't believe it.

Are you referring to Hung's book "Global Test Automation"?

There was a detailed discussion of this on the software-testing Yahoo! list -- it was about Action Based Testing (automation) ...

Shrini

Shrini Kulkarni said...

Matt --

You have mentioned two types of regression.

One is due to code integration issues (bug fixes and new features) - Type I.

The other is a change in the notion of "worked" - Type II.

Michael Bolton once told me this about regression testing: when people say that "the objective of regression testing is to ensure that what was working yesterday is working today as well," what they actually mean (and do) is "regression testing to make sure that tests that were passing yesterday are passing today as well."

This is because "it works" = "it appears to satisfy some requirement to some degree in someone's understanding or viewpoint" (I think James said this .. I am not sure ...)

So one good way to think about regression testing is to have a good battery of tests and make sure they pass on every build. Make no mistake, though - when these tests pass, don't declare that features that were working before are working now.

Note the difference between "works" and "tests pass". The latter is a more precise and safer claim to make than the former.
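A tiny hypothetical Python example of the gap between the two claims - the function and its regression test below are invented; the test passes today exactly as it did yesterday, yet the function does not "work" for inputs the test never probes:

```python
def days_until(deadline, today):
    """Hypothetical: days remaining until a deadline (day numbers)."""
    return deadline - today  # bug: goes negative for past deadlines
                             # instead of reporting zero days left

# Yesterday's regression test - it still passes today.
assert days_until(10, 7) == 3

# "Tests pass" is the precise claim. "It works" is the broader one:
# a deadline already behind us shows -2 days remaining, a case the
# battery of tests above never exercised.
print(days_until(5, 7))  # -2
```

The green bar licenses only the narrow claim; the broad claim needs a human asking which inputs the battery never tried.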

BJ Rollison once blogged about regression testing strategies - Michael and I argued with him about the usage and definition of the term "regression testing".

You can read it here
http://blogs.msdn.com/imtesty/archive/2007/01/10/regression-testing-strategies.aspx


As you have rightly mentioned, Type I regression bugs can be contained to a reasonable degree by careful and strict implementation of source control procedures.

TDD-style unit tests can also help by acting as ("intended") change detectors.

Type II regression bugs are the really freaky ones. The world over, stakeholders complain about testing costs being high - it is Type II regression testing they are talking about. The world over, tool vendors make hefty profits by selling automation tools that promise to automate this *boring* yet "test to check ... just in case ..." kind of testing. Here too, the cost savings and reduction in testing cycle time promised by tool vendors relate to checking whether tests pass.

As I write this, I can think of a third type of regression: tests failing due to a platform upgrade (OS, web server/app server, or even hardware) or an application version upgrade. This kind of regression is more common in the ERP space ... I was told that the majority of spend in ERP implementation and maintenance goes to testing (Type III regression testing).

Here too, unit tests can help detect changes between the previous version's behavior and the new one's.

A big comment, right? Over to you ...

Shrini