Schedule and Events

March 26-29, 2012 - Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference of the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At liberty; available. Contact me by email:

Thursday, March 01, 2007

A Testing Challenge

As a fan of incremental/iterative methods, I like the idea of test automation.

In theory, everything should be retested every release, but with two-week iterations, that simply is not going to happen. With test automation, we can at least have some confidence that a change didn't introduce any major regression errors outside of the features obviously being changed.

So, it would be really nice to have a computer with a big button that says "test" that runs all the tests and comes back with a green (or red) light. For unit tests, I use these in spades.

The problem is customer acceptance tests. There are some tools for automating acceptance tests, most notably Fit and FitNesse.

Fit and FitNesse take a set of inputs and compare the actual results against a set of expected outputs. To do this, they use 'fixtures' that call into the application, invoke a function, collect the results, and compare them to the expected output. Since FitNesse is written in Java, it can have 'hooks' into your application -- if that app is also written in Java or another language FitNesse supports.
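
To make that concrete, a Fit column fixture is roughly a class whose public fields are the table's input columns and whose methods are its output columns. Here is a minimal sketch; the fixture name, column names, and the PricingEngine it calls are all made up for illustration:

    import fit.ColumnFixture;

    // Hypothetical fixture: Fit fills the public fields from each row of
    // the test table, then calls discountedPrice() and compares the result
    // against the expected value written in that row.
    public class DiscountFixture extends ColumnFixture {
        public double amount;               // input column: amount

        public double discountedPrice() {   // output column: discountedPrice()
            return new PricingEngine().priceAfterDiscount(amount);
        }
    }

    // Stand-in for the application under test, so the sketch is complete.
    class PricingEngine {
        double priceAfterDiscount(double amount) {
            return amount * 0.9;            // pretend business rule: 10% off
        }
    }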

This can work for standalone applications: logical silos that take an input, transform it, and provide an output.

Now the challenge:

Lately I've been working with IS shops, not software companies. These are organizations that support a traditional, brick-and-mortar business. Instead of producing standalone apps, these organizations more often integrate two applications.

The software being tested isn't the app itself (that was commercial off-the-shelf) but the data bridge between the apps.

For example, one application pulls data out of an ERP system, stores it as a flat file, and imports it into a financial system. The requirements are "Make the Financial System LOOK LIKE the ERP System for accounts A, B, and C."
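
One way that requirement could be turned into an automatable check -- a sketch only, assuming both systems expose JDBC connections, and using hypothetical table names (erp_balances, fin_balances) -- is to pull the balances for accounts A, B, and C from each side after the bridge runs and diff them:

    import java.sql.*;
    import java.util.*;

    // Hypothetical reconciliation check: after the interface runs, the
    // financial system should "look like" the ERP system for A, B, and C.
    public class ReconcileAccounts {
        static Map<String, Double> balances(Connection c, String table) throws SQLException {
            Map<String, Double> m = new HashMap<>();
            try (Statement s = c.createStatement();
                 ResultSet r = s.executeQuery(
                     "SELECT account, balance FROM " + table +
                     " WHERE account IN ('A','B','C')")) {
                while (r.next()) m.put(r.getString(1), r.getDouble(2));
            }
            return m;
        }

        public static void main(String[] args) throws Exception {
            try (Connection erp = DriverManager.getConnection(args[0]);
                 Connection fin = DriverManager.getConnection(args[1])) {
                Map<String, Double> a = balances(erp, "erp_balances");
                Map<String, Double> b = balances(fin, "fin_balances");
                System.out.println(a.equals(b) ? "MATCH" : "MISMATCH: " + a + " vs " + b);
            }
        }
    }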

Another application pulls data out of the ERP system and populates the data warehouse. A third creates a flat file that is sent over to a trading partner; the requirement is "Make sure the trading partner knows all our gold members, the member IDs, and eligibility dates."

Think about it - the requirement is to take one set of black-box data and import it into another black box. We can test the data file that is created, but the real proof is what the second system accepts -- or rejects.

And, no offense, but for some of these apps, Fit isn't a very good fit.

First of all, the test databases used are refreshed every three months from production. That means that you either have to find test scenarios from live data (and hope they don't change next time you refresh) or re-enter every scenario in test every three months.

Now, take the trading partner example. The best you can do within your organization is to test the file. The interface might take three hours to run; then you grep the file for results and examine them. You'll have to write custom fixtures to do this, and your programming language isn't supported by FitNesse. Or you could write a fixture that takes a SELECT statement to count the number of rows that should end up in the file, run the interface, and compare.
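
Here is a sketch of that count-and-compare idea, assuming one record per line in the feed file and made-up names throughout (partner_feed.txt, a members table with a tier column):

    import java.nio.file.*;
    import java.sql.*;
    import java.util.stream.Stream;

    // Hypothetical check: the SELECT predicts how many gold-member rows the
    // feed should contain; the file is counted after the interface run.
    public class FeedRowCount {
        public static void main(String[] args) throws Exception {
            long fileCount;
            try (Stream<String> lines = Files.lines(Paths.get("partner_feed.txt"))) {
                fileCount = lines.filter(l -> !l.isEmpty()).count();
            }
            try (Connection c = DriverManager.getConnection(args[0]);
                 Statement s = c.createStatement();
                 ResultSet r = s.executeQuery(
                     "SELECT COUNT(*) FROM members WHERE tier = 'GOLD'")) {
                r.next();
                long dbCount = r.getLong(1);
                System.out.println(fileCount == dbCount
                    ? "PASS" : "FAIL: file=" + fileCount + ", db=" + dbCount);
            }
        }
    }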

Of course, a programmer is going to have to write the SELECT statement. Is it a valid acceptance test?

Or you could make the row-count fixture approximate - "between 10,000 and 15,000" - customers could write this, and it guarantees that you didn't blow a join, but not much else.
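
Such a fixture might look like the sketch below, in the same column-fixture style as above; the bounds come straight from the customer's table, and InterfaceRun is a made-up hook that reports how many rows the last run wrote:

    import fit.ColumnFixture;

    // Hypothetical approximate-count fixture: the customer writes lower and
    // upper bounds in the test table; withinRange() is the checked column.
    public class RowCountBetween extends ColumnFixture {
        public long lower;   // input column, e.g. 10000
        public long upper;   // input column, e.g. 15000

        public boolean withinRange() {
            long actual = InterfaceRun.rowsWritten();
            return lower <= actual && actual <= upper;
        }
    }

    // Made-up hook; in real life this might count lines in the output file
    // or rows in a staging table.
    class InterfaceRun {
        static long rowsWritten() { return 12500; }
    }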

You could write code that accesses the deep guts of the application, turning it sideways to generate a single member at a time, thus speeding up the acceptance test runs to a few seconds. That's great for the feedback loop, but it's more of a unit test than an acceptance test.

You could suggest I re-write the whole thing to use web services, but that introduces testing challenges of an entirely different kind. To be frank, when I have a problem and people suggest that I re-write the whole thing without recognizing that it would present an entirely different set of challenges, it's a sign to me of naiveté.

I submit that all of these would be a significant investment in time and effort for not a whole lot of value generated.

So, I still want to write customer acceptance tests, but I'm not sure this is the right paradigm for doing it. I also have a handful of tricks and techniques I have used over the years to make this easier. I will reveal them in a future post, but in the meantime, here's my challenge:

What would you suggest to solve this puzzle?

I should add that I don't think this is a trivial puzzle; at least, more than half of the people I ask this of give an answer that I believe to be unsatisfactory. Can you do better?


Elisabeth said...

I started to write my answer in a comment, but it got too long. So I wrote a blog post instead.

Chris McMahon said...

As far as I'm concerned, the Bible for this kind of work is Gregor Hohpe's work at

In particular, check out

I've taken two approaches to this kind of work:

First, have tests that validate that any input to the system is processed correctly in the output. For instance, check that input Field 22 appears in the filename, and input Field 87 appears in the output file header, or whatever. This approach is amenable to risk-based analysis.

Second, account for the possible range of input data. Sometimes this means selecting records semi-randomly from the input system. Sometimes it means generating input data using semi-random methods.

Finally, run a LOT of tests. Run tests all night, run tests all week, run tests all month. Harry Robinson's work is a good reference for this sort of thing.
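
A minimal sketch of that first approach, assuming a made-up feed layout in which the output filename should carry input Field 22 and the header line should carry input Field 87:

    import java.nio.file.*;

    // Hypothetical field-mapping check: the field numbers and layout are
    // examples only; the real mapping comes from the interface spec.
    public class FieldMappingCheck {
        public static void main(String[] args) throws Exception {
            String field22 = args[0];   // value of input Field 22 for this record
            String field87 = args[1];   // value of input Field 87
            Path out = Paths.get(args[2]);

            boolean nameOk = out.getFileName().toString().contains(field22);
            boolean headerOk = Files.readAllLines(out).get(0).contains(field87);
            System.out.println(nameOk && headerOk ? "PASS" : "FAIL");
        }
    }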

Tim Van Tongeren said...

If the live data refreshed from production is exhaustive and you cannot find a test scenario, then you may technically have a bug in the code, but you will not hit it with the current production data (you may hit it in the future with new production data). Now you can wonder: "If a bug exists in the code but doesn't make any noise, did the tree really fall?"

One approach - get the live data from system A, run the data transformation/import into system B in a test environment, and get users from systems A and B in a room together to discuss whether system B properly represents the data from system A. The discussion should at least address some normal scenarios and several flavors of extreme/weird scenarios.

Mallikarjun Reddy said...

A few things to test, apart from what's already been mentioned:
-If A is pumping out data at a faster pace than B can take it in, there should be proper queueing logic in the bridge.
-If B suddenly goes down, will the data pumped in by A be queued?
-If the bridge itself is not able to process the data at the speed at which A pushes it in, what happens?

Code inspections.

Hubert Matthews said...

When faced with this sort of problem I usually try to reverse engineer the input files from the output files and compare. That way I can find fields and records that are missing, extra stuff, bad transformations, etc. Of course, this isn't always possible but it's a good approach if you can do it. You might have to extract information from the target system to check that the import was complete.
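
A sketch of that reverse-and-compare idea, assuming pipe-delimited files with the record key in the first column (both assumptions for illustration only):

    import java.nio.file.*;
    import java.util.*;
    import java.util.stream.*;

    // Hypothetical check: rebuild the set of record keys from the output
    // file and diff it against the keys in the input file; anything missing
    // or extra points at a dropped record or a bad transformation.
    public class ReverseAndCompare {
        static Set<String> keys(Path file) throws Exception {
            try (Stream<String> lines = Files.lines(file)) {
                return lines.map(l -> l.split("\\|")[0])   // key assumed in column 0
                            .collect(Collectors.toSet());
            }
        }

        public static void main(String[] args) throws Exception {
            Set<String> in  = keys(Paths.get(args[0]));
            Set<String> out = keys(Paths.get(args[1]));
            Set<String> missing = new TreeSet<>(in);  missing.removeAll(out);
            Set<String> extra   = new TreeSet<>(out); extra.removeAll(in);
            System.out.println("missing=" + missing + " extra=" + extra);
        }
    }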

Integration testing and architectural-level issues are a whole different kettle of fish, and something that the agile guys neatly avoid discussing. Unit tests aren't the answer to everything.

Ben Simo said...

If I understand, you have three different processes taking data out of one system and placing it into three different systems/formats:

1) Another application
2) A database
3) A flat file

The requirement to test is that the data in all three destination systems is the same as displayed by the source system.

Do I have this right?

Vondran said...

First of all: Great Blog! You really inspired me to learn more, thank you!
Would you say that Cubes are an option to solve this?
Once again, thanks for your contribution to the testing world.
Best Regards, André Vondran

fijiaaron said...

Rewrite the financial system and the ERP system. It will probably take less time, and you'll have a more stable, robust, featureful, user-friendly, and testable system to boot.

I hate those "enterprise" apps. I'm surprised, though, that you're even talking about using SQL, since things like Siebel, PeopleSoft, etc. do their worst to keep your hands off of your own data in your own database.

Seriously, though, that's the biggest challenge. Documenting your acceptance criteria and pointing tests at them is about all you can do. I'm working (a few hours a week) on a FitNesse-like tool, but there isn't a really good way to do it.

Cucumber ( is popular, but it runs into the same problem -- it's code, with a very limited grammar. I'm convinced that plain English documentation tied to good old-fashioned test scripts (either manual or automated) is the way to go.

I don't want to say it, but something like Test Director/Quality Center is where I'm headed, though using open source tools like Selenium and xUnit.