As a fan of incremental/iterative methods, I like the idea of test automation.
In theory, everything should be retested every release, but with two-week iterations that simply isn't going to happen. With test automation, we can at least have some confidence that a release didn't introduce any major regressions outside of the features being deliberately changed.
So, it would be really nice to have a computer with a big button that says "test" that runs all the tests and comes back with a green (or red) light. For unit tests, this works in spades.
The problem is customer acceptance tests. There are some tools for automating acceptance tests, most notably FIT and FitNesse.
FIT and FitNesse take a set of inputs and compare the results against a set of expected outputs. To do this, they use 'fixtures' that call into the application, run a function, get the results, and compare them to the expected output. Since FitNesse is written in Java, it can have 'hooks' into your application -- if that app is also written in Java or another language FitNesse supports.
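To make the fixture idea concrete, here's a rough sketch of the FIT "column fixture" pattern in plain Python. Real FitNesse fixtures are Java classes wired up to wiki tables; the `discount()` function below is a made-up stand-in for application code, so treat this as an illustration of the pattern, not the actual tool.

```python
# Sketch of the FIT/FitNesse "column fixture" idea: each row of a
# decision table supplies inputs plus an expected output; the fixture
# calls into the application and marks the row pass or fail.
# NOTE: real fixtures are Java classes; discount() is a hypothetical
# stand-in for the application under test.

def discount(order_total):
    # Pretend application code: 10% off orders over 100.
    return round(order_total * 0.10, 2) if order_total > 100 else 0.0

# Each tuple is one decision-table row: (input, expected output).
table = [
    (50.0, 0.0),
    (150.0, 15.0),
    (200.0, 20.0),
]

def run_fixture(rows):
    """Call the app for every row and record pass/fail, FIT-style."""
    return [(inp, exp, discount(inp), discount(inp) == exp)
            for inp, exp in rows]

for inp, exp, actual, passed in run_fixture(table):
    print(f"{inp:8.2f} {exp:8.2f} {actual:8.2f} "
          f"{'pass' if passed else 'FAIL'}")
```

The customer writes the table; a programmer writes the fixture glue. That division of labor is the whole appeal -- and, as we'll see, the whole problem for integration work.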
This can work for standalone applications; logical silos that take an input, transform it, and provide an output.
Now the challenge:
----> Lately I've been working with IS shops, not software companies. These are organizations that support a traditional brick-and-mortar business. Instead of producing standalone apps, these organizations are more often integrating two applications.
The software being tested isn't the app itself (that was commercial off-the-shelf) but the data bridge between the apps.
For example, one application pulls data out of an ERP system, stores it as a flat file, and imports it into a financial system. The requirements are "Make the Financial System LOOK LIKE the ERP System for accounts A, B, and C."
Another application pulls data out of the ERP system and populates the data warehouse. A third creates a flat file that is sent to a trading partner; the requirement is "Make sure the trading partner knows all our gold members, the member IDs, and the eligibility dates."
Think about it - the requirement is to take one set of black-box data, and import it into another black box. We can test the data file that is created, but the real proof is what the second system accepts -- or rejects.
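The half of the problem we *can* automate is checking the file before it leaves the building: every row can be validated against the rules we know locally, even though the trading partner's acceptance logic stays a black box. Here's a small sketch of that idea; the pipe-delimited layout (member_id|tier|eligibility_date) and the field rules are hypothetical, invented for illustration.

```python
# Sketch of "test the data file": validate each row of the outgoing
# extract against locally-known rules. The file layout and field rules
# are hypothetical; the trading partner's real acceptance logic is
# still a black box we cannot see into.
import re
from datetime import datetime

def validate_row(line):
    """Return a list of problems with one row; empty list means OK."""
    problems = []
    fields = line.rstrip("\n").split("|")
    if len(fields) != 3:
        return [f"expected 3 fields, got {len(fields)}"]
    member_id, tier, elig_date = fields
    if not re.fullmatch(r"[0-9]{6}", member_id):
        problems.append(f"bad member id: {member_id!r}")
    if tier != "GOLD":
        problems.append(f"non-gold member in gold extract: {tier!r}")
    try:
        datetime.strptime(elig_date, "%Y-%m-%d")
    except ValueError:
        problems.append(f"bad eligibility date: {elig_date!r}")
    return problems

def validate_file(lines):
    """Map line number -> problems, for every bad row in the extract."""
    return {n: probs for n, line in enumerate(lines, 1)
            if (probs := validate_row(line))}

sample = ["123456|GOLD|2012-01-31", "12X456|GOLD|2012-02-30"]
print(validate_file(sample))  # row 2 fails: bad id, impossible date
```

A check like this catches malformed output, but it can never prove the second system will accept the file -- which is exactly the gap the post is about.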
And, no offense, but for some of these Apps, FIT isn't a very good fit.
First of all, the test databases are refreshed from production every three months. That means you either have to find test scenarios in live data (and hope they don't change the next time you refresh) or re-enter every scenario in test every three months.
Now, take the trading partner example. The best you can do within your organization is test the file. The interface might take three hours to run; then you grep the file for results and examine them. You'll have to write custom fixtures to do this, and your programming language isn't supported by FitNesse. Or you could write a fixture that takes a SELECT statement to count the number of rows that should appear in the file, run the interface, and compare.
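The row-count fixture might look something like this sketch. The table, the SELECT, and the extract format are all invented for illustration, and sqlite3 stands in for whatever the real ERP database is:

```python
# Sketch of the "count the rows" fixture: compare the number of rows a
# SELECT says should be in the extract against the number of lines the
# interface actually wrote. Schema, query, and file layout are
# hypothetical; sqlite3 stands in for the real database.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE members (id INTEGER, tier TEXT)")
db.executemany("INSERT INTO members VALUES (?, ?)",
               [(1, "GOLD"), (2, "GOLD"), (3, "SILVER")])

def expected_row_count(conn, select_count_sql):
    # A programmer supplies the SELECT; the fixture just runs it.
    return conn.execute(select_count_sql).fetchone()[0]

def actual_row_count(extract_lines):
    # Count non-blank lines in the file the interface produced.
    return sum(1 for line in extract_lines if line.strip())

extract = ["1|GOLD|2012-01-31", "2|GOLD|2012-01-31"]  # pretend output
expected = expected_row_count(
    db, "SELECT COUNT(*) FROM members WHERE tier = 'GOLD'")
print(expected == actual_row_count(extract))  # True: 2 rows either way
```

Note that the check is only as good as the SELECT -- which is the objection raised next.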
Of course, a programmer is going to have to write the SELECT statement. Is it a valid acceptance test?
Or you could make the row-count fixture approximate - "between 10,000 and 15,000" - customers could write this, and it guarantees that you didn't blow a join, but not much else.
You could write code that accesses the deep guts of the application, turning it sideways to generate a single member at a time, thus speeding up the acceptance test runs to a few seconds. That's great for the feedback loop, but it's more of a unit test than an acceptance test.
You could suggest I re-write the whole thing to use web services, but that introduces testing challenges of an entirely different kind. To be frank, when I have a problem and people suggest that I re-write the whole thing without recognizing that it would present an entirely different set of challenges, it's a sign to me of naiveté.
I submit that all of these would be a significant investment in time and effort for not a whole lot of value generated.
So, I still want to write customer acceptance tests, but I'm not sure this is the right paradigm to do it. I also have a handful of tricks and techniques I have used over the years to make this easier. I will reveal them in a future post, but in the meantime, here's my challenge:
What would you suggest to solve this puzzle?
I should add that I don't think this is a trivial puzzle; more than half of the people I ask give an answer that I believe to be unsatisfactory. Can you do better?