Tuesday, September 11, 2007

Warning: Frustrating Toolsets Ahead

The opinions expressed here are in the context of working on relatively short-term projects against a legacy code base. If this is not your situation, adjust or ignore as appropriate.

I love tools. I'm a huge believer in using the right tool for the job. Perhaps this is because I can use that crutch as a good excuse to hit the local big orange box when I start a new project. Regardless, I've found that tools that some have labeled "frustrating" have been downright helpful and productive in our projects.

Last week, Scott pointed out that perhaps user stories don't make good test artifacts. I'm inclined to agree. Jeremy has also reminded us of the value of testing small before testing big here. While I see the value of this, I don't agree that the small (unit) tests all need to be written before the larger (integration) tests.

To provide a bit of background, my team largely works on writing Java applications that import data from a variety of sources into the existing system at DrillingInfo. To be clear, we don't work much with the web front-end. Our main job is bringing new data sources on-line and making the data make sense within our existing domain model. Therefore, the internal customers are only tangentially concerned with the appearance of our final product. They are really more concerned that data makes it into the system in a way that fits the existing system. They might even express their desires in terms of how new data should appear on the web site. Those requirements are then (roughly) translated into story cards and the story cards are reviewed by the team and the customer.

With that background, I say that the story cards take a relatively minor role in driving day-to-day development. At the start of any iteration, the story cards are used to spin out tasks. Some tasks are features (the process should convert incoming data with a value of 'x' to the value of 'y'); others are technical (set up the demo server). Then, the team estimates complexity for the individual tasks. During an iteration, we look at the task cards -- not the story cards.

On a day-to-day basis, we tend to use Fitnesse to provide direction for the unit tests that need to be written. Our goal when looking at a new feature is to have a pair sit down and sketch out a Fitnesse test. Once that has been done, we'll sit down with the customer and make sure that the behavior outlined in the Fitnesse test is what they expect. Then, we write the fixture code to clear all of the Method Not Found exceptions that the Fitnesse test initially causes. At this point we have a nice, red, failing test for a feature.
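To make the fixture step concrete, here's a minimal sketch of the kind of code that backs a column-style Fitnesse table for the "convert 'x' to 'y'" feature mentioned above. In a real project the fixture would extend fit.ColumnFixture (public fields receive the table's input columns; methods produce the values Fit compares against the expected columns); the class and method names here are invented for illustration, not our actual code.

```java
// Hypothetical sketch of a Fit-style column fixture. In practice this
// would extend fit.ColumnFixture; it is shown standalone here.
public class ConvertStatusFixture {
    // Fit would populate this public field from the table's input column.
    public String incoming;

    // Fit would call this method and compare the result to the
    // expected-value column in the test table.
    public String converted() {
        return convert(incoming);
    }

    // The production conversion rule the fixture exercises: the story
    // card says incoming 'x' should become 'y'; everything else is
    // normalized (behavior invented for the sketch).
    static String convert(String raw) {
        if ("x".equals(raw)) return "y";
        if (raw == null || raw.trim().isEmpty()) return "UNKNOWN";
        return raw.toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(convert("x"));         // y
        System.out.println(convert("abandoned")); // ABANDONED
    }
}
```

Until `converted()` and its siblings exist, running the Fitnesse table is what produces the Method Not Found markers; writing the fixture clears them and leaves the table red on the actual values.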

After getting the large test in place (and failing), we start looking at where the system needs to be extended, perhaps do a little whiteboarding, and start writing unit tests. The failing Fitnesse test helps keep us on target. Quite often, one of the developers in the pair will have to say, "now, what Fit test are we trying to make pass?" Just like with unit testing, we sometimes find other behaviors that need to be specified. Usually one half of the pair will pull off and capture enough information on a task card so that we don't forget it.
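The "small" tests under the failing Fit test look something like this: each one pins down a single rule in isolation, far below the level the customer sees. The class and rule here (whitespace and case normalization of an incoming field) are hypothetical stand-ins, not our actual code.

```java
// Hypothetical unit-level test written while the larger Fitnesse test
// is still red: verify one small normalization rule on its own.
public class FieldNormalizerTest {
    // Invented production method under test: incoming feeds pad values
    // with whitespace and vary casing, so normalize before converting.
    static String normalize(String raw) {
        if (raw == null) return "";
        return raw.trim().toLowerCase();
    }

    public static void main(String[] args) {
        assertEquals("x", normalize("  X "));
        assertEquals("", normalize(null));
        System.out.println("all green");
    }

    static void assertEquals(String expected, String actual) {
        if (!expected.equals(actual))
            throw new AssertionError(
                "expected <" + expected + "> but was <" + actual + ">");
    }
}
```

A handful of tests like this go green long before the Fit table does, which is exactly the point: the Fit test keeps the pair aimed at the feature while the unit tests drive out each small piece.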

The shortcomings of Fitnesse have been well articulated (here and here and here). I'll say that the one I find most frustrating is the lack of refactoring support. When I change the wording in my tests, I have to go change method names by hand. But, since our projects are relatively small (1 - 2 months), we don't end up with so many Fitnesse tests that this is a large problem.

Because we use the Fitnesse tests to drive discussions with our customers, nobody has ever come back and said, "well, the tests that you've got in place are great, but the system isn't really doing what I want." While looking at Fitnesse, the customer does not forget what they wanted, and they often remember things they forgot to ask for initially. The story cards aren't useless; they just don't get used much once the team gets going.

All in all, I'd say that we hit a pretty good flow of writing Fit tests, having conversations, writing unit tests, and making everything green. It's certainly possible that on a longer-term project with more features, we would find ourselves frustrated by the tools that we currently find to be productive.
