Friday, November 2, 2007

Alt.Net Mailing List - The Hidden Benefits

Since the Alt.Net mailing list saw its first post on October 8, 2007, the list has seen an average of 87 posts per day. The content is wonderful (now that we aren't arguing about the name so much).
This high volume has also done wonders for my ability to scan and speed read - a skill I had during my schooling but had seemingly lost. Since I started following this mailing list, I have seen much more zero-bounce in my blog reader - even though I have more feeds coming in.

Now, I can finally keep up with the volume of content coming from Ayende's blog.

Wednesday, October 31, 2007

Am I Doing Scrum? - Nokia's Benchmark for Scrum Adoption

Dr. Jeff Sutherland, an Agile Manifesto signatory, has posted an excellent interview, "Scrum and Not-Scrum". As part of this interview, he advocates eight questions that Nokia uses to determine if their teams have adopted Scrum. Jeff presents this as a boolean test: if the team can answer "yes" to all eight of these questions, they are doing Scrum. If any question gets a "no," then they have not fully adopted Scrum. Here are the questions:

First Tier:

  • Are you doing iterative development?
    • Do you have fixed iterations lasting less than six weeks?
  • At the end of the iteration, do you have working software?
  • Can the team effectively start work on an iteration without a detailed specification?
  • Is testing part of the increment?

Second Tier:
  • Do you have a product owner?
  • Does the product owner have a prioritized, estimated product backlog?
  • When the team is developing, do they have a burndown chart?
    • Can you calculate the team's velocity?
  • Is the team self-organizing?
    • In other words, does the team choose, assign, and map the fastest possible way to deliver the work?
    • The project manager cannot interfere with the team during an iteration.

I highly recommend watching this interview. It is only about 20 minutes long, and it is packed with good information including a summary of Google's boiled frog adoption of Scrum on the AdWords project.

Jeff also reminds us that Scrum will not solve your organizational problems, but it will make them painfully obvious.

Tuesday, October 30, 2007

Programmers Anonymous: Confessions of a Terrible Software Developer

Marc has posted his "Confessions of a Terrible Programmer." The overriding tone of the post can be summed up in the following pseudo-Zen quotes:


You will never become a Great Programmer until you acknowledge that you will always be a Terrible Programmer.

and,

You will remain a Great Programmer for only as long as you acknowledge that you are still a Terrible Programmer.

Marc does a very good job of stating how he overcomes his "terribleness" to provide working software. According to Marc, his solutions are doing a good job of hiding the fact that he is a terrible developer. However, I believe that agile practices present different solutions to these problems. In broad terms, Marc favors failing fast, whereas I favor multiple levels of testing, Test-Driven Design, and the fast feedback loops provided by good test coverage coupled with Continuous Integration.

To address more specifics, I have provided a summary of Marc's solutions along with where I think agile solves these in a different way.

  1. Marc says he favors strong typing to prevent problems. Having done most of my work in static languages (Java, C#), and some in a dynamic language (Groovy), at this point I prefer solid unit test coverage (100% with excuses). Test-first design helps with this too. Once I have good test coverage, those tests are unearthing the same problems that the compiler would. With the dynamic languages, I find the same errors a bit later, but I get the benefit of code that I find to be much easier to read.

  2. Marc favors programming assertions. Marc is a paranoid programmer. He will assert that something is not null even when he controls both sides of the interface - just in case he might change something later. I personally find that assertions and paranoid programming in general fall under the related headings of YAGNI and noisy code. Instead, I prefer solid unit testing and a tester that knows how to unearth the edge cases. Write unit tests that assert that the service in question does not return null to box in the behavior (see the sketch after this list).

  3. Marc says he will, "ruthlessly try to break [his] own code." It appears that Marc is trying to accomplish through developer testing what should be done by an actual tester. While I agree that developers should be generating tests that give great code coverage, it is a waste of time to make them switch hats and become a tester for their own code. Hire a tester. They think differently from developers. Their concerns are different.

  4. Marc favors code reviews. I favor pair programming. Both provide feedback, but I want my feedback while I'm "in the zone." I want my feedback immediately. I don't want you to sit back and wait for me to find my own bugs. If you see something, tell me. Tell me as soon as it looks like I've finished typing or as soon as it looks like I'm looking for the bug. This way, I can fix the problem without having to make the context switch to come back to it later. Plus, the quality of the review is better since the other developer should be equally engaged in the generation of the code while it is being generated.
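To illustrate item 2, here is a minimal sketch of boxing in not-null behavior with a unit test rather than a runtime assertion. JUnit 4 is assumed, and CustomerService is a made-up stand-in for "the service in question":

import static org.junit.Assert.assertNotNull;

import java.util.Collections;
import java.util.List;

import org.junit.Test;

public class CustomerServiceTest {

    // Hypothetical service standing in for "the service in question."
    static class CustomerService {
        List<String> findByName(String name) {
            return Collections.emptyList(); // empty by contract, never null
        }
    }

    @Test
    public void findByNameNeverReturnsNull() {
        // The contract is an empty result, not null. Pinning that down here
        // makes paranoid null checks in the callers unnecessary.
        assertNotNull(new CustomerService().findByName("no-such-customer"));
    }
}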


To be honest, our team is not able to follow all of these guidelines at the moment. Our biggest problem is not having a dedicated software tester embedded with the team. We ARE having to generate the types of tests that a dedicated tester should be writing. And we are missing some things that a tester would catch much earlier in the development process. I feel that this missing component of our team IS hurting our velocity.

Am I a terrible programmer? Yes. If you find me stating otherwise, please redirect me to some of my own code - something that I wrote yesterday should do just fine.

What are you doing to hide the fact that you're a terrible developer? I would love to hear.

Friday, October 26, 2007

How To Pass Command Arguments With a File Type

UPDATE: This issue appears to be fixed with the Groovy 1.5 Windows install package. However, this is still good general information to know about passing command arguments with file associations in Windows.

I just upgraded my Groovy install to the latest RC for 1.1, and it quit recognizing command arguments. I tried debugging all kinds of older scripts that I knew worked, and none of them were working anymore. That's when Steve reminded me that the Groovy install messes up the Windows file extension association. It installed the .groovy file association like this:

"C:\groovy\bin\groovy.exe" "%1"

And it needed to be changed to this:

"C:\groovy\bin\groovy.exe" "%1" %*

This allows the other command arguments to be passed when the Groovy script is kicked off.

To change the file association (in XP),

* Open My Computer
* Tools -> Folder Options...
* Click the "File Types" tab
* Find the file type you want to change, ".groovy" in this case, and click it
* Click the "Advanced" button
* Click the "open" action
* Click the "Edit..." button
* Edit the content of the text box under, "Application used to perform action:"
* OK / Close your way out of the dialogs.


Obviously, this will work for other file associations. This method, while it forces you through a few windows, allows you to avoid mucking with the registry directly.
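If you would rather skip the dialogs entirely, the built-in assoc and ftype commands make the same change from a command prompt. Note that the registered file type name varies by install - run the assoc line first to see what yours is actually called ("Groovy.Script" below is only a guess):

assoc .groovy
ftype Groovy.Script="C:\groovy\bin\groovy.exe" "%1" %*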

Monday, October 22, 2007

Don't Get Fooled

Junk is junk. Whether it comes in your (snail) mailbox or your Inbox, it's junk.

Scott has a post here asking how to teach common sense. Some will learn, some won't.

People that fall for scams fall for them whether they come by snail mail, telephone, or email. Email is the easiest to perpetrate. Therefore, more people are falling for email scams these days.

The unfortunate thing is that I haven't found a way to foist pain back onto those that are populating my Inbox with crap. Actually, my filters are pretty good these days, so I don't see most of it.

With telephone scams, my game is (when I have the spare time) to keep them on the phone for as long as possible. I once kept a scammer on the phone for two hours while he kept trying different ways to get me to tell him my bank account and routing number. At least for those two hours, I knew that THAT guy was not scamming someone's grandmother.

Wednesday, October 10, 2007

Why Are You Asking Me This Question?

Any developer that's been around this industry for any length of time has found himself in a conversation like this one:

Manager: Hey, Mr. Developer, TheBusiness has decided that we need to develop a product to do XYZ. How long will that take?
Developer: Hrm... I'm not sure. What are the details of XYZ?
Mgr: The details aren't important. I just need an estimate.
Dev: Well... something like 8 weeks.
Mgr: Okay, well how much longer until you finish the thing you're working on?
Dev: Around 4 weeks
Mgr: Okay. Thanks.

[10 weeks later]

Mgr: So, Dev, I'm looking forward to having this XYZ product. We've already sold it to 50 customers and they're excited to be getting it in two weeks.
Dev: WHAT?!?!?!

This is not a fun series of conversations. The breakdowns in communication should be obvious. The manager (unknowingly) expected way too much precision from the developer's estimate, and the developer gave it without nearly enough qualifying statements pointing out that it's a WAG.

So, how can this be fixed? Many developers that I know say to just not provide the estimate. While most of us could get away with that, it doesn't necessarily help the conversation. I prefer to answer with, "Why are you asking me this question?" This is a question that I learned to ask by watching a former team lead, Glenn Burnside. Whenever a salesperson or manager would ask the team a question like, "How difficult would it be to...," Glenn's response was always the same - he would ask this question. This always resulted in a conversation that fleshed out a more detailed question (or questions) and much better, more precise answers.

The hardest part of software development is interacting with those objects that have a pulse - not those that have a system clock.

Sunday, October 7, 2007

altnetconf - Cool Tools Don't Make Cool Software

I'm pretty sure that Jimi Hendrix could make awesome music with a cheap guitar. Likewise, I could drop $5000 on a beautiful vintage guitar, and I still wouldn't be going on tour with anyone. The artist makes the tool work. While good tools make the artist's life a bit easier, they don't make the artist.

Likewise, it is perfectly possible to write clear, maintainable software without the use of things like inversion-of-control containers and mock frameworks. This weekend at the Alt.Net conference, I spoke with some very smart developers that don't use these tools that I take for granted - tools that I even view as necessary items. One developer that I spoke with stated that he has yet to find a need for a mocking framework. Instead, he prefers the Testable Object pattern. I was completely taken aback until I grokked his explanation. While I'm not going to drop my use of EasyMock, I see that this is certainly a valid way to do things.

I had the opportunity to speak with another developer this weekend - a brilliant developer and the second person ever added to my feed list. When the table at lunch started talking about IoC containers, he said that he's never seen the need for one. I found this to be very surprising. Once a couple of us explained why we like and use tools like StructureMap and Guice, he quickly grokked what was going on, and I think he could see why some find them to be useful tools to have. However, it also became clear that he had found other ways around the pain of supplying dependencies when using Dependency Injection.

All of this is to say that I was reminded that I don't have to pull in every tool under the sun. It's good to keep my head up and take a look at what others are doing to alleviate problems. However, there are also ways around problems that don't involve pulling in yet another tool.

It's not the tools that make the software good. It's the developers that apply sound judgement and experience - regardless of what tools they have.

Saturday, October 6, 2007

altnetconf - Random Thoughts on Fishbowl

Some of the sessions are using the fishbowl format. This is a really fun format that gives just about anyone that wants it the opportunity to contribute content to the conversation. However, this did not work very well during the BDD discussion, most likely because code samples were being periodically presented on the projector. This was proof positive that if you turn on a video source, the entire audience will focus on it. In this case, chairs were set up for the fishbowl, but the fishbowl rules were not followed by the audience. Because the group was focused on the screen rather than the chairs and the people in them, the conversation was not controlled.

Either present video or do the fishbowl; it seems you cannot do both at the same time.

altnetconf - Behavior Driven Design

This post is separated into summary and thoughts.

Summary


This thing called "Behavior Driven Design" (BDD) doesn't seem to have a formalized definition. Not everyone in attendance could agree on what "it" is. However, the most clear definition presented involves several components. Scott Bellware asserted that BDD = UserStories + TDD + UbiquitousLanguage + Solubility.

Some constraints on the above components should be explained. User stories must be well-formed, so the idea is that, during a conversation, the business analyst and the developer would work together to produce a story of the form "As a... I want to... So that..." The assumption is made that the TDD being used is driving design at a granular level with only one assertion per test (a sketch of this follows). To extend Scott's definition of Solubility linked to above, another term, "grokability," was proposed by Scott Hanselman.
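As a concrete illustration of the one-assertion-per-test constraint, here is what a behavior-named test might look like in plain JUnit 4. This is my own hypothetical sketch - not a BDD tool, just the naming and granularity discipline applied to an ordinary test:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Story (invented): As an account holder, I want to withdraw cash,
// so that I can get money when the bank is closed.
public class WithdrawCashBehavior {

    static class Account {
        private int balance;
        Account(int balance) { this.balance = balance; }
        void withdraw(int amount) { balance -= amount; }
        int getBalance() { return balance; }
    }

    @Test
    public void balanceIsReducedByTheAmountWithdrawn() {
        Account account = new Account(100);
        account.withdraw(40);
        // One behavior, one assertion.
        assertEquals(60, account.getBalance());
    }
}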

The Behavior Driven Design (BDD) session began with many questions, asked many more, and perhaps answered a few things. Bellware started by saying that BDD is not about testing. He later stated that BDD provides better tests. So, to summarize, BDD is not about testing... except when it is. That makes it about as clear as mud.

Much of the discussion centered on improving the conversation. More specifically, the conversation between the developer and the business analyst was discussed.

One criticism presented is that BDD is using programming languages to discuss things with the business analyst. The concern was that we would be asking business analysts to write specifications including the markup. This bred a great deal of consternation.

This critique was perhaps defused later when the suggestion was made that the business analyst would sit down with the programmer, and the two would discuss the user story and the specifications. During that discussion, the developer would transcribe the conversation into the story markup used by the spec tool (RSpec, NBehave...). Another possibility presented was that the conversation could be initially captured onto a story card and then transcribed to the tool by the developer at a later time.

There was the assertion made that BDD provides or formalizes context for conversations with the business analyst and context for the developer to write the implementation.

Scott Hanselman says, "This is like NUnit that generates prose." There seems to be a bit of buy-in for this. Perhaps C# doesn't lend itself to the readability of the spec implementation. In effect, the business owner writes the spec through the ears of the developer sitting next to him.

Part of BDD is generating a specification document directly from the tests. This allows the tests and the information being passed back to the business analyst to stay in sync better. This is NOT to replace verbal communication, but again to provide a talking point to augment the conversation.

Thoughts


It seemed as though the thoughts around BDD have application in many parts of the development life-cycle. BDD practices are useful for clarifying the discussions that should be taking place during the entire process. The initial definition of user stories is helped by these practices. During development, BDD practices help developers see the bigger picture. Finally, and importantly, the executable specifications serve as a note to the future self. In other words, when you come back to the code, it is easier to understand why the test was written in the first place.

Much of the confusion came about because these thoughts are useful in so many different places. It seemed that participants saw the potential to have certain pains alleviated, then latched onto that particular portion of usefulness. BDD is aspect-oriented. It is a cross-cutting concern and does not simply fit into a single portion of the development process. Wherever there is conversation in the development process, formalizing that conversation helps to remove ambiguity.

Perhaps this is more useful when you MUST generate documentation. Not everyone needs to have everything generated in paper form. Some business people simply feel better seeing the specifications generated as either a sheet of paper or a web site. Even in situations where this documentation must be generated, it is not set in stone.

I'm pretty convinced that we don't need a separate tool to do effective Behavior Driven Design. We need to get much better at language - both natural language and expressing natural language with programming languages. As if the natural language wasn't ambiguous enough, we are forced to complicate it by writing something that the parser can understand.

The conversation about this is still in progress. I'm sure that I don't know everything about this. I hope to continue looking in this area and find ways to write better software with less pain.

Friday, October 5, 2007

altnetconf - Alt.Net First Impressions

I'm not sure how much I will find myself around the computer this weekend, but I thought I would post my impressions of the ALT.Net conference as we go along. The comments during the opening remarks and fishbowl seemed to center around a few core concepts:


  • Though perceptions might be otherwise, nobody expressed total contempt or hatred of Microsoft.

  • Everyone wants to write better software.

  • Writing better software probably means using tools that weren't developed in Redmond.

  • The overriding measure of "better software" seems to be maintainability.

  • Drag-and-drop is still the whipping boy of this crowd.

  • Alt.Net IS alternative because most .Net developers seem to use whatever tools Microsoft puts out (and MSDN talks about). Conference attendees are much more likely to use tools beyond simply the set put out by Microsoft.


I'm looking forward to hearing more opinions and thoughts as we go along. I'm not sure how much will be decided. Many seem to want to come to a consensus, but everyone in attendance is so passionate that it may not happen. We'll see. At least we're trying to figure out what is going on and how we can deliver better software for all involved.

Wednesday, October 3, 2007

Don't Use Mocks...

...when you aren't really interested in testing the interactions. This is related to my previous post regarding mocks and stubs. If leaving off the verification of mocks wouldn't change the nature of what you are testing, then you aren't writing an interaction-based test. Therefore, the mock dependency probably creates more noise than a simple stub. If your dependency is not behind an interface, you can in most cases still create a stub using subclass and override, as sketched below.
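Here is a minimal sketch of the subclass-and-override approach for a concrete (non-interface) dependency. The class names are invented, and it only works when the method in question isn't final or private:

// A concrete dependency with no interface; the expensive call is the problem.
class TaxRateLookup {
    double rateFor(String state) {
        throw new UnsupportedOperationException("hits a remote service");
    }
}

// Subclass and override, replacing just the behavior the test cares about.
class TaxRateLookupStub extends TaxRateLookup {
    @Override
    double rateFor(String state) {
        return 0.0825; // canned value, no network
    }
}

// In the test: ThingToTest target = new ThingToTest(new TaxRateLookupStub());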

While writing:


// Assumes the usual static imports from org.easymock.EasyMock
Dependency dependency = createMock(Dependency.class);
expect(dependency.bar(anyObject())).andReturn(somethingValid);
replay(dependency);
ThingToTest target = new ThingToTest(dependency);


isn't too bad, it still causes more noise than:


ThingToTest target = new ThingToTest(new DependencyStub());


Most of the time, I see that stub and grok the fact that, in the context of this test, I really don't care what the class under test does with that dependency.

Saturday, September 29, 2007

Write better tests using a combination of mocks and stubs.

I love using EasyMock. It's just plain, well, easy to use, and most of the time it doesn't get in my way. However, there are certain times when I will avoid using a mock object in a test. This is best shown by example.

As background, remember that mocks are not stubs and stubs are not mocks. You can use a mocking framework (like easymock) to create stubs, and you can even create mocks by hand (with a good deal of pain). Roy Osherove has done a great job of describing the differences between mocks and stubs. You should check out Roy's explanation. This is a different explanation from what I've seen before. If you want a good, detailed description of the differences, Martin Fowler has provided an in-depth comparison between the two including code samples. You should take a look at Fowler's explanation to understand the terminology.

I won't rehash the explanations. What I will say is that I've found value in doing both interaction-based testing (with mocks) and state-based testing (with stubs). When your classes are well decomposed and decoupled, mock testing makes more sense in most cases. However, when I'm interested in how a particular piece of code affects state on objects that pass through it, I will use a stub.

Here's the promised example. Suppose I have a class, we'll call it Processor, that interacts with a dependency that sits behind an interface called EmailService. I'm interested in testing that Processor.emailManagers("we have a problem") creates an instance of Email with the correct properties set and hands it off to the EmailService. Looking at the test first, we have:

@Test
public void myTest() {
    EmailServiceStub serviceStub = new EmailServiceStub();
    Processor processor = new Processor(serviceStub);
    processor.emailManagers("This should be working");

    Email email = serviceStub.getLastSentEmail();
    assertNotNull(email);
    assertEquals("This should be working", email.getBody());
    assertEquals("managers@FooInc.com", email.getTo());
    assertEquals("An Automated Message from 'The System'", email.getSubject());
    assertEquals("TheSystem@FooInc.com", email.getFrom());
}

Obviously, we'll need something to stand in place of the EmailService so that we don't actually send an email in the course of running this unit test. The stand-in for this test is a stub like this:

public class EmailServiceStub implements EmailService {
    Email lastSentEmail = null;

    public void send(Email email) {
        this.lastSentEmail = email;
    }

    public Email getLastSentEmail() { return this.lastSentEmail; }
}


The actual implementation code for Processor then might look like this:

public class Processor {
    EmailService emailService;

    public Processor(EmailService emailService) { this.emailService = emailService; }

    public void emailManagers(String message) {
        Email email = new Email();
        email.setBody(message);
        email.setSubject("An Automated Message from 'The System'");
        email.setTo("managers@FooInc.com");
        email.setFrom("TheSystem@FooInc.com");

        emailService.send(email);
    }
}



To be fair, this kind of testing is possible with EasyMock as well. You CAN check the properties of the object that is passed to send() using EasyMock. However, I find it incredibly painful to do so. What's worse, it makes the test convoluted to read. EasyMock would have you define your own implementation of IArgumentMatcher as indicated here (search for "Defining own Argument Matchers"). Blech! We've done this before, and it seems to make the tests much more difficult to grok when I come back to them. A sketch of what that involves follows.
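For reference, here is roughly the ceremony the custom-matcher route requires. This is a sketch against EasyMock's IArgumentMatcher interface; the eqEmail() factory name and the fields compared are my own illustrative choices:

import org.easymock.EasyMock;
import org.easymock.IArgumentMatcher;

public class EmailEquals implements IArgumentMatcher {
    private final Email expected;

    public EmailEquals(Email expected) { this.expected = expected; }

    public boolean matches(Object actual) {
        if (!(actual instanceof Email)) {
            return false;
        }
        Email email = (Email) actual;
        return expected.getBody().equals(email.getBody())
                && expected.getTo().equals(email.getTo());
    }

    public void appendTo(StringBuffer buffer) {
        buffer.append("eqEmail(to=").append(expected.getTo()).append(")");
    }

    // The static factory registers the matcher and returns a placeholder.
    public static Email eqEmail(Email expected) {
        EasyMock.reportMatcher(new EmailEquals(expected));
        return null;
    }
}

// Record phase: emailService.send(eqEmail(expectedEmail));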

There are plenty of good examples of when you would want to use mocks over stubs. The Fowler post linked to at the top is a good start. I still use mocks for most of my testing, but in this instance, it's easier to use a stub than battle EasyMock.

Wednesday, September 26, 2007

Convention Over Configuration is Nice... Except When It Isn't

Convention over configuration should lead to predictability. This breaks down when the default behavior of a system is unexpected. Take Guice for example. Guice will automatically bind to a class if the only constructor is a zero-argument constructor. This will happen even if the constructor is private.
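A minimal sketch of the surprise, assuming Guice 1.0's behavior as described above; HiddenService is a made-up class:

import com.google.inject.Guice;
import com.google.inject.Injector;

public class PrivateConstructorDemo {

    // No module binds this class, and its only constructor is private.
    static class HiddenService {
        private HiddenService() {}
    }

    public static void main(String[] args) {
        Injector injector = Guice.createInjector(); // no modules, no bindings
        // Rather than failing fast, Guice constructs the instance through
        // the private zero-argument constructor.
        HiddenService service = injector.getInstance(HiddenService.class);
        System.out.println(service);
    }
}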

This is not the path of least surprise when I forget to configure the behavior. I don't mind there being default behavior; I would just like the opportunity to turn it off.

Private should mean private at least.

Tuesday, September 11, 2007

Warning: Frustrating Toolsets Ahead

The opinions expressed here are in the context of working on relatively short-term projects against a legacy code base. If this is not your situation, adjust or ignore as appropriate.

I love tools. I'm a huge believer in using the right tool for the job. Perhaps this is because I can use that crutch as a good excuse to hit the local big orange box when I start a new project. Regardless, I've found that tools that some have labeled "frustrating" have been downright helpful and productive in our projects.

Last week, Scott pointed out that perhaps user stories don't make good test artifacts. I'm inclined to agree. Jeremy has also reminded us of the value of testing small before testing big here. While I see the value of this, I don't agree that the small (unit) tests all need to be written before the larger (integration) tests.

To provide a bit of background, my team largely works on writing Java applications that import data from a variety of sources into the existing system at DrillingInfo. To be clear, we don't work much with the web front-end. Our main job is bringing new data sources on-line and making the data make sense within our existing domain model. Therefore, the internal customers are only tangentially concerned with the appearance of our final product. They are really more concerned that data makes it into the system in a way that fits the existing system. They might even express their desires in terms of how new data should appear on the web site. Those requirements are then (roughly) translated into story cards and the story cards are reviewed by the team and the customer.

With that background, I say that the story cards take a relatively minor role in driving day-to-day development. At the start of any iteration, the story cards are used to spin out tasks. Some of the tasks are features (the process should convert incoming data with a value of 'x' to the value of 'y'), or they could be technical (setup the demo server). Then, the team estimates complexity for the individual tasks. During an iteration, we look at the task cards -- not the story cards.

On a day-to-day basis, we tend to use Fitnesse to provide direction for the unit tests that need to be written. Our goal when looking at a new feature is to have a pair sit down and sketch out a Fitnesse test. Once that has been done, we'll sit down with the customer and make sure that the behavior outlined in the Fitnesse test is what they expect. Then, we write the fixture code to clear all of the Method Not Found exceptions that the Fitnesse test initially causes. At this point we have a nice, red, failing test for a feature (a sketch of such a fixture follows).
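For the curious, here is roughly what one of these might look like for the "convert incoming 'x' to 'y'" style of feature mentioned above. The table and fixture are invented for illustration, using Fit's ColumnFixture convention (inputs are public fields; outputs are public methods, marked with a trailing '?' in the table):

|DataConversionFixture        |
|incomingValue|convertedValue?|
|x            |y              |

import fit.ColumnFixture;

public class DataConversionFixture extends ColumnFixture {
    public String incomingValue; // bound to the input column

    public String convertedValue() { // bound to the "convertedValue?" column
        // Stand-in for the real import logic under test.
        return "x".equals(incomingValue) ? "y" : incomingValue;
    }
}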

After getting the large test in place (and failing), we start looking at where the system needs to be extended, perhaps do a little whiteboarding, and start writing unit tests. The failing Fitnesse test helps keep us on target. Quite often, one of the developers in the pair will have to say, "now, what Fit test are we trying to make pass?" Just like with unit testing, we sometimes find other behaviors that need to be specified. Usually one half of the pair will pull off and capture enough information on a task card so that we don't forget it.

The shortcomings of Fitnesse have been well articulated (here and here and here). I'll say that the one I find to be most frustrating is lack of refactoring support. When I change the wording in my tests, I have to go change method names. But, since our projects are relatively small (1 - 2 months), we don't end up with so many Fitnesse tests that this is a large problem.

Because we use the Fitnesse tests to drive discussions with our customers, nobody has ever come back and said, "well, the tests that you've got in place are great, but the system isn't really doing what I want." While looking at Fitnesse, the customer does not forget what they wanted, and they often remember things they forgot to ask for initially. The story cards are not useless, they just don't get used much once the team gets going.

All in all, I'd say that we hit a pretty good flow of writing Fit tests, having conversations, writing unit tests, and making everything green. It's certainly possible that on a longer-term project with more features, we would find ourselves frustrated by the tools that we currently find to be productive.

Monday, August 13, 2007

Using Groovy to grep XML

After attending some compelling presentations by Scott Davis at No Fluff Just Stuff, I have been playing with Groovy here and there when I've gotten the chance. At work, we've been working with some software that's currently producing a pretty massive log file. We tried using Chainsaw to slice and dice it, but it wasn't giving us the functionality that we wanted. So, this was a perfect time to play with some Groovy.

Our input looks something like this:

<root>
<entry level="ERROR">
<message>
An error has occurred while parsing column FOO with value of BAR 23 in row 234
</message>
</entry>
<entry level="ERROR">
<message>
An error has occurred while parsing column FOO with value of BAR 52 in row 234
</message>
</entry>
<entry level="ERROR">
<message>
An error has occurred while parsing column FOO with value of FOOBAR 34 in row 234
</message>
</entry>
<entry level="ERROR">
<message>
An error has occurred while parsing column FOO with value of FOO52 in row 234
</message>
</entry>
</root>


Of course, the file is too massive to read through. For this particular error, we were interested in the unique values of FOO that we weren't handling. Here is the Groovy to pop open the XML file and find the unique values:


def findUniqueEntries(String inputPath, String search, String extract) {
    def uniqueMatches = new HashMap()

    // Input all of the XML
    def root = new XmlParser().parse(new File(inputPath))

    // All the child nodes of the root node will be elements
    for (entry in root.children()) {

        // Assumes each entry has exactly one message child
        def text = entry.message[0].text()

        // Do a substring search first
        if (text.contains(search)) {

            // Strip out the failing value with a regular expression
            def matcher = text =~ extract
            def uniqueValue = matcher[0][1]

            // For values we've seen before, increment the count
            if (uniqueMatches[uniqueValue] != null) {
                uniqueMatches[uniqueValue] += 1
            }

            // For a new value, initialize the count
            else {
                uniqueMatches[uniqueValue] = 1
            }
        }
    }

    // Print the values along with their occurrence count
    for (match in uniqueMatches) {
        println match
    }

    // Print the number of unique matches and the number of total matches
    def uniqueMatchCount = uniqueMatches.size()
    def totalMatchCount = uniqueMatches.values().sum()
    println('\nFound ' + uniqueMatchCount + ' unique matches in '
            + totalMatchCount + ' total matches.\n')
}


There are a few interesting things that made this really fun code to write:

1. Navigating XML with Groovy is easy, and the syntax reads quite well. The code communicates the structure of the XML document as well as could be expected, I think.

2. Groovy makes it really easy to work with regular expressions. No more pattern compiling.

3. The Groovy sum() extension means that we don't need to track the total number of matches, nor do we need to iterate through the HashMap at the end.

All in all, I'm enjoying playing with Groovy at the moment. I've found the barrier to entry to be pretty low. It will be interesting to see how we continue to use Groovy in the future.

Throw Away Code Must Be Thrown Away

From time to time, it’s advantageous to take off my TDD hat and fling a small bit of code. I’ve found this is the quickest way to gain a bit of confidence in working with libraries that I haven’t touched before. This allows me to be sure that I know how to interface with the functionality that I need. After a quick proof-of-concept, I’ll know what parameters and classes are needed to get what I want.

My own personal problem with this has been developing the discipline to remove the code from the system once I’ve figured out what I’m after. Why do I think I need to discipline myself to do this? Simply put, my worst code is always the code that wasn’t written “test-first.” The spikes that I pull over into production code often cause problems when trying to get them under test coverage.

Conversely, I’ve found that following Test Driven Development yields code that is easier to test and frankly, better designed. TDD prevents a great deal of speculative development and over design. Therefore, it’s much better to step back from the initial spike and start over by writing tests.

I have found it much easier to prevent the spikes from getting attached to the project by putting them in a completely separate class. Name it “class ThrowAway” to help yourself remember.
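Something like this - a deliberately disposable spike, with nothing in it worth keeping:

// The name alone says this must not ship. Everything here is exploratory;
// delete the whole class once the library's behavior is understood and
// real, test-first code begins.
public class ThrowAway {
    public static void main(String[] args) throws Exception {
        // Poke at the unfamiliar library here: construct its objects,
        // call its methods, print what comes back.
    }
}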

Bottom line: Throw-away code must be thrown away.