
Test Flexibly With AspectJ And Mock Objects

Enhance unit testing with test-only behavior

Nicholas Lesiecki, Principal software engineer, eBlox, Inc.

01 May 2002

Programmers who have incorporated unit testing into their development process know the advantages it brings: cleaner code, courage to refactor, and higher velocity. But even the most die-hard unit testers can falter when faced with testing a class that relies on system state for its behavior. Nicholas Lesiecki, a respected Java programmer and leader in the XP community, introduces the problems surrounding test-case isolation and shows us how to use mock objects and AspectJ to develop precise and robust unit tests.


The recent attention to Extreme Programming (XP) has spilled over onto one of its most portable practices: unit testing and test-first design. As software shops have adopted XP's practices, many developers have seen the increase in quality and speed that comes from having a comprehensive unit-test suite. But writing good unit tests takes time and effort. Because each unit cooperates with others, writing a unit test can involve a significant amount of setup code. This makes tests more expensive, and in certain cases (such as code that acts as a client to a remote system) such tests can be almost impossible to implement.

In XP, unit tests complement integration and acceptance tests. These latter two test types may be undertaken by a separate team or as a separate activity, but unit tests are written simultaneously with the code to be tested. When you face the pressure of an impending deadline and a headache-inducing unit test, it is tempting to write a haphazard test or to skip the test altogether. Because XP relies on positive motivation and self-sustaining practices, it's in the best interest of the XP process (and the project!) to keep tests focused and easy to write.

Tip: Required background
This article focuses on unit testing with AspectJ, so the article assumes you are familiar with basic unit testing techniques. If you aren't familiar with AspectJ, it would probably help to read my introduction to AspectJ before you go any further (see Resources). The AspectJ techniques presented here aren't very complex, but aspect-oriented programming requires a little getting used to. In order to run the examples, you will need to have Ant installed on your test machine. You do not, however, need any special Ant expertise (beyond what's required for a basic install) to work with the examples. For more information or to download Ant, see the Resources section.

Mock objects can help to solve this dilemma. Mock object tests replace domain dependencies with mock implementations used only for testing. This strategy does, however, present a technical challenge in certain situations, such as unit testing on remote systems. AspectJ, an aspect-oriented extension to the Java language, can take unit testing the rest of the way by allowing us to substitute test-only behavior in areas where traditional object-oriented techniques would fail.

In this article we'll examine a common situation where writing unit tests is both difficult and desirable. We'll start by running a unit test for the client component of an EJB-based application. We'll use the example as a springboard to discuss some of the problems that can arise with unit testing on remote client objects. To solve these problems, we'll develop two new test configurations that rely on AspectJ and mock objects. By the end of the article you should have an appreciation of common unit-testing problems and their solutions, as well as a window into some of the interesting possibilities afforded by AspectJ and mock object testing.

In order to follow the code samples we'll be working with throughout this article, you might want to install the example application now.

A unit testing example

The example consists of a test for an EJB client. Many of the issues raised in this case study could also be applied to code that calls Web services, JDBC, or even a "remote" part of the local application through a facade.

The server-side CustomerManager EJB performs two functions: it looks up the names of customers and registers new customer names with the remote system. Listing 1 shows the interface that CustomerManager exposes to its clients:

Listing 1. CustomerManager's remote interface

public interface CustomerManager extends EJBObject {

  /**
   * Returns a String[] representing the names of customers in the system
   * over a certain age.
   */
  public String[] getCustomersOver(int ageInYears) throws RemoteException;

  /**
   * Registers a new customer with the system. If the customer already 
   * exists within the system, this method throws a NameExistsException.
   */
  public void register(String name) 
    throws RemoteException, NameExistsException;
}

The client code, called ClientBean, essentially exposes the same methods, delegating their implementation to the CustomerManager, as shown in Listing 2.

Listing 2. The EJB client code

public class ClientBean {
  private Context initialContext;
  private CustomerManager manager;

  /**
   * Includes standard code for referencing an EJB.
   */
  public ClientBean() throws Exception{
    initialContext = new InitialContext();
    Object obj = initialContext.lookup("java:comp/env/ejb/CustomerManager");
    CustomerManagerHome managerHome = (CustomerManagerHome)obj;

    /*Resin uses Burlap instead of RMI-IIOP as its default
     *  network protocol so the usual RMI cast is omitted.
     *  Mock Objects survive the cast just fine.
     */
    manager = managerHome.create();
  }

  public String[] getCustomers(int ageInYears) throws Exception{
    return manager.getCustomersOver(ageInYears);
  }

  public boolean register(String name) {
    try{
      manager.register(name);
      return true;
    }
    catch(Exception e){
      return false;
    }
  }
}

I've kept this unit deliberately simple so that we can focus on the test. The ClientBean's interface differs only slightly from the CustomerManager's. Unlike CustomerManager's register() method, ClientBean's returns a boolean and does not throw an exception if the customer already exists. These are the behaviors that a good unit test should verify.

The code shown in Listing 3 implements the test for ClientBean with JUnit. There are three test methods: one for getCustomers() and two for register() (one for success and one for failure). The test presumes that getCustomers() will return a 55-item list, and that register() will return false for EXISTING_CUSTOMER and true for NEW_CUSTOMER.

Listing 3. The unit test for ClientBean

// [...standard JUnit methods omitted...]

public static final String NEW_CUSTOMER = "Bob Smith";
public static final String EXISTING_CUSTOMER = "Philomela Deville";
public static final int MAGIC_AGE = 35;

public void testGetCustomers() throws Exception {
  ClientBean client = new ClientBean();
  String[] results = client.getCustomers(MAGIC_AGE);
  assertEquals("Wrong number of client names returned.", 55, results.length);
}

public void testRegisterNewCustomer() throws Exception{
  ClientBean client = new ClientBean();
  //register a customer that does not already exist
  boolean couldRegister = client.register(NEW_CUSTOMER);
  assertTrue("Was not able to register " + NEW_CUSTOMER, couldRegister);
}

public void testRegisterExistingCustomer() throws Exception{
  ClientBean client = new ClientBean();

  // register a customer that DOES exist
  boolean couldNotRegister = ! client.register(EXISTING_CUSTOMER);
  String failureMessage = "Was able to register an existing customer ("
                          + EXISTING_CUSTOMER + "). This should not be "
                          + "possible.";
  assertTrue(failureMessage, couldNotRegister);
}

If the client returns the expected result, the tests will pass. While this test is very simple, you can easily imagine how the same procedure would apply to a more complex client, such as a servlet that generates output based on calls to the EJB component.

If you've already installed the sample application, try running this test a few times with the command ant basic in the example directory.

Tip: Higher-level testing
This article focuses on unit testing; however, integration and functional tests are just as important to rapid development and high quality. In fact, the two types of test complement each other: higher-level tests validate the system's end-to-end integrity, while low-level unit tests validate the individual components. Both are useful in different situations. For instance, a functional test might pass while a unit test ferrets out a bug that occurs only in rare circumstances. Or vice versa: the unit tests could pass while the functional tests reveal that the individual components aren't wired together correctly. With functional tests it can make more sense to do data-dependent testing, as the goal is to verify the aggregate behavior of the system.

Problems with data-dependent testing

After you've run the above test a few times you will notice inconsistent results: sometimes the tests pass, sometimes they don't. This inconsistency is due to the EJB component's implementation -- not the client's. The EJB component in the example simulates an uncertain system state. Inconsistencies in test data pose a real problem when it comes to implementing simple, data-centric testing. Another big problem is the tendency to duplicate test efforts. We'll address both of these issues here.

Data management

The easy way to overcome uncertainties in the data is to manage the state of the data. If we could somehow guarantee that there were 55 customer records in the system before we ran our unit test, we could be sure that any failures in our getCustomers() test would indicate flaws in our code, rather than data issues. But managing the state of the data introduces its own set of problems. Before each test runs, you have to ensure that the system is in the correct state for that particular test. If you're not vigilant, the results of one test can change the system's state in such a way that the next test will fail.

To cope with this burden you can use shared setup classes or a batch-input process. But both these approaches represent a significant investment in infrastructure. If your application persists its state to some type of storage, you may be in for further problems. Adding data to the storage system could be complicated, and frequent insertions and deletions could slow test execution.
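To make that infrastructure concrete, here is a minimal sketch of such a shared fixture in JUnit. CustomerStore is a hypothetical facade over the persistence layer -- it is not part of the example application -- standing in for whatever data-access API you use; note that every test run pays the cost of these insertions and deletions:

import junit.framework.TestCase;

public class ManagedStateClientTest extends TestCase {

  protected void setUp() throws Exception {
    // Put the system into exactly the state the tests assume:
    // 55 customers older than MAGIC_AGE, none already registered.
    CustomerStore.deleteAll();
    for (int i = 0; i < 55; i++) {
      CustomerStore.insert("Customer Number " + i, 40);
    }
  }

  protected void tearDown() throws Exception {
    // Clean up so this test's data cannot leak into the next test.
    CustomerStore.deleteAll();
  }

  // ...test methods as in Listing 3...
}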

Worse than encountering problems with state management is encountering a situation where such management is downright impossible. You may find yourself in this situation when testing client code for a third-party service. Read-only services might not expose any way to change the system state, or you may be barred from inserting test data for business reasons. For instance, it's probably a bad idea to send a test order to a live processing queue.

Duplicated effort

Even if you have complete control over the system state, state-based testing can still produce an unwanted duplication of test effort -- and you don't want to write the same test twice.

Let's take our test application as an example. If I control the CustomerManager EJB component, I presumably already have a test that verifies that it behaves correctly. My client code doesn't actually perform any of the logic involved in adding a new customer to the system; it simply delegates the operation to the CustomerManager. So, why should I retest the CustomerManager here?

If someone changes the implementation of CustomerManager so that it gives different responses to the same data, I would have to alter two tests in order to track the change. This smells of an overcoupling of tests. Fortunately, this duplication is unnecessary. If I can verify that ClientBean is communicating correctly with the CustomerManager, I have sufficient confirmation that ClientBean is working as it should. Mock object testing allows you to perform exactly this sort of verification.

Mock object testing

Mock objects keep unit tests from testing too much. A mock object test replaces real collaborators with mock implementations, and those mock implementations make it easy to verify that the tested class and its collaborators interact correctly. I'll demonstrate how this works with a simple example.

The code we're testing deletes a list of objects from a client-server data management system. Listing 4 shows the method we're testing:

Listing 4. The method to be tested

public interface Deletable {
  void delete();
}

public class Deleter {
  public static void delete(Collection deletables){
    for(Iterator it = deletables.iterator(); it.hasNext();){
      ((Deletable)it.next()).delete();
    }
  }
}

A naive unit test might create an actual Deletable and then verify that it disappeared after calling Deleter.delete(). To test the Deleter class using mock objects, however, we write a mock object that implements Deletable, as shown in Listing 5:

Listing 5. A mock object

public class MockDeletable implements Deletable{
  private boolean deleteCalled;

  public void delete(){
    deleteCalled = true;
  }

  public void verify(){
    if(!deleteCalled){
      throw new Error("Delete was not called.");
    }
  }
}

Next, we use the mock object in Deleter's unit test, as shown in Listing 6:

Listing 6. A test method that uses a mock object

public void testDelete() {
  MockDeletable mock1 = new MockDeletable();
  MockDeletable mock2 = new MockDeletable();

  ArrayList mocks = new ArrayList();
  mocks.add(mock1);
  mocks.add(mock2);

  Deleter.delete(mocks);

  mock1.verify();
  mock2.verify();
}

Upon execution, this test verifies that Deleter successfully called delete() on each of the objects in the collection. In this manner, mock object tests precisely control the surroundings of the tested class and verify that the unit interacts with them correctly.

The limitations of mock objects

Object-oriented programming limits the influence of mock object tests on the execution of the tested class. For instance, if we were testing a slightly different delete() method -- perhaps one that looked up a list of deletable objects before removing them -- our test could not supply mock objects so easily. The following method would be difficult to test using mock objects:

Listing 7. A method that would be hard to mock

public static void deleteAllObjectsMatching(String criteria){
  Collection deletables = fetchThemFromSomewhere(criteria);
  for(Iterator it = deletables.iterator(); it.hasNext();){
    ((Deletable)it.next()).delete();
  }
}

Proponents of the mock object testing method claim that a method such as the one above should be refactored in order to make it more "mock friendly." Such refactoring often leads to a cleaner, more flexible design. In a well-designed system, each unit interacts with its context through well-defined interfaces that support a variety of implementations, including mock implementations.
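For instance, one mock-friendly refactoring of Listing 7 moves the lookup behind an interface that the test can implement. This is only a sketch -- the DeletableFetcher interface and the new method signature are illustrative, not part of the example application:

import java.util.Collection;
import java.util.Iterator;

public interface DeletableFetcher {
  Collection fetchMatching(String criteria);
}

public class Deleter {
  // The caller (or the test) now decides where the deletables come from,
  // so a test can pass in a fetcher that returns MockDeletable instances.
  public static void deleteAllObjectsMatching(String criteria,
                                              DeletableFetcher fetcher) {
    Collection deletables = fetcher.fetchMatching(criteria);
    for (Iterator it = deletables.iterator(); it.hasNext();) {
      ((Deletable) it.next()).delete();
    }
  }
}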

But even in well-designed systems, there are cases where a test cannot easily influence context. This occurs whenever code calls on some globally accessible resource. For instance, calls to static methods are difficult to verify or replace, as is object instantiation using the new operator.

Mock objects can't help with global resources because mock-object testing relies on the manual replacement of domain classes with test classes that share a common interface. Because static method calls (and other types of global resource access) cannot be overridden, calls to them cannot be "redirected" the way instance methods can.

You can pass in any Deletable to the method in Listing 4; however, short of loading a different class in place of the real thing, you cannot replace a static method call with a mock method call using the Java language.
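To see why, consider a hypothetical variation (not from the example application) in which Deleter audits each deletion through a static call:

import java.util.Collection;
import java.util.Iterator;

public class Auditor {
  // Imagine this writes to a globally shared audit log.
  public static void audit(String event) { /* ... */ }
}

public class Deleter {
  public static void delete(Collection deletables) {
    for (Iterator it = deletables.iterator(); it.hasNext();) {
      Deletable d = (Deletable) it.next();
      d.delete();
      // A static call: there is no instance to swap out, so no mock
      // passed into delete() can intercept or verify this line.
      Auditor.audit("deleted " + d);
    }
  }
}

Short of a refactoring that hides the Auditor behind an interface, an ordinary mock object test can neither verify nor suppress the audit call -- exactly the gap AspectJ fills later in this article.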

A refactoring example

Often some refactoring can steer your application code toward an elegant solution that's also easily testable -- but this isn't always the case. Refactoring to enable testing does not make sense if the resulting code is harder to maintain or understand.

EJB code can be particularly tricky to refactor into a state that allows easy mock testing. For instance, one type of mock-friendly refactoring would change the following sort of code:

// in EJBNumber1
public void doSomething(){
  EJBNumber2 collaborator = lookupEJBNumber2();
  // do something with collaborator
}

into this sort:

public void doSomething(EJBNumber2 collaborator){
  // do something with collaborator
}

In a standard object-oriented system, this refactoring increases flexibility by allowing callers to provide collaborators to a given unit. But such refactoring could be undesirable in an EJB-based system. For performance reasons, remote EJB clients should make as few remote calls as possible. The second approach requires that a client first look up and then create an instance of EJBNumber2, a process that involves several remote operations.

In addition, well-designed EJB systems tend toward a "layered" approach, where the client layer does not necessarily know about implementation details such as the existence of EJBNumber2. The preferred means of getting an EJB instance is to look up a factory (the Home interface) from a JNDI context, and then call a creation method on the factory. This strategy gives EJB applications much of the flexibility intended by the refactored code sample. Because application deployers can swap in a completely different implementation of EJBNumber2 at deployment time, the behavior of the system can be easily adjusted. JNDI bindings, however, cannot be easily changed at run time. Therefore, mock object testers are faced with the choice of redeploying in order to swap in a mock for EJBNumber2 or abandoning the entire testing model.

Fortunately, AspectJ offers a workaround.

AspectJ adds flexibility

AspectJ can provide context-sensitive behavior modification on a per-test-case basis -- even in situations that would usually prohibit the use of mock objects. AspectJ's join-point model allows a module, called an aspect, to identify points in a program's execution (such as looking up an object from a JNDI context) and to define code that executes at those points (such as returning a mock object instead of proceeding with the lookup).

Aspects identify points in a program's control flow through pointcuts. A pointcut picks out a set of points in a program's execution (called joinpoints in AspectJ parlance) and allows an aspect to define code that runs relative to those joinpoints. With a simple pointcut, we can select all JNDI lookups whose parameters match a certain signature. But whatever we do, we must ensure that our test aspects affect only lookups that occur in the test code. To do this we can use the cflow() pointcut, which picks out all points in a program's execution that occur within the context of another joinpoint.

The following code fragment shows how our example application could be modified to use a cflow-based pointcut.

pointcut inTest() : execution(public void ClientBeanTest.test*());

/*then, later*/ cflow(inTest()) && //other conditions

These lines define the test context. The first line gives the name inTest() to the set of all method executions in the ClientBeanTest class that return nothing, have public access, and begin with the word test. The expression cflow(inTest()) picks out all joinpoints that occur between the beginning of such a method execution and its return. So, cflow(inTest()) means "while test methods in ClientBeanTest execute."

The sample application's test suite can be built in two different configurations, each using different aspects. The first configuration replaces the real CustomerManager with a mock object. The second configuration does not replace the objects but selectively replaces calls made to the EJB component by the ClientBean. In both cases the aspects run the show, ensuring that the client receives predictable results from the CustomerManager. By checking these results, ClientBeanTest can ensure that the client uses the EJB component correctly.

Using an aspect to replace EJB lookups

The first configuration, shown in Listing 8, applies an aspect called ObjectReplacement to the example application. It operates by replacing the result of any call to the Context.lookup(String) method that occurs within a test.

This approach allows the test case to run in an environment where the JNDI configuration expected by the ClientBean is not readily available, that is, from the command line or a simple Ant environment. Your test cases can execute before your EJBs are deployed (or even before they are written). If you depend on a remote service outside of your control, your unit tests can operate regardless of whether it would be acceptable to use the actual service in a testing context.

Listing 8. The ObjectReplacement aspect

import javax.naming.Context;

public aspect ObjectReplacement{
  /**
   * Defines a set of test methods.
   */
  pointcut inTest() : execution(public void ClientBeanTest.test*());

  /**
   * Selects calls to Context.lookup occurring within test methods.
   */
  pointcut jndiLookup(String name) :
    cflow(inTest()) &&
    call(Object Context.lookup(String)) &&
    args(name);

  /**
   * This advice executes *instead of* Context.lookup
   */
  Object around(String name) : jndiLookup(name){
    if("java:comp/env/ejb/CustomerManager".equals(name)){
      return new MockCustomerManagerHome();
    }
    else{
      throw new Error("ClientBean should not look up any EJBs except CustomerManager");
    }
  }
}

The pointcut jndiLookup uses the pointcuts discussed earlier to identify relevant calls to Context.lookup(). After we define the jndiLookup pointcut, we can define code that executes instead of the lookup.
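Note that in Listing 8 the lookup is never allowed to proceed: within a test, ClientBean either receives the mock home or triggers an error. If you instead wanted to intercept only the CustomerManager lookup and let any other lookup fall through to the real JNDI context, the replacement code could call proceed(), which executes the original operation. Here's a sketch of that variation (not used by the example application):

  Object around(String name) : jndiLookup(name){
    if("java:comp/env/ejb/CustomerManager".equals(name)){
      return new MockCustomerManagerHome();
    }
    // proceed() runs the intercepted operation -- the real
    // Context.lookup(String) call -- with its original argument.
    return proceed(name);
  }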

About "advice"

AspectJ uses the term advice to describe code that executes at a joinpoint. The ObjectReplacement aspect employs one piece of around advice, shown in Listing 8 above. The advice essentially says, "when you encounter a JNDI lookup, return a mock object instead of proceeding with the method call." Once the mock object has been returned to the client, the aspect's work is done, and the mock objects take over. The MockCustomerManagerHome (standing in for the real home object) simply returns a mocked version of the customer manager from any call to its create() method. Because the mock must implement the home interface in order to legally enter the program at the correct point, it also implements all of the methods of CustomerManagerHome's superinterface, EJBHome, as shown in Listing 9.

Listing 9. MockCustomerManagerHome

public class MockCustomerManagerHome implements CustomerManagerHome{
  public CustomerManager create() throws RemoteException, CreateException {
    return new MockCustomerManager();
  }

  public javax.ejb.EJBMetaData getEJBMetaData() throws RemoteException {
    throw new Error("Mock. Not implemented.");
  }

  // [...other superinterface methods stubbed likewise...]
}

The MockCustomerManager is straightforward. It also defines stub methods for superinterface operations, and provides simple implementations of the methods that ClientBean uses, as shown in Listing 10.

Listing 10. Mocked methods on MockCustomerManager

public void register(String name) throws NameExistsException {
  if( ! name.equals(ClientBeanTest.NEW_CUSTOMER)){
    throw new NameExistsException(name + " already exists!");
  }
}

public String[] getCustomersOver(int years) {
  String[] customers = new String[55];
  for(int i = 0; i < customers.length; i++){
    customers[i] = "Customer Number " + i;
  }
  return customers;
}

As far as mocks go, this one ranks with the unsophisticated. Mature mock objects provide hooks that allow tests to easily customize their behavior. For the purposes of the example, however, I've kept the mock's implementation as simple as possible.
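As a sketch of what those hooks might look like -- the method names here are invented, not part of the example application -- a configurable mock could let each test state the canned response up front and then verify the interaction afterward:

public class ConfigurableMockCustomerManager extends MockCustomerManager {
  private String[] customersToReturn = new String[0];
  private boolean getCustomersOverCalled;

  // Hook: the test decides what the "server" will answer.
  public void setCustomersToReturn(String[] customers) {
    customersToReturn = customers;
  }

  public String[] getCustomersOver(int years) {
    getCustomersOverCalled = true;
    return customersToReturn;
  }

  // Hook: the test asserts afterward that the call actually happened.
  public void verify() {
    if (!getCustomersOverCalled) {
      throw new Error("getCustomersOver() was never called.");
    }
  }
}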

Using an aspect to replace calls to an EJB component

Skipping the EJB deployment phase may ease your development somewhat, but there are advantages to testing your code in an environment that replicates its final destination as closely as possible. Fully integrating your application and running the tests against the deployed application (replacing only those bits of context that are absolutely critical to the test) can flush out configuration issues early. This is the philosophy behind Cactus, an open source, server-side testing framework (see "Plugging in Cactus").

One configuration of the example application uses Cactus to execute its tests in the application server. This allows the test to verify that the CustomerManager EJB has been correctly configured and can be accessed by other components within the container. AspectJ can also complement this style of semi-integrated testing by focusing its replacement capabilities on the behavior that the test needs, leaving the rest of the component undisturbed.

The CallReplacement aspect begins with an identical definition of the testing context. It goes on to specify pointcuts corresponding to the getCustomersOver() and register() methods, as shown in Listing 11:

Listing 11. Selecting test calls to the CustomerManager

public aspect CallReplacement{
  pointcut inTest() : execution(public void ClientBeanTest.test*());

  pointcut callToRegister(String name) :
    cflow(inTest()) &&
    call(void CustomerManager.register(String)) &&
    args(name);

  pointcut callToGetCustomersOver() :
    cflow(inTest()) &&
    call(String[] CustomerManager.getCustomersOver(int));

// [...]

The aspect then defines around advice on each of the relevant method calls. When a call to getCustomersOver() or register() occurs within ClientBeanTest, the relevant advice will execute instead, as shown in Listing 12:

Listing 12. Advice replaces method calls within the test

void around(String name) throws NameExistsException: callToRegister(name) {
  if(!name.equals(ClientBeanTest.NEW_CUSTOMER)){
    throw new NameExistsException(name + " already exists!");
  }
}

Object around() : callToGetCustomersOver() {
  String[] customers = new String[55];
  for(int i = 0; i < customers.length; i++){
    customers[i] = "Customer Number " + i;
  }
  return customers;
}

This second configuration simplifies our test code somewhat (note that we do not need separate mock classes or stubs for unimplemented methods).

Pluggable test configuration

AspectJ allows you to switch between these two configurations at the drop of a hat. Because aspects can affect classes that have no knowledge of them, specifying a different set of aspects at compile time can result in a system that behaves completely differently at run time. The sample application takes advantage of this. The two Ant targets that build call- and object-replacement versions of the example are almost identical, as shown below:

Listing 13. Ant targets for different configurations

<target name="objectReplacement" description="...">
  <antcall target="compileAndRunTests">
    <param name="argfile" value="${src}/ajtest/objectReplacement.lst"/>
  </antcall>
</target>

[contents of objectReplacement.lst]
@base.lst   [a reference to the files included in both configurations]
MockCustomerManagerHome.java
MockCustomerManager.java
ObjectReplacement.java

<target name="callReplacement" description="...">
  <antcall target="deployAndRunTests">
    <param name="argfile" value="${src}/ajtest/callReplacement.lst"/>
  </antcall>
</target>

[contents of callReplacement.lst]
@base.lst
CallReplacement.java
RunOnServer.java

The Ant script passes the argfile property to the AspectJ compiler, which uses the file to determine which sources (both Java classes and aspects) to include in the build. By changing the argfile from objectReplacement to callReplacement, the build can change testing strategies with a simple recompilation.
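For reference, the compileAndRunTests target invoked above might look roughly like the sketch below. It assumes the Ant task that ships with AspectJ; the task's class name and attributes have varied across AspectJ releases, so treat the details as illustrative rather than exact:

<target name="compileAndRunTests">
  <!-- Register the compiler task from the AspectJ tools jar. -->
  <taskdef name="ajc" classname="org.aspectj.tools.ant.taskdefs.Ajc"
           classpath="${aspectj.home}/lib/aspectjtools.jar"/>

  <!-- The argfile passed in by the calling target determines which
       aspects (and therefore which test configuration) get woven in. -->
  <ajc argfiles="${argfile}" destdir="${build.classes}"/>

  <!-- ...then run the JUnit suite against ${build.classes}... -->
</target>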

Tip: Plugging in Cactus
The example application comes bundled with Cactus, which it uses to execute tests within the application server. In order to use Cactus, your test class must extend org.apache.cactus.ServletTestCase (instead of the usual junit.framework.TestCase). This base class automatically redirects test execution to the tests deployed on your application server. Because the "callReplacement" version of the tests requires the server but the "objectReplacement" version does not, I use another feature of AspectJ (called introduction) to make the test class server-aware. The source version of ClientBeanTest extends TestCase. If I want the tests to run on the server side, I add the following aspect to my build configuration:
public aspect RunOnServer{
  declare parents : ClientBeanTest extends ServletTestCase;
}

By including this aspect, I declare that ClientBeanTest extends ServletTestCase instead of TestCase, transforming it from a regular test case into a Cactus test case. Neat, huh?

To learn more about Cactus, see the Resources section.

This compile-time plugging of aspects can be a tremendous boon in situations such as aspect-assisted testing. Ideally, you do not want any trace of test code deployed to production. With compile-time unplugging, even if testing aspects are intrusive or perform complex behavioral modifications, you can strip out your test harness at a moment's notice.

Conclusion

To keep the cost of test development low, unit tests must run in isolation. Mock object testing isolates each unit by providing mocked implementations of code on which the tested class depends. But object-oriented techniques cannot successfully replace collaborating code in situations where dependencies are retrieved from a globally accessible source. AspectJ's ability to crosscut the structure of the tested code allows you to cleanly replace code in these types of situations.

Although AspectJ does introduce a new programming model (aspect-oriented programming), the techniques in this article are easy to master. By using these strategies, you can write incremental unit tests that successfully verify components without needing to manage systemic data.

Resources

  • For an introduction to the potential as well as the uses of this powerful language extension, check out Nicholas Lesiecki's first article on AspectJ: "Improve modularity with aspect-oriented programming" (developerWorks, January 2002).
  • You can download AspectJ and its associated tools from http://www.eclipse.org/aspectj/. The site also hosts an FAQ, mailing lists, excellent documentation, and links to other resources on AOP.
  • JUnit.org is a great place to start if you're interested in this popular unit-testing framework, its many extensions, or articles and papers about unit testing in general.
  • Nicholas Lesiecki's book, Java Tools for Extreme Programming (co-authored with Rick Hightower), covers practical tools that help you implement XP practices such as unit testing and continuous integration. Almost all of the tools (JUnit, Cactus, Ant) used in this article are covered in this excellent reference.
  • If you're not using mock objects, you should at least use the ObjectMother pattern (PDF) to help you create and destroy test state. This introduction to Object Mother provides an alternative perspective to the one presented here.
  • The Cactus project enables easy testing of your server-side code. The Cactus team plans to provide a sample application illustrating how to integrate Cactus and AspectJ in the near future, so stay tuned.
  • The example application was built with Ant, freely available from Jakarta.
  • developerWorks has hosted a couple of excellent articles on getting started with Ant and JUnit: see Erik Hatcher's "Automating the build and test process" (August 2001) and Malcolm Davis's "Incremental development with Ant and JUnit" (November 2000).
  • For more information on the practices and methodologies of Extreme Programming, go to XProgramming.com.
  • If your testing needs go beyond unit testing into the realm of enterprise-level systems testing, see what the IBM Performance Management, Testing, and Scalability Services site has to offer. (This site includes a library on enterprise testing.)
  • You'll find hundreds of articles about every aspect of Java programming in the IBM developerWorks Java technology zone.

About the author

Nicholas Lesiecki entered the world of Java during the dot-com boom and has since risen to prominence in the XP and Java communities. Nicholas currently leads development of eBlox Inc.'s flagship online catalog system, storeBlox. As well as making frequent speaking appearances at the Tucson JUG, he maintains active committer status on Jakarta's Cactus project. Nick co-authored Java Tools for Extreme Programming, a how-to manual for leveraging open source build and testing tools in agile processes such as XP. Nick wishes to thank Ron Bodkin, Wes Isberg, and Vincent Massol for their help in thinking through this article. Contact Nick at [email protected].