# A Test Strategy for Enterprise Integration Points

*Posted by Jeff Xiong, David Yin and Pengfei Cui on Jul 01, 2013. Estimated reading time: 18 minutes.*


Integration is a topic that can't be ignored in enterprise applications, not only because integration with external systems is error prone, but also because it is hard to test. This article introduces a commonly applicable testing strategy for integration points, which improves the coverage, speed, reliability and reproducibility of testing, and can serve as a reference for implementing and testing integration-heavy applications.

## Background

The system we are using as an example in this article is a typical Java EE Web application, developed with Java 6 and Spring, and built with Maven. This system integrates with two external systems via XML over HTTP.

This application is delivered by a distributed team: the business representatives are located in Melbourne, while the delivery team is located in Sydney and Chengdu. One of the authors, Jeff Xiong, is the tech lead of the Chengdu team, which takes a major part of the delivery effort.

## The Pain

The application needs to integrate with two external systems, and thus some of our test cases (written in JUnit) have to integrate with them, which makes the Maven build[1] process unstable.

The reliability of external services is not guaranteed. One of our dependent services is still under development and is often shut down, which causes our integration tests (and the whole build process) to fail. Our delivery team strictly follows the practice of continuous delivery and will not check in any code while the build is failing. In such cases, the instability of the dependent service becomes a blocker for the delivery team.

Worse, the dependent services deployed in the development environment are not as well tuned as production, which can cause serious performance issues. Such practical drawbacks make our build process very slow and sometimes cause random failures.

As the unreliability and low performance of the external services make the build process both fragile and very slow, it becomes painful for the delivery team to build frequently, and the efficiency of the continuous integration process suffers. As the tech lead of the team, I wanted to solve this problem so that the build could run quickly and reliably.

## How to test integration points

For applications built on top of Spring Framework, when they need to integrate with external systems, they usually do it through a Java interface. For example, a service that creates customers for a certain brand may look like this:

```java
public interface IdentityService {
    Customer create(Brand brand, Customer customer);
}
```

Spring instantiates a class that implements IdentityService and keeps that instance in the application context, then client code that requires the service can get a hold of that instance through dependency injection and call its “create” method. We can inject a mocked IdentityService instance into the client code when we write tests for it; this way we decouple the test code from the external service. This is a benefit of having dependency injection.
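To make this concrete, here is a minimal sketch of that decoupling (not the article's actual code): `RegistrationService`, `Brand` and `Customer` are simplified, hypothetical stand-ins, and the stub is hand-rolled where a mocking library such as Mockito would serve the same purpose.

```java
// A hand-rolled stub lets us test client code without any network call.
class Brand {
    final String name;
    Brand(String name) { this.name = name; }
}

class Customer {
    final String email;
    Customer(String email) { this.email = email; }
}

interface IdentityService {
    Customer create(Brand brand, Customer customer);
}

// Hypothetical client code; the service arrives via constructor
// injection, which is what lets the test swap in a stub.
class RegistrationService {
    private final IdentityService identityService;

    RegistrationService(IdentityService identityService) {
        this.identityService = identityService;
    }

    Customer register(String email) {
        return identityService.create(new Brand("default"), new Customer(email));
    }
}

public class RegistrationServiceTest {
    public static void main(String[] args) {
        // Stub implementation: echoes the customer back, no network involved.
        IdentityService stub = (brand, customer) -> customer;
        RegistrationService registration = new RegistrationService(stub);

        Customer created = registration.register("jeff@example.com");
        if (!"jeff@example.com".equals(created.email)) {
            throw new AssertionError("stub should echo the customer back");
        }
        System.out.println("registered: " + created.email);
    }
}
```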

Since we don't have to worry about testing the client code, our focus goes to the testing of the integration points.

When integrating with an HTTP-based service in an object-oriented language, the integration point is usually designed to consist of five major components: Façade, Request Builder, Request Router, Network End Point and Response Parser. The following diagram shows how they interact:

As you can see in this diagram, the Network End Point is the only component that reaches out to the outside world via HTTP requests. It sends a certain request to a certain web address in a pre-defined protocol, and returns the response. The Network End Point is usually defined like this for HTTP-based services:

```java
public interface EndPoint {
    Response get(String url);
    Response post(String url, String requestBody);
    Response put(String url, String requestBody);
}
```


The Response class contains two pieces of information: the HTTP status code and the response body.

```java
public class Response {
    private final int statusCode;
    private final String responseBody;
}
```


You may have noticed that the class EndPoint is in charge of sending a given request to a given address and returning the response from the external service. It doesn't care about what the address is (that's the Request Router's job), and it doesn't care about the contents of the request and response either (Request Builder and Response Parser respectively take care of them). This makes the EndPoint's tests totally independent of the actual external services.
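As a sketch of this division of labor, the façade might wire the five components together as below. All class names other than EndPoint and Response are illustrative, and the XML handling is deliberately naive; this is an assumption-laden outline, not the project's real code.

```java
class Response {
    final int statusCode;
    final String responseBody;
    Response(int statusCode, String responseBody) {
        this.statusCode = statusCode;
        this.responseBody = responseBody;
    }
}

interface EndPoint {
    Response post(String url, String requestBody);
}

// Request Router: knows which URL serves which operation.
class RequestRouter {
    String urlForCreateCustomer() { return "http://identity.example.com/customers"; }
}

// Request Builder: knows how to serialize the request payload.
class RequestBuilder {
    String buildCreateCustomerRequest(String email) {
        return "<customer><email>" + email + "</email></customer>";
    }
}

// Response Parser: knows how to read the payload coming back.
class ResponseParser {
    String parseEmail(Response response) {
        String body = response.responseBody;
        int start = body.indexOf("<email>") + "<email>".length();
        return body.substring(start, body.indexOf("</email>"));
    }
}

// Façade: the only component client code sees; it orchestrates the others,
// while the EndPoint alone touches the network.
class IdentityFacade {
    private final RequestRouter router = new RequestRouter();
    private final RequestBuilder builder = new RequestBuilder();
    private final ResponseParser parser = new ResponseParser();
    private final EndPoint endPoint;

    IdentityFacade(EndPoint endPoint) { this.endPoint = endPoint; }

    String createCustomer(String email) {
        String requestBody = builder.buildCreateCustomerRequest(email);
        Response response = endPoint.post(router.urlForCreateCustomer(), requestBody);
        return parser.parseEmail(response);
    }
}
```

Because the EndPoint is the only seam that touches the network, replacing it with a stub turns the whole façade into plain, testable object collaboration.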

## Testing for Network End Point

What the EndPoint class really cares about is whether it sends requests and retrieves responses in the correct way; "the correct way" may include authentication and authorization, necessary HTTP headers, and so on. To test this class, we don't need to send requests to the real remote server or obey the real request/response protocol. Instead, we can create our own HTTP server and test against it with a very simple request and response.

Moco is a testing tool designed for exactly this scenario. According to its author, Moco is "an easy setup stub framework, mainly focusing on testing and integration". Creating an HTTP server takes only two lines of code; the server below listens on port 12306 and responds with the string "foo" to any request:

```java
MocoHttpServer server = httpserver(12306);
server.response("foo");
```

Then we can access this HTTP server just like a real one, here using the Apache HttpClient fluent API. One thing needs your attention: the code that interacts with the server has to be put inside the "running" block so that the server is shut down properly afterwards:

```java
running(server, new Runnable() {
    @Override
    public void run() throws IOException {
        Content content = Request.Get("http://localhost:12306")
                .execute().returnContent();
        assertThat(content.asString(), is("foo"));
    }
});
```

Of course, as a testing tool, Moco supports many flexible configuration options as well: please read its online manual if you are interested. For now, let's take a look at how to use Moco to test the Network End Point component in an integration point. As an example, we are going to integrate with OpenPTK which provides a bridge between Identity Solutions and specialized user interfaces or access points. OpenPTK uses a customized XML-based communication protocol, and it requires clients to send requests to address /openptk-server/login with the application name and password before every request to ensure the application is authorized. Therefore, we prepare a Moco server for testing as follows:

```java
server = httpserver(12306);
server.post(and(
        by(uri("/openptk-server/login")),
        by("clientid=test_app&clientcred=fake_password")))
      .response(status(200));
```

Then, we configure the network endpoint to access our Moco server located at localhost:12306 with the username and password:

```java
configuration = new IdentityServiceConfiguration();
configuration.setHost("http://localhost:12306");
configuration.setClientId("test_app");
configuration.setClientCredential("fake_password");
xmlEndPoint = new XmlEndPoint(configuration);
```

Finally, our test fixture is ready. It's time to test that XmlEndPoint is able to access a specified URL with an HTTP GET request and retrieve the response:

```java
@Test
public void shouldBeAbleToCarryGetRequest() throws Exception {
    final String expectedResponse = "<message>SUCCESS</message>";
    server.get(by(uri("/get_path"))).response(expectedResponse);
    running(server, new Runnable() {
        @Override
        public void run() {
            XmlEndPointResponse response =
                    xmlEndPoint.get("http://localhost:12306/get_path");
            assertThat(response.getStatusCode(), equalTo(STATUS_SUCCESS));
            assertThat(response.getResponseBody(), equalTo(expectedResponse));
        }
    });
}
```

We need another test case to describe the scenario "login failure", so that our tests will cover all cases for the get method of class XmlEndPoint:

```java
@Test(expected = IdentityServiceSystemException.class)
public void shouldRaiseExceptionIfLoginFails() throws Exception {
    configuration.setClientCredential("wrong_password");
    running(server, new Runnable() {
        @Override
        public void run() {
            xmlEndPoint.get("http://localhost:12306/get_path");
        }
    });
}
```

Following this approach, it's straightforward to create test cases for the POST and PUT methods as well. With Moco, we can complete all the tests for the network endpoints. Although these tests involve real HTTP requests, they only interact with a localhost server created by Moco and only perform basic HTTP GET/POST/PUT requests, so they are fast and reliable.

## Testing for other components

Since we have tested the end points, the tests of the other components don't have to send any HTTP requests. Ideally, every component should be unit tested in isolation; but personally, I am not obsessed with isolation when the object to be tested has no external dependencies. I don't mind testing several objects in unison as long as all the cases are covered.

We'll test the Façade component (IdentityService) as a whole. We'll create a mocked instance of XmlEndPoint when we instantiate IdentityServiceImpl, which makes sure the code that sends HTTP requests is isolated from the following test[2]:

```java
xmlEndPoint = mock(XmlEndPoint.class);
identityService = new IdentityServiceImpl(xmlEndPoint);
```

Then we'll need the mocked instance of XmlEndPoint to behave according to the different conditions, so we can test the behaviors of IdentityService accordingly. Taking "find user" for instance, XmlEndPoint is documented to do the following:

1. When user is found: the HTTP status code will be 200, and the response body will contain user's information in XML;

2. When user is not found: the HTTP status code will be 204, and the response body will be empty.

For the first case (“user is found”), we expect the get method of XmlEndPoint to return a response whose status is 200 and whose body contains user information in XML:

```java
when(xmlEndPoint.get(anyString())).thenReturn(
        new XmlEndPointResponse(STATUS_SUCCESS, userFoundResponse));
```

When the mocked instance of XmlEndPoint is set up like this, the "find user" operation will be able to find a user and create a correct customer instance:

```java
Customer customer = identityService.findByEmail("gigix1980@gmail.com");
assertThat(customer.getFirstName(), equalTo("Jeff"));
assertThat(customer.getLastName(), equalTo("Xiong"));
```

userFoundResponse is a String that contains user information in XML format. When this String is returned by XmlEndPoint, IdentityService converts it into an instance of Customer. Now we have verified that IdentityService (and the objects it depends on internally) behaves correctly.

The second case (“user is not found”) is tested similarly:

```java
@Test
public void shouldReturnNullWhenUserDoesNotExist() throws Exception {
    when(xmlEndPoint.get(anyString())).thenReturn(
            new XmlEndPointResponse(STATUS_NO_CONTENT, null));
    Customer nonExistCustomer = identityService.findByEmail("not.exist@gmail.com");
    assertThat(nonExistCustomer, nullValue());
}
```

The other methods of IdentityService can be tested similarly.
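To make the status-code mapping concrete, here is a simplified, illustrative sketch of what IdentityServiceImpl.findByEmail might look like under these two tests. The real class parses XML properly; this sketch replaces the parsing with naive string extraction, and the URL is hypothetical.

```java
class XmlEndPointResponse {
    final int statusCode;
    final String responseBody;
    XmlEndPointResponse(int statusCode, String responseBody) {
        this.statusCode = statusCode;
        this.responseBody = responseBody;
    }
}

interface XmlEndPoint {
    XmlEndPointResponse get(String url);
}

class Customer {
    final String firstName;
    final String lastName;
    Customer(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }
}

class IdentityServiceImpl {
    static final int STATUS_SUCCESS = 200;
    static final int STATUS_NO_CONTENT = 204;
    private final XmlEndPoint xmlEndPoint;

    IdentityServiceImpl(XmlEndPoint xmlEndPoint) {
        this.xmlEndPoint = xmlEndPoint;
    }

    Customer findByEmail(String email) {
        XmlEndPointResponse response =
                xmlEndPoint.get("http://identity.example.com/users?email=" + email);
        if (response.statusCode == STATUS_NO_CONTENT) {
            return null; // user not found: 204 with an empty body
        }
        // user found: 200 with the user's information in XML
        return new Customer(extract(response.responseBody, "firstName"),
                            extract(response.responseBody, "lastName"));
    }

    // Naive stand-in for real XML parsing, kept short for illustration.
    private static String extract(String xml, String tag) {
        int start = xml.indexOf("<" + tag + ">") + tag.length() + 2;
        return xml.substring(start, xml.indexOf("</" + tag + ">"));
    }
}
```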

## Integration tests

After finishing the two levels of tests described above, we've covered all cases for the five components of the integration point. But don't let your guard down: 100% coverage doesn't mean we've covered every place where errors can occur. For instance, two important things are still not verified:

1. The availability of the real remote services at their actual URLs;
2. Whether the behavior of those services matches their documentation.

These two items must be tested against the real environment. Additionally, for these test cases it's more important to describe the functionality than to verify its correctness. The reason is twofold: firstly, the remote services rarely change; secondly, whenever a remote service has an error (such as being unavailable), there is nothing we can do on our side to fix it. Therefore, the value of the integration tests that touch the real services is that they provide accurate and executable documentation.

In order to provide such documentation, we should avoid using our own application's integration points (such as the IdentityService mentioned above), since we want the automated tests to tell us where errors come from: the remote services or our own application. I prefer to use standard, low-level libraries to access those remote services:

```java
System.out.println("=== 2. Find that user out ===");
GetMethod getToSearchUser = new GetMethod(
        configuration.getUrlForSearchUser("gigix1980@gmail.com"));
getToSearchUser.setRequestHeader("Accept", "application/xml");
httpClient.executeMethod(getToSearchUser);
assertThat(getToSearchUser.getStatusCode(), equalTo(200));
System.out.println(getResponseBody(getToSearchUser));
```

In this test case, we use the Apache Commons HttpClient to make the network request. We don't need to verify the response; we just confirm that the service is still available and print the response body (in XML) for reference. As discussed above, we expect integration tests to describe the behavior of the external services rather than verify their correctness, and these test cases are enough to act as an “executable document”.

## Continuous Integration

We have seen several different kinds of tests now. Only the integration tests communicate with the external services, which makes them the most time-consuming. Luckily, we don't have to run integration tests as frequently as the other tests: they only describe the behavior of the external services, while the functionality of our own code is covered by the network endpoint tests (using Moco) and by the unit tests of the other components.

Maven can help us deal with this situation. The Maven build lifecycle includes, among others, the phases test and integration-test, and there is a Maven plugin called "Failsafe" for the latter:

> The Failsafe Plugin is designed to run integration tests while the Surefire Plugin is designed to run unit tests. The name (failsafe) was chosen both because it is a synonym of surefire and because it implies that when it fails, it does so in a safe way.

Maven recommends running unit tests with Surefire and integration tests with Failsafe. So, we'll put all the integration tests in a package called "integration" and modify Surefire's configuration in the pom.xml to exclude this package:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>${maven-surefire-plugin.version}</version>
  <executions>
    <execution>
      <id>default-test</id>
      <phase>test</phase>
      <goals>
        <goal>test</goal>
      </goals>
      <configuration>
        <excludes>
          <exclude>**/integration/**/*Test.java</exclude>
        </excludes>
      </configuration>
    </execution>
  </executions>
</plugin>
```

And then we add the following configuration to run integration tests with Failsafe:

```xml
<plugin>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.12</version>
  <configuration>
    <includes>
      <include>**/integration/**/*Test.java</include>
    </includes>
  </configuration>
  <executions>
    <execution>
      <id>failsafe-integration-tests</id>
      <phase>integration-test</phase>
      <goals>
        <goal>integration-test</goal>
      </goals>
    </execution>
    <execution>
      <id>failsafe-verify</id>
      <phase>verify</phase>
      <goals>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Now the command "mvn test" won't execute the integration tests, while the command "mvn integration-test" will execute both unit tests and integration tests. We can create two jobs on our continuous integration server (e.g. Jenkins): one is triggered by every commit and executes everything except the integration tests; the other runs once per day and executes the whole build. This strikes a balance between speed and quality: every commit triggers a fast build that covers all of our own functionality and is unaffected even when the external services are unavailable, while the daily build exercises all the external services and keeps us informed about their latest status.

## Refactoring the existing system

A system designed around the integration-point pattern described above is easy to keep testable. An existing system that was not designed this way, however, has no notion of a network endpoint and couples remote-communication logic with everything else. It is hard to write focused tests for such code, because those tests would initiate lots of real network requests, making the build slow and unreliable.

The example below shows a very common code structure that couples several responsibilities: preparing the request body, initiating the request, and handling the response.

```java
PostMethod postMethod = getPostMethod(
        velocityContext, templateName, soapAction);
new HttpClient().executeMethod(postMethod);
String responseBodyAsString = postMethod.getResponseBodyAsString();
if (responseBodyAsString.contains("faultstring")) {
    throw new WmbException();
}
Document document;
try {
    LOGGER.info("response:\n" + responseBodyAsString);
    document = DocumentHelper.parseText(responseBodyAsString);
} catch (Exception e) {
    throw new WmbParseException(e.getMessage()
            + "\nresponse:\n" + responseBodyAsString);
}
return document;
```

This code can appear in every method that integrates with the remote services, and it is duplication, a bad code smell. But the duplication in those methods is not obvious, because part of the logic in each method (preparing the request body, handling the response, etc.) differs. For instance, the sample above uses Velocity to generate the request body and dom4j to parse the response. Even automated code inspection tools (such as Sonar) can't find this kind of duplication.

After applying some refactorings, like Extract Method, Add Parameter and Remove Parameter, we can restructure the code as follows:

```java
// 1. prepare request body
String requestBody = renderTemplate(velocityContext, templateName);

// 2. execute a post method and get back the response body
PostMethod postMethod = getPostMethod(soapAction, requestBody);
new HttpClient().executeMethod(postMethod);
String responseBody = postMethod.getResponseBodyAsString();
if (responseBody.contains("faultstring")) {
    throw new WmbException();
}

// 3. deal with response body
Document document = parseResponse(responseBody);
return document;
```

Now the duplication in the second block is more obvious. The book “Refactoring” describes this scenario[3]:

> If you have duplicated code in two unrelated classes, consider using Extract Class in one class and then use the new component in the other. Another possibility is that the method really belongs only in one of the classes and should be invoked by the other class or that the method belongs in a third class that should be referred to by both of the original classes. You have to decide where the method makes sense and ensure it is there and nowhere else.

This is the situation we are facing, and this is when the concept of the “network endpoint” should be introduced. By using Extract Method and Extract Class, we will create a new class SOAPEndPoint:

```java
public class SOAPEndPoint {
    public String post(String soapAction, String requestBody) {
        PostMethod postMethod = getPostMethod(soapAction, requestBody);
        new HttpClient().executeMethod(postMethod);
        String responseBody = postMethod.getResponseBodyAsString();
        if (responseBody.contains("faultstring")) {
            throw new WmbException();
        }
        return responseBody;
    }
}
```

The original code will use the new class SOAPEndPoint:

```java
// 1. prepare request body
String requestBody = renderTemplate(velocityContext, templateName);

// 2. execute a post method and get back the response body
// (soapEndPoint is injected by the Spring Framework)
String responseBody = soapEndPoint.post(soapAction, requestBody);

// 3. deal with response body
Document document = parseResponse(responseBody);
return document;
```

Following the test strategy above, we should add tests for SOAPEndPoint with Moco. Frankly, the logic of SOAPEndPoint is very simple: send a POST request with the specified content to the specified URL; throw an exception if the response body contains the string "faultstring"; otherwise return the body directly. Although the class is named SOAPEndPoint, the post method doesn't care whether the request and response follow the SOAP protocol, so the string returned from Moco doesn't need to conform to SOAP either, as long as the tests cover both the case where the response body contains "faultstring" and the case where it doesn't.
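The tests for SOAPEndPoint follow the same shape as the Moco tests shown earlier. As a self-contained illustration of the idea (not the project's actual code), the sketch below stubs the server with the JDK's built-in com.sun.net.httpserver instead of Moco; the SoapEndPointTest class, the simplified post method and all URLs are hypothetical.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class SoapEndPointTest {
    // Simplified stand-in for the article's SOAPEndPoint: POST the body,
    // fail on "faultstring", otherwise return the response body.
    static String post(String url, String requestBody) throws Exception {
        HttpURLConnection connection =
                (HttpURLConnection) new URL(url).openConnection();
        connection.setRequestMethod("POST");
        connection.setDoOutput(true);
        try (OutputStream out = connection.getOutputStream()) {
            out.write(requestBody.getBytes("UTF-8"));
        }
        String responseBody =
                new String(connection.getInputStream().readAllBytes(), "UTF-8");
        if (responseBody.contains("faultstring")) {
            throw new RuntimeException("remote service returned a SOAP fault");
        }
        return responseBody;
    }

    public static void main(String[] args) throws Exception {
        // Local stub server playing the role Moco plays in the article.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            byte[] reply = "<ok/>".getBytes("UTF-8");
            exchange.sendResponseHeaders(200, reply.length);
            exchange.getResponseBody().write(reply);
            exchange.close();
        });
        server.start();
        try {
            String url = "http://localhost:" + server.getAddress().getPort() + "/";
            String body = post(url, "<request/>");
            if (!body.equals("<ok/>")) {
                throw new AssertionError("unexpected response body: " + body);
            }
            System.out.println("endpoint returned: " + body);
        } finally {
            server.stop(0);
        }
    }
}
```

The structure is identical to the Moco version: a local stub server supplies a canned response, and the test only checks that the endpoint carries the request correctly and applies the "faultstring" rule.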

Given this, you may wonder why the class is called SOAPEndPoint. The answer is that in the method getPostMethod (not shown here), we need to fill in some HTTP headers, and these headers are mostly specific to the Web Services provided by the remote systems being integrated. These headers are the same for all service methods on the client side, so they could be extracted into a common method: getPostMethod.

Next, we can write some descriptive integration tests for the remote services, and mock SOAPEndPoint in the remaining tests so that they never initiate real network requests.

Now we have completed the refactoring of all the integration points and created a group of tests that follow our test strategy. As an exercise, the reader could continue refactoring to split out the Request Builder and Response Parser.

## Wrapping up

The build of a Java EE web application that heavily relies on external services usually gets slow and unreliable because of dependencies on those external services. We have identified a pattern for implementing integration points. Using this pattern and corresponding testing strategy, with the help of Moco, we managed to isolate our tests from the external services and we made our build faster and more reliable.

We've also looked at some existing implementations of integration points and refactored them into the pattern, so the testing strategy can be applied to existing code as well. This shows that the strategy works universally: even a legacy system can decouple its implementation and isolate its tests by adopting these refactoring techniques.

## More about Moco

At ThoughtWorks Chengdu office, we are developing online applications for a financial company. Because all the data and core business rules of the company are stored in COBOL backend systems, the online applications inevitably have large amounts of integration work to do. Most of the teams in the office complain that test cases are becoming slow and unreliable because of the integration with dependent remote servers. In order to ease the pain, my colleague Zheng Ye created the Moco framework to simplify the integration tests.

Apart from the API mode we've already seen in the above test cases, Moco also supports a standalone mode, which is aimed at creating a test server rapidly. For instance, the following configuration (located in a file "foo.json") describes a basic HTTP server:

```json
[
  {
    "response" : {
      "text" : "Hello, Moco"
    }
  }
]
```

Start the server:

```shell
java -jar moco-runner-<version>-standalone.jar start -p 12306 -c foo.json
```

If you access any URL under "http://localhost:12306", "Hello, Moco" will be displayed on the screen. Because Moco has all kinds of flexible configuration options, we are able to simulate any remote server that we integrate with, and use it for local development and functional testing.

Thanks to the power of the open source community, Moco got a Maven plugin that came from Garrett Heel, a developer from Australia. With his help, we are able to embed Moco into our Maven build process very easily, and start or stop the Moco server according to our needs (for instance, start Moco server before Cucumber functional tests and stop it after it's done).

Currently, Moco is being utilized by several projects at ThoughtWorks Chengdu office. Development is still ongoing based around new requirements raised by these projects. If you are interested in Moco, feel free to provide improvement suggestions or contribute to it.

## About the Authors

Jeff Xiong is an Office Principal of ThoughtWorks Chengdu. He is also an enterprise application developer with over 10 years experience.

David Yin is a Java developer with more than 6 years experience. He is passionate about new technology and methodology. He is now working in ThoughtWorks as a consultant.

Pengfei Cui is a developer of ThoughtWorks, a programmer who enjoys deleting code. He illuminates three square meters of space around him with nothing but his head.

[1] In this context, “build” means a process that generates a deliverable software application from source code using automation tools. A normal build process for Java EE applications includes phases such as compilation, static code quality checking, unit testing, integration testing, packaging and functional testing.
[2] We use Mockito to mock dependent objects in unit tests.
[3] Martin Fowler et al., Refactoring, chapter 3.1.

