Iterative, Automated and Continuous Performance

Our industry has learned that if we deliver intermediate results and revisit functional requirements, we can avoid delivering the wrong system late. We have learned that if we perform unit and functional tests on a regular basis, we will deliver systems with fewer bugs. And though we are concerned with the performance of our applications, we rarely test for performance until the application is nearly complete. Can the lessons of iterative, automated and continuous development that we've applied to functional testing be applied to performance as well?

Today, we may argue over whether a build that completes with unit testing should run on an hourly, daily, or weekly basis. We may argue over 100% coverage versus 50% coverage. We may argue, discuss, and ponder the specific details of the process. But we all pretty much agree that running automated builds, complete with unit testing, on a regular schedule is a best practice. Yet our arguments regarding performance testing tend to be limited to a single word: don't.

Premature or Just in Time

There are several reasons why performance testing gets put off to the end. Many of them are very similar to the reasons we rarely, if ever, automate the testing of our applications. Setting up an automated build takes time, effort and commitment. Justifying to the business that this commitment is in its best interest is simply difficult. After all, we are programmers and we are expected to crank out features, not spend our time testing. Testing is for the testers. Writing unit tests takes time, time which is better spent developing features, and so on.

However, we have been able to sneak this into our development process, as organizing test code into unit tests only formalized what we were already doing. Thus the incremental investment needed to support this formalization wasn't all that large, and once businesses started to see the benefits, things only got better. As much as one might believe that extending this to performance testing would be a natural progression, it simply hasn't happened. The investment needed to support performance testing is viewed as being much larger and the potential benefits are seen as being much smaller. After all, we can't performance test a system that is still under development because there is nothing to test; and isn't performance just a matter of more or better hardware anyway?

There are a couple of reasons why the investment is viewed as being larger for performance testing. Unlike unit testing, performance testing isn't something that developers are already doing. This implies a new activity rather than the formalization of something that is already being done. Yet the unit testing that we do today is much more than the informal testing done prior to unit testing becoming a formal discipline. In this regard, there is a difference between the perceived and the actual investment needed to introduce formal performance testing into the development cycle.

There are other arguments against early performance testing: it is premature, there is nothing to test, very little can be gained by it, it is micro-performance tuning, it is too granular to be useful since we can only performance test complete systems, setting up a performance test is too complex and takes too much time, the process is fickle, and so on. These reasons are not without substance. If you talk to a manager from almost any performance testing group, you'll hear that the biggest consumer of time is just getting the application operational in a test environment. This task can be so arduous that it actually limits the number of applications they can test. Some have whispered to me that they can performance test fewer than 50% of the applications they deploy.

There is no question that one should almost always avoid premature optimizations. This is especially true if the optimization is complex and time consuming to implement and the corresponding returns are unknown. For example, if we are sorting a list, quite often a simple bubble sort is all we really need. We only need a more complex sort if the sort time is critical and the quantity of data warrants it. If we don't have a good handle on either of these requirements, implementing a more complex sorting strategy would be a premature optimization.
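
To make the example concrete, a minimal "good enough" sort might look like the sketch below. The class and method names are hypothetical; the point is only that reaching for anything more elaborate before the data volume or timing requirements demand it is exactly the premature optimization we want to avoid.

public class SimpleSorter {

    // Sorts the array in place. O(n^2), which is perfectly acceptable for
    // the small lists many applications actually deal with.
    public static void bubbleSort(int[] values) {
        for (int i = 0; i < values.length - 1; i++) {
            for (int j = 0; j < values.length - 1 - i; j++) {
                if (values[j] > values[j + 1]) {
                    int temp = values[j];
                    values[j] = values[j + 1];
                    values[j + 1] = temp;
                }
            }
        }
    }
}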

Testing Components

With continuous performance testing we need to focus on the more granular aspects of our systems: components and frameworks. Just as is the case with unit testing, we can only expect to find certain classes of problems when we test these artifacts in isolation. A case in point is contention between components, or misuse of frameworks, resulting in higher-than-expected response times; these are things that will only come out in a full integration test. However, understanding how much CPU, memory, disk and network I/O a component needs can help us predict problems and take preventive action (rather than apply a premature optimization).
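
As a rough sketch of what a component-level measurement can look like, the test below times a hypothetical InventoryService against an arbitrary budget. The service, its lookup method and the budget are assumptions; the point is that a check of this shape can run alongside the ordinary unit tests and record how much time a component consumes, build after build.

import junit.framework.TestCase;

public class InventoryServicePerformanceTest extends TestCase {

    private static final int REQUESTS = 1000;
    private static final long BUDGET_MILLIS = 2000; // assumed budget for 1000 lookups

    public void testLookupStaysWithinBudget() {
        InventoryService service = new InventoryService(); // hypothetical component
        long start = System.currentTimeMillis();
        for (int i = 0; i < REQUESTS; i++) {
            service.lookup("SKU-" + i);
        }
        long elapsed = System.currentTimeMillis() - start;
        assertTrue(REQUESTS + " lookups took " + elapsed + "ms", elapsed <= BUDGET_MILLIS);
    }
}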

On the question of cost there is no doubt: performance testing will add to the cost of development. Unlike functional testing, performance testing is not something that developers regularly do, so there isn't a clear path to formalization as there was with functional testing. However, there are two types of cost to consider: the direct cost of the effort, and the hidden cost of having to fix all of the performance problems as they randomly appear in the final build. The immediate economic reward (in terms of both money and time/schedule) comes from performance testing only at the end of the project's development cycle. But this is a false economic reward. It is said that with less testing you need fewer man-hours to develop your application, yet that calculation does nothing to account for risk. You may have more money in your pocket if you drive with no auto insurance, but if you ever get into an accident you've lost. Given the number of "car wrecks" we witness in this industry, not testing is like driving without insurance.

Mocks for Performance

But there are things we can do to help reduce costs. Developers already create mocks and other artifacts needed for unit testing. While those mocks will most likely not include the behavior needed for a performance test, in most cases they can easily be modified to do so. Take the mock for a credit card authorization service found in Listing 1.

import java.util.Random;

public class MockCreditAuthorizationServiceProvider
        implements CreditAuthorizationServiceProvider {

    private double rejectPercentage;
    private Random random;

    public MockCreditAuthorizationServiceProvider() {
        // set the rejectPercentage from a property
        random = new Random();
    }

    public void authorize(AuthorizationRequest request) {
        // reject the configured percentage of requests at random
        if (random.nextDouble() > rejectPercentage)
            request.authorize();
        else
            request.deny();
    }
}

Listing 1. Mock credit card authorization with denial simulation

The mock is set up for functional testing. It adheres to the functional requirements: it validates a transaction, rejecting some adjustable percentage of requests. This mock is good enough to test the functional requirements for the handling of both accepted and rejected credit card authorizations. However, to test for performance we also need to mock the service level agreement that we have with the authorization service. The mock must not only authorize; it must do so in the time the real service normally takes to perform an authorization. If the system will only consider 5 authorization requests at a time, then this constraint also needs to be encoded into the mock. The service-time requirement has been added to our original mock as seen in Listing 2; a sketch of the concurrency constraint follows the listing.

import java.util.Random;

public class MockCreditAuthorizationServiceProvider
        implements CreditAuthorizationServiceProvider {

    private double rejectPercentage;
    private double meanServiceTime;
    private Random random;
    private Exponential serviceDistribution;

    public MockCreditAuthorizationServiceProvider() {
        // set the rejectPercentage and meanServiceTime from properties
        random = new Random();
        this.serviceDistribution = new Exponential(meanServiceTime);
    }

    public void authorize(AuthorizationRequest request) {
        // simulate the service level agreement: hold the caller for an
        // exponentially distributed service time before answering
        try {
            Thread.sleep(this.serviceDistribution.nextLong());
        } catch (InterruptedException e) {}

        if (random.nextDouble() > rejectPercentage)
            request.authorize();
        else
            request.deny();
    }
}

Listing 2. Mock credit card authorization with denial and service-time simulation
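
The five-requests-at-a-time constraint mentioned above is not shown in Listing 2. One way it might be encoded, sketched here as a wrapper around the mock using java.util.concurrent.Semaphore (this wrapper is an illustration, not part of the original listings), is:

import java.util.concurrent.Semaphore;

public class ThrottledCreditAuthorizationServiceProvider
        implements CreditAuthorizationServiceProvider {

    // only 5 authorizations may be in flight at once; further callers block,
    // just as they would queue behind the real service
    private final Semaphore permits = new Semaphore(5);
    private final CreditAuthorizationServiceProvider delegate;

    public ThrottledCreditAuthorizationServiceProvider(
            CreditAuthorizationServiceProvider delegate) {
        this.delegate = delegate;
    }

    public void authorize(AuthorizationRequest request) {
        try {
            permits.acquire();
            try {
                delegate.authorize(request);
            } finally {
                permits.release();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}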

Yet another far-from-insignificant challenge is simply getting the application running in a suitable testing environment. But this has also been an issue for those doing functional testing, and they've worked out a solution: do it continuously. The obvious solution for those wanting to do performance testing is to piggyback off that effort.

Tooling

In the beginning we had JUnit, a neat little tool that helped us organize our tests, execute them and show us the results. We had ANT, a tool written in anger at the complexities of Make. From these humble beginnings we are witnessing an explosion of tools to support continuous builds and unit testing. Yet there is seemingly little support for continuous performance testing. While it is true that none of the existing tools advertise support for performance testing, it does exist. As the lack of advertising may suggest, though, this support is limited.

The first limitation is in the type of testing supported. Currently ANT, Maven, and CruiseControl all have plug-ins, by virtue of their integration with ANT, to support the automated running of Apache JMeter. Apache JMeter came out of the need to performance test HTTP servers and applications. It supports other types of testing, but this is limited to a few well-defined components, including JMS, WS, and JDBC. However, Apache JMeter is quite extensible, and if we are to test our own components this is exactly what we'd have to do: extend Apache JMeter. That is not an ideal solution in many cases. The only other choice is to hand-roll our own stress testing harness, once again a less than desirable solution. While tool support may be weak, we expect that it will improve over time just as tool support for continuous testing has improved over time.
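
To give a feel for what extending Apache JMeter involves, the sketch below is a Java Request sampler that drives the mock provider from the earlier listings. How an AuthorizationRequest is constructed here is an assumption, and a real sampler would typically read its configuration from JMeter parameters.

import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;

public class CreditAuthorizationSampler extends AbstractJavaSamplerClient {

    public SampleResult runTest(JavaSamplerContext context) {
        SampleResult result = new SampleResult();
        result.setSampleLabel("credit authorization");
        result.sampleStart();   // JMeter records the elapsed time for us
        try {
            CreditAuthorizationServiceProvider provider =
                    new MockCreditAuthorizationServiceProvider();
            provider.authorize(new AuthorizationRequest()); // construction assumed
            result.setSuccessful(true);
        } catch (Exception e) {
            result.setSuccessful(false);
        } finally {
            result.sampleEnd();
        }
        return result;
    }
}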

Case in Point

Should lack of tool support delay a push to continuous performance testing? The answer will depend a bit on how adventurous your organization is willing to get. But before completely dismissing the idea of introducing it, consider this. A few organizations have instituted a continuous performance program, and though the evidence may be anecdotal, the results have been promising. In one case, the end product was composed of the efforts of 6 different development teams. The performance tuning team asked each of the teams to run performance tests during the development process. The component with the most performance difficulties was delivered by the one team that did not comply with the request.

Conclusion

Dave Thomas writes about broken windows. Just as continuous builds and unit testing fix "broken windows", continuous performance testing will also work to fix "broken performance windows".

Kent Beck has described continuous testing of automated builds using the analogy of driving a car. As you move down the road, your eyes tell you what micro-adjustments need to be made in order to stay in the center of your lane. You wouldn't think of driving with your eyes closed, opening them only for a second to see where you are, for fear of missing a curve or drifting out of your lane. When you are first learning it is hard, but it becomes easier over time. What they are saying is that by being iterative, automated, and continuous, you are developing with your eyes wide open.
