Eight Quick Ways to Improve Java Legacy Systems
You read the title right: even Java systems can be "legacy" systems. When many of us think about legacy systems, we think of creaking mainframes storing data in flat files and crunching it with COBOL. But the truth is that Java is a 15-year-old language and many of the thousands of systems written in it have been running successfully for ten years or more.
So, given that many readers might work on legacy Java applications, here are eight tips I've gathered from my experience helping teams modernize and re-invigorate their legacy Java applications.
Tip #1: Use a Profiler
Profilers offer the kind of insight into an application that can't be had any other way. If your application has not been profiled in more than a year, then it almost certainly has inefficient chunks of code lurking in dark corners. There are many free and commercial profilers on the market. My favorite tool for CPU profiling is JProfiler because it is powerful enough to diagnose most common problems yet simple to set up, especially if you use its built-in setup wizard. For diagnosing memory problems my favorite tool is the Eclipse Memory Analyzer because it uses on-disk indexes instead of sucking the entire heap snapshot into memory.
Hidden CPU hogs commonly include inefficient hashCode() or equals() methods (which can get called millions of times in scrolling JTables and Java collections classes) and some surprisingly inefficient Sun classes like SimpleDateFormat.
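To make the SimpleDateFormat problem concrete, here is a minimal, hedged sketch (the class and method names are invented for illustration): SimpleDateFormat is expensive to construct and not thread-safe, so a common legacy-era fix is to cache one instance per thread rather than building a new one on every call.

```java
import java.text.SimpleDateFormat;
import java.util.Date;

class DateFormatting {
    // SimpleDateFormat is costly to build and not thread-safe, so cache
    // one instance per thread instead of constructing one per call.
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
        new ThreadLocal<SimpleDateFormat>() {
            @Override protected SimpleDateFormat initialValue() {
                return new SimpleDateFormat("yyyy-MM-dd");
            }
        };

    // Slow in a hot loop: builds (and internally compiles) a new formatter
    // on every call.
    static String formatSlow(Date d) {
        return new SimpleDateFormat("yyyy-MM-dd").format(d);
    }

    // Faster: reuses the cached, per-thread instance.
    static String formatFast(Date d) {
        return FORMAT.get().format(d);
    }
}
```

A profiler makes the difference obvious: the slow version shows up as time spent in the SimpleDateFormat constructor rather than in format() itself.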
Profilers can introduce a significant drag on your application, so you should make sure to do profiling in a test environment.
Tip #2: Monitor Database Usage
Profilers show details about where your application uses too many CPU cycles. They will also hint at places where your application makes long-running database calls. But a better option for monitoring database usage is a dedicated tool like Proactive DBA, HP Diagnostics, or one of the tools from your database platform's vendor. These tools will clue you in to code that makes long-running SQL calls as well as code that makes too many short calls in a row. Using tools like these, it's not uncommon to find a 90-second database query that can be tuned down to, say, 0.20 seconds. Tools from database vendors can also help find queries that block each other waiting for locks, although in my experience blocking problems are less common than simple, inefficient use of SQL.
I have also written a new tool called jdbcGrabber that will allow you to visualize what tables are being accessed from which pieces of code. With this kind of visual representation, you may easily find code that is making multiple round trips to the database for different bits of information where it could instead make one, consolidated trip.
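To make the consolidation concrete, here is a minimal, hypothetical sketch (the orders table and its columns are invented for illustration). Instead of issuing one SELECT per customer, the builder below produces a single IN (...) query whose results come back in one round trip.

```java
// Hypothetical example: replacing an N+1 query pattern with one batched query.
// Chatty version (one round trip per id):
//   SELECT COUNT(*) FROM orders WHERE customer_id = ?   -- executed N times
// Consolidated version (one round trip total), built below:
//   SELECT customer_id, COUNT(*) FROM orders
//   WHERE customer_id IN (?,?,...,?) GROUP BY customer_id
class BatchedQueries {
    static String orderCountQuery(int idCount) {
        StringBuilder sb = new StringBuilder(
            "SELECT customer_id, COUNT(*) FROM orders WHERE customer_id IN (");
        for (int i = 0; i < idCount; i++) {
            if (i > 0) sb.append(',');
            sb.append('?');   // one bind placeholder per id
        }
        return sb.append(") GROUP BY customer_id").toString();
    }
}
```

The placeholders would then be bound with PreparedStatement.setInt before a single executeQuery call, replacing N network round trips with one.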
Tip #3: Automate Your Build and Deployment
Many legacy systems lack a fully-automated way to build their binaries, much less a fully-automated way to deploy them. Automating the build and deployment is a straightforward and low-risk way to improve developer productivity on a legacy application and usually requires zero code changes.
Without an automated build and deployment, new developers are forced to reinvent the wheel, struggle with the same issues their predecessors struggled with, and invent different solutions to recurring deployment problems each time they happen.
While Maven is an excellent and widely-used build tool, it is also opinionated about the way your source tree and library dependencies are structured and so might be difficult to use on a legacy application. But good old sturdy Ant might be easier to swallow because it is more flexible when dealing with legacy code structures and easier to adopt piecemeal instead of whole-hog.
Tip #4: Automate Your Operations and Use JMX
One other way to improve productivity in a legacy application without making risky code changes is to improve its operations. Many internally-developed enterprise systems require a surprising amount of hand-holding and maintenance that they shouldn't really need.
Existing Java functionality can be exposed to operations people with little effort by using JMX. Many developers are familiar with JMX because they've used it to interface with application containers like JBoss and WebLogic, but far fewer realize how easy it is to JMX-enable their own applications. Almost any Java class can be exposed over JMX with very little overhead and not much risk.
For example, if your application has a home-grown cache that is basically a static HashMap, you can expose functionality to clear that cache easily via JMX.
Once operations on an application are exposed via JMX, operations teams or developers can manipulate the application in well-specified ways, without needing direct access to the machine the application is running on.
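The cache-clearing example might look like the minimal sketch below (the class names and ObjectName are invented for illustration). The standard MBean convention requires that the management interface for a class named CacheControl be called CacheControlMBean; once registered, its attribute and operation appear automatically in JConsole or any other JMX client.

```java
import java.lang.management.ManagementFactory;
import java.util.concurrent.ConcurrentHashMap;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard MBean convention: the management interface for class CacheControl
// must be named CacheControlMBean.
interface CacheControlMBean {
    int getSize();   // appears as a read-only "Size" attribute in JConsole
    void clear();    // appears as an invokable operation
}

class CacheControl implements CacheControlMBean {
    private final ConcurrentHashMap<String, Object> cache =
        new ConcurrentHashMap<String, Object>();

    void put(String key, Object value) { cache.put(key, value); }
    public int getSize() { return cache.size(); }
    public void clear() { cache.clear(); }

    // Register with the platform MBean server so operations staff can
    // clear the cache without touching the machine directly.
    void register() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(this, new ObjectName("com.example:type=CacheControl"));
    }
}
```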
Tip #5: Wrap in a Warm Blanket of Unit Tests
One of the biggest barriers to modifying legacy systems is knowing if your changes will break anything. Some tools claim to reverse engineer code and automatically create unit tests for it, but I don't have much confidence in these kinds of tools. To be truly confident that your unit tests cover what you think they cover, you will have to create them yourself.
Luckily, creating unit tests for legacy code isn't as difficult as it feels at first. I use the "legacy code change algorithm" Michael Feathers describes in Working Effectively with Legacy Code:
- Identify change points
- Find test points
- Break dependencies
- Write tests
- Make changes and refactor
The trick to effectively following this algorithm is #3: Break dependencies. There are many techniques for doing this, but most of them are some flavor of removing static references and hiding external resources and complicated code behind interfaces and facades. Once you've developed a feel for this kind of dependency breaking, touching old code becomes less and less scary.
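As a minimal illustration of that kind of dependency breaking (all class names here are invented), suppose a legacy class calls a static mail gateway directly, which makes it untestable without a real mail server. Hiding the external resource behind an interface lets a test substitute a hand-rolled fake.

```java
// Hypothetical "before": invoice logic hard-wired to a static gateway.
//   class InvoiceService {
//       void sendInvoice(String to) { MailGateway.send(to, "invoice"); }
//   }
// "After": the external resource is hidden behind an interface.
interface Mailer {
    void send(String to, String body);
}

class InvoiceService {
    private final Mailer mailer;
    InvoiceService(Mailer mailer) { this.mailer = mailer; }

    void sendInvoice(String to) {
        mailer.send(to, "invoice");
    }
}

// A hand-rolled fake for tests: records messages instead of sending them,
// so a unit test can assert on what would have been sent.
class FakeMailer implements Mailer {
    final java.util.List<String> sent = new java.util.ArrayList<String>();
    public void send(String to, String body) { sent.add(to + ":" + body); }
}
```

Production code passes in a real Mailer implementation; tests pass in the FakeMailer and inspect its recorded messages.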
Tip #6: Kill Dead Code
While dead code may seem harmless, it's actually a silent killer. The reason is that as long as dead code is in the code base, maintenance programmers can never really be sure if the code is dead or just looks dead. Maintenance programmers with any scars from previous changes will know that even static code analysis can't prove code is really dead. For example, some clever developer 10 years ago might have decided to invoke business logic through Java reflection driven by string values in a database (don't laugh, I've seen it more than once).
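A minimal sketch of that reflective-dispatch pattern follows (all names are invented). Because the class and method names arrive as strings, which in the wild might come from a database column, no static analyzer can see a caller for SeeminglyDeadLogic.run(), and the code would be flagged as dead even though it is not.

```java
import java.lang.reflect.Method;

// Business logic invoked by name: invisible to static dead-code analysis.
class ReflectiveDispatcher {
    static Object dispatch(String className, String methodName) {
        try {
            Class<?> clazz = Class.forName(className);
            Method m = clazz.getMethod(methodName);
            return m.invoke(clazz.newInstance());
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

// To a static analyzer, nothing references this class.
class SeeminglyDeadLogic {
    public String run() { return "still alive"; }
}
```

Runtime coverage tools catch what static analysis cannot here, because they record the method actually executing.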
So, killing dead code should be a top priority. While it's commonly thought of as a unit test coverage tool, Emma can also be used to detect dead code. Emma is a library that, when injected into your JVM, will track which code was executed and which was not. Combine that with a thorough test cycle in your staging environment and you have some good pointers to code that is or is not alive.
Tip #7: Adopt a 'compliance to building codes' Approach
Legacy applications will never be cleaned up all at once. In reality, the development team has to use every opportunity it can to bring legacy code into the modern world. But many teams are too discouraged by the current condition of the code to think about how they'd like things to look. "It's too far gone," the developers say.
This apathy is a huge mistake. Legacy applications survive because they are useful and like all useful applications their users will continue to want to modify them. If the team takes the time now to define an (achievable) vision for where they want the application to go, then with every future incremental change they can take one step closer to that target vision.
Without a vision, each member of the team will do whatever he or she thinks is best. One will weave in Spring JdbcTemplates while another will start using iBATIS/MyBatis. While each has a genuine desire to improve the application, they will actually make it worse because they are taking it in different directions and compounding its already complicated structure.
Tip #8: Upgrade Your JRE
Some teams are still surprised when I tell them that Sun Microsystems (now Oracle) sunset its support for JDK 1.5 back in November 2009. It's high time to upgrade that JRE to a more modern one like 1.6. Battle-scarred teams who remember upgrades from 1.1 to 1.2 or from 1.4 to 1.5 might be hesitant to make the jump, but my experience has been that this upgrade is pretty smooth and will give applications a significant performance boost for free. Additionally, JDK 1.6 comes with many useful, free operations and profiling tools to help diagnose those garbage collection problems you've suffered through all these years.
Where to go From Here
Each of these tips was picked to be easy to adopt and relatively low-risk. But there are numerous other, slightly more involved, ways to improve a legacy application and make it like new.
First, the ecosystem of open source libraries available today blows away what was available when most legacy Java systems were written. Many legacy systems have home-grown, fully custom versions of workflow engines, rules engines, templating engines, user interface frameworks, and object-relational mapping layers. Every one of these home-grown components can be replaced with a free, open-source library that is more robust and battle-hardened. Such a one-for-one replacement would likely eliminate vast swaths of difficult-to-maintain code in one stroke.
Secondly, now is a good time to take a hard look at your legacy application's design choices. While changing a design is more involved than just upgrading a JRE, it can also give you a bigger return on investment. Applications with lots of logic in database stored procedures might consider pushing that logic up into the application tier where it can benefit from clustered servers and easier unit testing. A design that ties the presentation tier too closely to business logic can be split apart, which might make that snazzy new iPhone interface easier to implement. Synchronous calls between sub-systems can be converted to asynchronous, message-based ones with an accompanying improvement in resilience and performance.
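As a sketch of that last conversion (all names are invented, and an in-memory queue stands in for real messaging middleware such as JMS), the caller hands work to a queue instead of blocking on the downstream subsystem, while a worker thread drains the queue independently.

```java
import java.util.concurrent.*;

// Minimal sketch: the producer no longer waits on the subsystem it calls.
class AsyncBridge {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();
    private final CopyOnWriteArrayList<String> processed =
        new CopyOnWriteArrayList<String>();
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    AsyncBridge() {
        // A single worker drains the queue; in a real system this would be
        // a message listener on a broker, possibly on another machine.
        worker.submit(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        processed.add(handle(queue.take()));
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
    }

    // The caller returns immediately instead of blocking on handle().
    void submit(String request) { queue.add(request); }

    // Stand-in for the slow downstream subsystem.
    String handle(String request) { return "done:" + request; }

    // Poll until a result appears or the timeout elapses (test helper).
    boolean awaitProcessed(String expected, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (processed.contains(expected)) return true;
            Thread.yield();
        }
        return processed.contains(expected);
    }

    void shutdown() { worker.shutdownNow(); }
}
```

The resilience gain comes from the buffering: if the downstream subsystem is slow or briefly down, requests accumulate in the queue instead of stalling every caller.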
Lastly, to squeeze another two to four healthy years out of a legacy Java application, I recommend hiring an expert with experience dealing with such systems. Like surgeons performing delicate brain surgery, such experts can often find targeted, surgical fixes for problems in legacy systems that bring a lot of benefit with relatively little risk.
And, for those looking for more in-depth reading, the best book I have ever read on working with legacy applications is Michael Feathers' Working Effectively with Legacy Code. Anyone working on a legacy application will be well served by buying the book.
About the Author
Tim Cull is an experienced software developer and architect. As a founder of Thedwick, LLC, a boutique software consultancy specializing in high-leverage software development, Tim has helped many clients extend and enhance their legacy Java systems and preserve their technology investments. Tim's previous publications include his blog (http://www.thedwick.com/blog) and a recent article in IEEE Software (subscription required). For more information please contact email@example.com or visit http://www.thedwick.com
blind use of profilers
Nice summary of how to fix up an application. Since you're most likely not going to be able to run all combinations of profilers, you're going to have to pick one. Most likely you're going to pick an execution profiler. Next question: which type? Inclusive, exclusive, elapsed, CPU? Each will pick up problems that the others will miss, and they all will point to something. The question is: does that something matter? It depends upon your motivation for using the profiler, but if you just blindly use the profiler, you have no motivation and consequently no basis to make an informed decision. Or maybe you get lucky....
Re: blind use of profilers
As you wisely pointed out in your QCon session about performance tuning, you're best off having an idea what you're looking for with a profiler before starting out using one. Without too much trouble, one can look at the production machine and see what CPU utilization looks like (CPU), look at garbage collection logs to see what memory utilization looks like (inclusive), and use OS tools to see what network and disk usage looks like (elapsed). From there, you know what kind of profiling will be most productive.
That said, however, I'm a fan of a time-limited, iterative approach. First, spend 10-15 minutes looking at the server to see if anything jumps out at you. Next, hook up a profiler and just poke around for 30-60 minutes to see what jumps out at you. Some noticeable percentage of the time (let's say 30% for sake of argument) a team that hasn't done this work in a year or more will find something glaring. Then they can fix that, look like heroes, and (most importantly) make an easier case to management for spending more time doing things more methodically for even better results.
The key is to limit this first foray to no more than a couple hours. If it's not paying off, then start taking a more methodical approach.
Legacy systems might suffer from all sorts of problems (security, performance, scalability, usability etc.). But in order to tackle any of these the basic structure of the system needs to be improved.
What good is it to know there are performance problems when I don't have a clue as to how to efficiently get rid of them?
So I'd expect a list with this heading to start with tip #5.
And I'd expect such a list to contain more extensive advice on how to improve the architecture/structure of legacy code.
GUI problems: cannot easily upgrade JDKs