
Terracotta's BigMemory Aiming to Eliminate Garbage Collection for Java Caches


Terracotta is the latest vendor to try to address the problem of garbage collection pauses in Java applications. The GC pause problem is particularly pertinent to applications that make heavy use of caching. Many collectors make a generational distinction between old and young objects, handling the younger generation concurrently but falling back to a stop-the-world pause for the older generation. By keeping more long-lived objects in memory, a cache can exacerbate the problems that occur when those long-lived objects eventually have to be collected. Terracotta's solution is BigMemory™ for Enterprise Ehcache, which uses its own memory management system designed specifically for the product.

"Developers today use time-consuming techniques to address large data sets – for example, when using lots of VMs with small heaps," said Ari Zilka, CTO of Terracotta, in a statement:

BigMemory for Enterprise Ehcache makes the black art of GC tuning a thing of the past. Companies can fully utilize the capacity of modern servers to achieve the performance gains of in-memory data while simultaneously consolidating the number of servers in the data centre.

BigMemory can be seen as a competitor to Azul's Zing product, which brings Azul's pauseless garbage collection to Intel and AMD based servers. The two products, however, take very different approaches. Whilst Azul's solution uses software techniques to provide a garbage collection algorithm that runs concurrently with the application, and therefore requires Azul's JVM, BigMemory aims to reduce the load on the garbage collector by managing the data placed in the cache off heap, much as you might in a program written in C. As such, applications that are not currently using a cache will require code changes, but conversely an application already using a cache, such as a Hibernate second-level cache, does not need to change JVMs.
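For an application already running on Ehcache, moving cache data off heap is largely a configuration change. The following is an illustrative sketch only: the `overflowToOffHeap` and `maxMemoryOffHeap` attribute names are taken from the Enterprise Ehcache beta documentation of the time and may differ in later releases.

```xml
<ehcache>
  <!-- Illustrative only: cache name and sizes are hypothetical -->
  <cache name="userCache"
         maxElementsInMemory="10000"
         overflowToOffHeap="true"
         maxMemoryOffHeap="4g"/>
</ehcache>
```

The JVM must also be allowed to allocate that much direct memory, for example with the standard HotSpot flag `-XX:MaxDirectMemorySize=4g`.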

InfoQ spoke to Amit Pandey, Chief Executive Officer at Terracotta, to find out more about the product. Pandey explained that whilst Terracotta's Ehcache supports both single and multi-nodes, around 80-85% of Terracotta's users were using a single node cache. Whilst these customers might not yet feel ready to jump to a fully distributed architecture they do have issues of scale and performance. For these users BigMemory offers an alternative.

"What they are running into is that when they try to expand the size of the data set that they put in memory or on heap, they are running into garbage collection issues and performance issues. So therefore they are restricted to using a fairly small footprint."

Pandey told us that initially Terracotta had needed to solve the GC problem for their own Java-based server, and took the decision early this year to develop their own memory manager, still written in Java, which is able to side-step the garbage collector. Having done so, they decided to integrate it into the standard Ehcache product and release it for sale. According to Pandey, whilst most customers struggle to grow a heap much beyond 4GB or so, BigMemory removes that limit:

For the Java world we're offering the ability to put a lot of their data into Ehcache. We've tested out to well over 100GB and we see a flat line when it comes to response times and SLA and maximum GC pauses, because basically we don’t do GC pauses any more. So if your application is doing a 1 second GC at a 1 GB heap and you put Ehcache in and put things off heap and, still in memory, into Ehcache you can go well over 100GB and GC time remains exactly the same.

The following chart, courtesy of Terracotta, is broadly conceptual but modeled after a real application, and illustrates the point:

[Chart: BigMemory SLA]


We went on to discuss how the memory manager works. Pandey told us:

...We're not doing garbage collection. We're doing memory management very much the way it would be done in other languages. What we're doing is putting things into our data structure which is a flat data structure. So I don't have all the generational issues and so on and so forth, the app is taking care of those things. So you've got some stuff on heap and that is handled the way it is normally handled, but you can make the heap size very small and then put everything else into our data structure and we handle the management of that. What we've done is some very clever algorithms that take care of how we handle fragmentation and issues like that, and because we're basically doing it off-line we're not slowing the application down.
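The general technique Pandey describes can be sketched in plain Java using direct (off-heap) buffers. The class below is a minimal, hypothetical illustration and not Terracotta's implementation: it keeps values as bytes in a single `ByteBuffer.allocateDirect` region, with only a small on-heap index of offsets, so the bulk of the data is invisible to the garbage collector. For simplicity it is append-only, sidestepping the fragmentation handling that Pandey says BigMemory's "very clever algorithms" take care of.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of an off-heap key-value store. Values live in a
// direct ByteBuffer outside the GC-managed heap; only the small
// index (key -> {offset, length}) remains on heap.
public class OffHeapStore {
    private final ByteBuffer store;                        // off-heap backing memory
    private final Map<String, int[]> index = new HashMap<>();

    public OffHeapStore(int capacityBytes) {
        this.store = ByteBuffer.allocateDirect(capacityBytes);
    }

    public void put(String key, String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        int offset = store.position();
        store.put(bytes);                                  // copy value off heap
        index.put(key, new int[] { offset, bytes.length });
    }

    public String get(String key) {
        int[] loc = index.get(key);
        if (loc == null) return null;
        byte[] bytes = new byte[loc[1]];
        ByteBuffer view = store.duplicate();               // independent position
        view.position(loc[0]);
        view.get(bytes);                                   // copy back on heap to read
        return new String(bytes, StandardCharsets.UTF_8);
    }
}
```

Note the trade-off this makes explicit: reads and writes pay a serialisation/copy cost, but however large the store grows, the collector never has to trace its contents.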

Whilst the target market for BigMemory is mainly people who are not ready to build a fully distributed architecture, the product does work with the distributed cache as well as the single node.

The product is currently in beta with a GA release expected in October. Pricing information should be available nearer the release date.
