Resin Can Now Act As a Drop-in Replacement for Memcached Solutions

With the release of version 4.0.24, the cache for Caucho's Resin Application Server now provides a memcached interface for both the client and the server. This means that Resin can function as a drop-in replacement for memcached solutions. When Resin is used as both the client and the server, adding or removing cache servers automatically rescales the cluster and updates the client. Resin's implementation is written in Java, with some JNI performance enhancements.

Memcached is a general-purpose distributed memory caching system, originally written in C, that was initially developed by Danga Interactive for LiveJournal, but is now used by many other sites including YouTube, Reddit, Facebook, and Twitter. The system uses a client-server architecture, where the servers maintain a key-value associative array; the clients populate this array and query it. Keys are up to 250 bytes long and values can be, at most, 1 megabyte in size. In the standard memcached implementation, servers keep the values in RAM; if a server runs out of RAM, it discards the oldest values. Resin's approach, though, is slightly different. InfoQ spoke to Scott Ferguson, the lead architect for Resin's memcached implementation, to find out more about it.
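Because the servers speak a published wire protocol, any memcached client can populate and query this key-value store. The following sketch illustrates the basic set/get interaction; it assumes the open-source spymemcached client library, and the host, port, and key names are placeholders:

import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class MemcachedExample {
    public static void main(String[] args) throws Exception {
        // Connect to any memcached-compatible server (host and port are placeholders)
        MemcachedClient client =
            new MemcachedClient(new InetSocketAddress("localhost", 11211));

        // Store a value under a key; an expiry of 0 means the entry never expires
        client.set("greeting", 0, "Hello, memcached").get();

        // Read the value back from the cache
        String value = (String) client.get("greeting");
        System.out.println(value);

        client.shutdown();
    }
}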

InfoQ: How do you ensure compatibility with memcached? Also what, if any, memcached features do you not support?

Memcached is a published wire protocol. Resin memcached implements the memcached protocol. 

We tested it against memcached clients. Currently we support all memcached features. This means Resin can function as a drop-in replacement for memcached solutions. Resin's memcached support is durable and fully elastic, allowing for the addition or removal of nodes, as needed, without getting extra hits to your database. 

InfoQ: How does the performance compare with the standard memcached library?

We have not published any official benchmarks. Our internal benchmarks show that for some use cases it is a bit faster, and for other use cases it is within 5%. Our implementation is fast. If you are running a read-dominated cache, our implementation should be equivalent to memcached proper, with the added benefit of elasticity and persistence.

InfoQ: In your blog post you state that "Data is automatically persisted and replicated between Triad hub servers. The size of the cache is limited only by the size of hard disks on the host OS". Doesn't persisting to disk change the performance characteristics considerably?

Yes. If the value is served out of memory the speed is equivalent; if the value is served from disk it is slower. If you size your RAM large enough to hold all values then the performance will be on par with memcached. If you do not, then the performance will still be a lot faster than hitting a traditional database, but more likely on par with Membase. If you consider that the hot part of the cache will likely be in memory, then the overhead could be very low. It really depends on the use case and how you set things up. You can get performance as good as memcached. You are not limited to RAM for storage, and elasticity is built in, so you don't get a lot of hits to your RDBMS when you add a new Resin memcached node. This is the major advantage. Resin's memcached is more elastic and easier to set up for cloud deployments. It also ships with Resin, and is easy to turn on and use.

InfoQ: But isn't this also a change in behaviour? If I've coded to expect the memcached behaviour, I won't remove old items from the cache; I'll just assume they'll get thrown away. So won't I end up with an ever-growing cache?

No. Cache items can expire. You can set up an expiry just like you can with memcached. The actual eviction might happen later, but the behavior is the same. We also provide a JCache interface, so you can have the characteristics of an infinite cache or fine-grained control and configuration of caches, as per JCache (JSR 107, which we track closely). Also, if you use JCache, we have implemented a GC-less in-memory cache; if the item is in this first-level cache then the performance can be 1000x faster than going to the memcached tier. This is an easy setup that provides in-memory speeds backed by a memcached tier.
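To make the expiry behaviour concrete, the short sketch below uses the same spymemcached client as the earlier example (the key and value are placeholders) to store an entry that any memcached-compatible server, including Resin, will treat as expired after 300 seconds:

import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class ExpiryExample {
    public static void main(String[] args) throws Exception {
        MemcachedClient client =
            new MemcachedClient(new InetSocketAddress("localhost", 11211));

        // The second argument is the expiry in seconds; the entry becomes
        // eligible for eviction 300 seconds after it is stored
        client.set("session-token", 300, "abc123").get();

        client.shutdown();
    }
}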

As Scott Ferguson noted above, the client API for Resin's memcached is based on JSR 107, JCache (javax.cache), a proposed Java interface standard for memory caches. The configured cache can be injected like any CDI object:

import javax.inject.*;
import javax.cache.*;

public class MyBean {
    @Inject Cache _memcache;
}
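Once injected, the cache is used through the javax.cache API. The following is a minimal sketch, assuming the draft JSR 107 Cache interface with get and put; the class, method, and key names are illustrative, and loadFromDatabase is a hypothetical helper standing in for an expensive lookup:

import javax.inject.*;
import javax.cache.*;

public class MyService {
    @Inject Cache _memcache;

    public Object lookup(String key) {
        // Check the cache first; on a miss, load the value and cache it
        Object value = _memcache.get(key);
        if (value == null) {
            value = loadFromDatabase(key);
            _memcache.put(key, value);
        }
        return value;
    }

    private Object loadFromDatabase(String key) {
        // Hypothetical stand-in for a database or service call
        return "value-for-" + key;
    }
}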

To configure Resin as a memcached server, you add the following <listen> element to the cache tier's cluster configuration:

<resin xmlns="http://caucho.com/ns/resin"
    xmlns:memcache="urn:java:com.caucho.memcache">

  <cluster id="cache_tier">

    <server-multi id-prefix="cache-"
                  address-list="${cache_tier}"
                  port="6820">

      <listen port="${rvar('memcache_port')?:11212}"
              keepalive-timeout="600s" socket-timeout="600s">
        <memcache:MemcacheProtocol/>
      </listen>
    </server-multi>
  </cluster>
</resin>
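With this configuration in place, existing memcached clients, such as the one sketched earlier, simply point at the cache tier's configured port (11212 by default here) instead of a standalone memcached instance.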
