
Clojure 1.1 Adds Transients, Chunked Sequences for Efficiency


Clojure 1.1 RC1 is out - now's the time to give it a spin and report any issues before the final release. The work is done on the Clojure 1.1 branch on GitHub, and a binary package of Clojure 1.1 RC1 is available at Google Code. There is also a continuous integration server watching and building Clojure branches.
The Changes file for 1.1 lists the differences from Clojure 1.0, such as the issues closed before 1.1. There are also a few big new features that make it possible to optimize the performance of Clojure programs.

Transients can significantly improve performance when building up persistent data structures. Persistent data structures are an important element of Clojure, e.g. as the concept behind Clojure's Vectors, Maps, and Sets (see Clojure creator Rich Hickey's talk on persistent data structures). In short, persistent data structures are immutable; the only way to modify them, by insertion or removal, is to create a new copy of the data structure. But there's a trick: the internal organization of the persistent data structure, together with the fact that all the items are immutable, makes it possible to share all of the data and most of the structure, so creating the copy incurs only a small overhead.

Despite the small overhead for copies, there are situations when it's necessary to insert a lot of elements into a data structure. Transients are the solution. In short, the idea is to turn a persistent data structure into a transient before a series of modifications; calling transient achieves that. The transient version of the data structure supports the same access functions as the persistent version, but for modification it's necessary to use different functions suffixed with a "!", e.g. conj! (instead of conj).
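The pattern looks roughly like this; build-vector is an illustrative name, not something from the release (and note that into in Clojure 1.1 already uses transients internally for exactly this kind of bulk construction):

```clojure
;; Build up a vector via its transient, then freeze it again.
;; conj! returns the transient to use for the next step; its
;; return value must be threaded through, just like with conj.
(defn build-vector [n]
  (loop [v (transient []) i 0]
    (if (< i n)
      (recur (conj! v i) (inc i))
      (persistent! v))))

(build-vector 5) ;; => [0 1 2 3 4]
```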

One way to think of the relationship between persistent data structures and their transient versions is the relationship between a java.lang.String and a java.lang.StringBuilder; one is immutable and changes to it cause the creation of a new copy, the other allows for in-place modification.
That's where the similarities stop, however. First off, creating a StringBuilder means copying the contents of the String, an O(n) operation. Turning a persistent data structure into its equivalent transient is cheap: an O(1) operation. It's done by creating a new object for the transient, which contains a new root object for the data structure, with a flag that marks it as a transient; no data copying happens. Once the work is done and the transient data structure needs to become persistent again, a similar O(1) operation makes it happen.

But why is it even necessary to turn the transient data structure back into a persistent one? Wouldn't it be possible to use the transient indefinitely? No - transient data structures have one important restriction: they can only be used from one thread. The reason is simple: since the transient is mutable, using it from different threads would be dangerous and require synchronization. Persistent data structures make it simple to share data structures across threads; transients allow one thread to modify a data structure in place - and then make it accessible to other threads by turning it back into a persistent data structure.
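A small sketch of the round trip, assuming Clojure 1.1 semantics: persistent! hands back an ordinary persistent structure, and the transient it came from must not be used afterwards.

```clojure
;; Turn a persistent vector transient, then freeze it again.
(def t (transient [1 2 3]))
(def p (persistent! t))

p ;; => [1 2 3], an ordinary persistent vector, safe to share across threads

;; Using t after persistent! is an error; in Clojure 1.1
;; further calls such as (conj! t 4) throw an exception.
```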

Chunked Sequences are another optimization in Clojure 1.1. For a quick overview see slides from Rich Hickey's talk on chunked sequences (PDF Warning).
The basic idea behind chunked sequences is to cut down on the overhead introduced by (lazy) sequences.
Lazy sequences are used throughout Clojure and allow work to be delayed until it's absolutely necessary. In some cases most of the work is never done, e.g. in this code:

(take 10 (range 1 1000000000000))

range creates a lazy sequence that generates all the numbers in the given interval; take asks the sequence 10 times for the next element. With the lazy approach, only the 10 requested numbers are calculated.
The implementation uses the lazy-seq macro (in core.clj), which allows the code to be written concisely. The problem: accessing the next element in a lazy sequence has some management overhead. Chunked sequences aim to reduce that overhead by generating chunks of elements at a time and buffering the values; the chunk size is 32, which means the per-step overhead is only paid every 32 elements.
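A toy version of range makes the mechanism concrete; my-range is an illustrative name, not the actual core.clj implementation:

```clojure
;; A simplified range built on lazy-seq: the body only runs
;; when the next element is actually requested, so asking for
;; 5 elements of a huge interval computes only those 5.
(defn my-range [start end]
  (lazy-seq
    (when (< start end)
      (cons start (my-range (inc start) end)))))

(take 5 (my-range 1 1000000000000)) ;; => (1 2 3 4 5)
```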

Another way to optimize sequences with chunked sequences is to exploit the organization of a data structure. A persistent vector, for instance, is organized as a tree where 32-element arrays carry the data. To access an element, it's necessary to walk the tree to find the array that holds the element. A primitive sequence that always used an index to access the next element would incur the cost of walking the tree at every access. The chunked sequence implementation of the persistent vector avoids this: it finds the 32-element array where the sequence starts and can then serve each of these 32 elements quickly with a simple array access; walking the tree is only necessary after all 32 elements have been accessed and the next tree node needs to be fetched.
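Whether a given seq is chunked can be checked with chunked-seq?; this sketch assumes Clojure 1.1 behavior, where a vector's seq is chunked while a list's is not:

```clojure
;; Vectors hand out chunked seqs backed by their 32-element arrays...
(chunked-seq? (seq [1 2 3]))   ;; => true

;; ...while lists are already linked cells and stay unchunked.
(chunked-seq? (seq '(1 2 3)))  ;; => false
```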

It remains to be seen how chunked sequences are received; they obviously have advantages, but as the Clojure 1.1 Changes file points out:

Consumption of chunked-seqs as normal seqs should be completely transparent. However, note that some sequence processing will occur up to 32 elements at a time. This could matter to you if you are relying on full laziness to preclude the generation of any non-consumed results. An interface to force single-item consumption of chunked seqs is still being designed. Please share any use cases where chunked processing has resulted in behavioral differences that matter to you on the Clojure google group.
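A side effect makes that 32-at-a-time granularity visible; this sketch assumes a chunked source such as a vector:

```clojure
;; Count how many elements map actually realizes when we
;; only ask for the first one.
(def realized (atom 0))

(first (map #(do (swap! realized inc) %) (vec (range 100))))

;; Taking a single element still realized the whole first chunk:
@realized ;; => 32
```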

There are more features in Clojure 1.1; for more information see the Clojure 1.1 Changes file.

For more information on Clojure check the InfoQ Clojure tag. Rich Hickey's talks are highly recommended, e.g. Persistent Data Structures and Managed References. InfoQ also features a video interview with Rich Hickey taped at QCon London 2009.
