
Coordination Data Structures: New Classes for .NET Multithreading


The June release of Parallel Extensions for .NET adds a set of classes that make sharing data in multi-threaded applications easier. With ten new classes, including new synchronization primitives, futures, and new collection classes, there is only room to touch on each of them briefly.

This first batch of classes is in the System.Threading namespace.

CountdownEvent allows for coordination between an unlimited number of threads. The counter is either preset or incremented as each thread is started. As each thread completes its task, it decrements the counter. A call to CountdownEvent.Wait blocks the calling thread until the counter reaches zero. In this sense the CountdownEvent is similar to AutoResetEvent and ManualResetEvent.
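
For illustration, here is a minimal sketch of that pattern; it presets the counter to the number of workers and assumes the Signal and Wait members behave as described above.

    using System;
    using System.Threading;

    class CountdownEventExample
    {
        static void Main()
        {
            const int workerCount = 5;

            // Preset the counter to the number of worker threads.
            var countdown = new CountdownEvent(workerCount);

            for (int i = 0; i < workerCount; i++)
            {
                int id = i;
                new Thread(() =>
                {
                    Console.WriteLine("Worker {0} finished", id);
                    countdown.Signal();   // decrement the counter
                }).Start();
            }

            countdown.Wait();             // blocks until the counter reaches zero
            Console.WriteLine("All workers are done");
        }
    }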

LazyInit is essentially what many people refer to as a Future. A LazyInit object is given either a type or a delegate. If given a type, it creates a new instance using the default constructor the first time the Value property is read. If given a delegate, the delegate is invoked on first read and its result is stored and returned.

LazyInit has three modes for value creation. AllowMultipleExecution allows each thread to attempt to be the first to initialize the value, but ensures only one object is ever returned. This could result in multiple objects being created and discarded. EnsureSingleExecution guarantees that only one instance is ever created. Finally, ThreadLocal gives a different instance to each thread.
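
As a rough sketch only: the exact constructor signatures are not spelled out here, so the delegate-plus-mode overload and the LazyInitMode enum names below are assumptions based on the description above.

    using System;
    using System.Threading;

    class LazyInitExample
    {
        // Assumed API: a factory delegate plus a creation mode.
        // The delegate does not run until Value is first read.
        static LazyInit<ExpensiveObject> shared =
            new LazyInit<ExpensiveObject>(
                () => new ExpensiveObject(),
                LazyInitMode.EnsureSingleExecution);

        static void Main()
        {
            // Many threads can race to read Value; with EnsureSingleExecution
            // the factory runs exactly once and every thread sees the same instance.
            ExpensiveObject obj = shared.Value;
            Console.WriteLine(obj);
        }
    }

    class ExpensiveObject { }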

WriteOnce can be seen as an alternative to LazyInit. Like LazyInit, its value can only be set once and is immutable thereafter. Unlike LazyInit, however, the value is assigned to the WriteOnce object from the outside rather than created internally.

WriteOnce is especially useful when you want read-only semantics but cannot use the readonly modifier because you do not want to assign the value in the constructor. If you try to assign a value to a WriteOnce object more than once, it is marked as "corrupted" and can no longer be read. Using TrySetValue instead of the Value property prevents this corruption from occurring.
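
A short sketch of the idea, assuming WriteOnce<T> exposes just the Value property and TrySetValue method mentioned above:

    using System;
    using System.Threading;

    class WriteOnceExample
    {
        // A value that must be assigned exactly once, but not in a constructor.
        static WriteOnce<string> connectionString = new WriteOnce<string>();

        static void Main()
        {
            // TrySetValue returns false instead of corrupting the object
            // if the value has already been set by another thread.
            if (connectionString.TrySetValue("Server=.;Database=Orders"))
                Console.WriteLine("This thread won the race to set the value");

            Console.WriteLine(connectionString.Value);  // immutable from here on
        }
    }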

ManualResetEventSlim is a lightweight version of ManualResetEvent. Unlike the older version, it does not rely on kernel objects and is not finalizable. This should result in better performance, especially when events need to be created frequently.
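
Usage mirrors the original ManualResetEvent; a minimal sketch:

    using System;
    using System.Threading;

    class ResetEventExample
    {
        static void Main()
        {
            var gate = new ManualResetEventSlim(false);  // start unsignaled

            new Thread(() =>
            {
                gate.Wait();                             // blocks until the event is set
                Console.WriteLine("Released");
            }).Start();

            Thread.Sleep(100);
            gate.Set();                                  // release all waiting threads
        }
    }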

SemaphoreSlim, like ManualResetEventSlim, replaces a thin wrapper around a kernel object with a lightweight alternative.
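
A small sketch, limiting a section of code to three concurrent threads:

    using System;
    using System.Threading;

    class SemaphoreExample
    {
        // Allow at most three threads into the protected section at once.
        static SemaphoreSlim semaphore = new SemaphoreSlim(3);

        static void Worker(object id)
        {
            semaphore.Wait();
            try
            {
                Console.WriteLine("Thread {0} working", id);
                Thread.Sleep(50);
            }
            finally
            {
                semaphore.Release();
            }
        }

        static void Main()
        {
            for (int i = 0; i < 10; i++)
                new Thread(Worker).Start(i);
        }
    }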

Two more lightweight objects, SpinLock and SpinWait, are really only suitable for multi-core and multi-processor machines. Both leave the blocked thread active, essentially wasting CPU cycles. They are useful when the expected wait time is very short and context switches have become a bottleneck.
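
For example, a SpinLock can protect a very short critical section; the Enter/Exit shape shown here is an assumption about the exact surface of the release.

    using System;
    using System.Threading;

    class SpinLockExample
    {
        static SpinLock spinLock = new SpinLock();
        static int counter;

        static void Increment()
        {
            bool lockTaken = false;
            try
            {
                spinLock.Enter(ref lockTaken);  // spins instead of context switching
                counter++;                      // keep the protected region very short
            }
            finally
            {
                if (lockTaken) spinLock.Exit();
            }
        }

        static void Main()
        {
            var threads = new Thread[4];
            for (int i = 0; i < threads.Length; i++)
                (threads[i] = new Thread(() =>
                {
                    for (int j = 0; j < 1000; j++) Increment();
                })).Start();

            foreach (var t in threads) t.Join();
            Console.WriteLine(counter);  // 4000
        }
    }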

The second group of classes is in the System.Threading.Collections namespace.

ConcurrentQueue is a queue designed with multithreading in mind. Older queues, even when "thread-safe", required an external lock so that checking the Count property and calling Dequeue could be performed as a single atomic action. ConcurrentQueue avoids that pitfall by offering only a TryDequeue method. Since TryDequeue is safe to call without checking the count, no explicit locks are needed.
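
In practice the consumer simply loops on TryDequeue rather than checking Count; a brief sketch:

    using System;
    using System.Threading.Collections;  // namespace as given for this release

    class QueueExample
    {
        static void Main()
        {
            var queue = new ConcurrentQueue<string>();
            queue.Enqueue("first");
            queue.Enqueue("second");

            // No Count check followed by Dequeue; TryDequeue is atomic on its own
            // and simply returns false when the queue is empty.
            string item;
            while (queue.TryDequeue(out item))
                Console.WriteLine(item);
        }
    }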

ConcurrentStack works the same way, though obviously with stack semantics.

BlockingCollection, designed for multiple readers and writers, is a rather complex class with many features that one normally has to implement separately. First of all, BlockingCollection can be used on its own as a collection, or its semantics can be changed by having it wrap an IConcurrentCollection object such as ConcurrentStack or ConcurrentQueue.
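
A sketch of the two styles; the constructor overload that accepts the wrapped collection, and the Add/Take members, are assumed from the description above.

    using System;
    using System.Threading.Collections;

    class WrappingExample
    {
        static void Main()
        {
            // Used on its own, BlockingCollection acts as a plain blocking collection.
            var plain = new BlockingCollection<int>();

            // Wrapping a ConcurrentStack gives the same blocking behavior
            // with stack (LIFO) ordering instead.
            var lifo = new BlockingCollection<int>(new ConcurrentStack<int>());

            plain.Add(1);
            lifo.Add(1);
            Console.WriteLine("{0} {1}", plain.Take(), lifo.Take());
        }
    }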

Unlike most collections, BlockingCollection supports a method called GetConsumingEnumerable. This allows one to use a foreach loop or LINQ query against the collection in a thread-safe manner. Normally, destructive operations like consuming items from a queue would trigger an exception when using either construct.
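
A brief sketch of a consumer draining the collection with foreach; CompleteAdding, covered below, is what lets the loop finish.

    using System;
    using System.Threading;
    using System.Threading.Collections;

    class ConsumingExample
    {
        static void Main()
        {
            var collection = new BlockingCollection<int>();

            new Thread(() =>
            {
                for (int i = 0; i < 5; i++) collection.Add(i);
                collection.CompleteAdding();   // see below; allows the foreach to end
            }).Start();

            // Each iteration removes an item; the loop blocks while the collection
            // is empty and ends once it has been marked complete and drained.
            foreach (int item in collection.GetConsumingEnumerable())
                Console.WriteLine(item);
        }
    }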

In order to throttle your writers, a BlockingCollection can be given an upper size limit. When this limit is reached, calls that add to the collection block until space becomes available.
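
Assuming the limit is supplied through a constructor argument, a bounded collection might look like this:

    using System;
    using System.Threading;
    using System.Threading.Collections;

    class BoundedExample
    {
        static void Main()
        {
            // At most two items may be buffered at any one time (assumed constructor).
            var buffer = new BlockingCollection<int>(2);

            new Thread(() =>
            {
                Thread.Sleep(1000);
                int item;
                buffer.TryTake(out item);  // frees a slot, unblocking the writer
            }).Start();

            buffer.Add(1);
            buffer.Add(2);
            buffer.Add(3);                 // blocks until the consumer removes an item
            Console.WriteLine("All three items added");
        }
    }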

BlockingCollections also have the concept of being "complete". When you call CompleteAdding, consumers are notified that no new items will be added to the collection and that they can stop processing once the remaining items have been consumed.
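
A sketch of a polling consumer; the IsCompleted and TryTake members shown here are assumptions consistent with the behavior described above.

    using System;
    using System.Threading;
    using System.Threading.Collections;

    class CompleteAddingExample
    {
        static void Main()
        {
            var work = new BlockingCollection<string>();

            var consumer = new Thread(() =>
            {
                // IsCompleted is assumed to become true once CompleteAdding has been
                // called and every remaining item has been taken.
                while (!work.IsCompleted)
                {
                    string item;
                    if (work.TryTake(out item, 100))   // wait up to 100 ms for an item
                        Console.WriteLine("Processed {0}", item);
                }
            });
            consumer.Start();

            work.Add("report A");
            work.Add("report B");
            work.CompleteAdding();   // signal that no further items will arrive
            consumer.Join();
        }
    }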

Finally, BlockingCollections can also be used in groups. If you pass an object to AddAny along with an array of BlockingCollections, the object will be added to one of them. The documentation is still sparse, but presumably the collection with the fewest items is selected. The call blocks if all of the collections are full.

 
