
Facilitating the Spread of Knowledge and Innovation in Professional Software Development


Building a Better Thread-safe Collection


There are some fundamental problems with most thread-safe collections. While individual operations are thread-safe, the operations are usually not composable. A common sequence such as checking the count on a stack prior to popping the top item is inherently dangerous: another thread can empty the stack between the check and the pop. Some APIs, such as .NET 4's Coordination Data Structures, try to combine such actions, but that leads to clumsy methods like TryDequeue.
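The check-then-act race might be sketched as follows, using Java's ConcurrentLinkedDeque as a stand-in for a thread-safe stack (the method names are illustrative, not from any particular API):

```java
import java.util.concurrent.ConcurrentLinkedDeque;

class CheckThenAct {
    // Each call below is individually thread-safe, but the pair is not:
    // another thread may pop the last element between isEmpty() and pop(),
    // so pop() can still throw NoSuchElementException despite the guard.
    static Integer unsafeTryPop(ConcurrentLinkedDeque<Integer> stack) {
        if (!stack.isEmpty()) {       // check ...
            return stack.pop();       // ... then act: not atomic together
        }
        return null;
    }

    // The non-composable workaround the article alludes to with TryDequeue:
    // a single atomic operation that returns null instead of throwing.
    static Integer safeTryPop(ConcurrentLinkedDeque<Integer> stack) {
        return stack.pollFirst();
    }
}
```

The `safeTryPop` variant is safe precisely because it collapses check and act into one operation, which is why such combined methods proliferate in lock-free APIs.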

Another attempt was seen in .NET 1's collections. Rather than making locking internal, they exposed it via the SyncRoot property. While SyncRoot remains the default name for synchronization objects, the SyncRoot/wrapper design pattern was dropped in .NET 2.

So how does one create composable APIs that are actually usable? Jared Parsons proposes that you don't expose the API directly. Instead, you expose all the methods via a temporary object that is only available while you hold a lock on the collection. This temporary object is the “key” to the collection, and only a holder of the key can get at its contents.

Here is an example of Jared Parsons’ thread-safe queue.

static void Example1(ThreadSafeQueue<int> queue) {
    using (var locked = queue.Lock()) {
        if (locked.Count > 0) {
            var first = locked.Dequeue();
        }
    }
}

The object named locked isn't thread-safe itself; developers are expected to do the right thing and only use it within a “using” block. But as long as they obey this simple rule, all the operations inside the block are safe. Jared expands on this:

As with most thread-safe designs, there are ways in which this code can be used incorrectly:

  1. Using an instance of ILockedQueue<T> after it’s been disposed.  This is already considered taboo, though, and we can rely on existing user knowledge to help alleviate this problem.  Additionally, static analysis tools, such as FxCop, will flag this as an error. With a bit more rigor this can also be prevented: simply add a disposed flag and check it on entry into every method.
  2. It’s possible for the user to maintain values, such as Count, between calls to Lock and use it to make an incorrect assumption about the state of the list. 
  3. If the user fails to dispose the ILockedQueue<T> instance it will be forever locked.  Luckily FxCop will also flag this as an error since it’s an IDisposable.  It’s not a foolproof mechanism though.
  4. There is nothing that explicitly says to the user “please only use ILockedQueue<T> for a very short time”.  IDisposable conveys this message to a point but it’s certainly not perfect.
  5. The actual ILockedQueue<T> implementation is not thread safe.  Ideally users won’t pass instances of IDisposable between threads but it is something to think about.
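The pattern, including the disposed-flag check from point 1, might be sketched in Java as follows. This is a hypothetical rendition, not Jared Parsons' actual implementation: try-with-resources stands in for C#'s using block, a ReentrantLock for the internal lock, and all names are invented for illustration.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.ReentrantLock;

// lock() acquires the queue's lock and hands back a disposable "key".
// All queue operations live on the key, so they can only be reached while
// the lock is held; each one checks the disposed flag from point 1 above.
class ThreadSafeQueue<T> {
    private final Deque<T> items = new ArrayDeque<>();
    private final ReentrantLock lock = new ReentrantLock();

    public LockedQueue lock() {
        lock.lock();                       // held until the key is closed
        return new LockedQueue();
    }

    public final class LockedQueue implements AutoCloseable {
        private boolean disposed;

        private void check() {
            if (disposed)
                throw new IllegalStateException("key used after dispose");
        }

        public int count()       { check(); return items.size(); }
        public void enqueue(T t) { check(); items.addLast(t); }
        public T dequeue()       { check(); return items.removeFirst(); }

        @Override
        public void close() {              // the Dispose of the C# version
            if (!disposed) {
                disposed = true;
                lock.unlock();
            }
        }
    }
}
```

A caller uses it just like the C# example, with try-with-resources guaranteeing the unlock: `try (var locked = queue.lock()) { ... }`. Note that failing to close the key leaves the queue locked forever, exactly the hazard point 3 describes.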


Community comments

  • Transactional Memory could solve these problems (and introduce new ones ;))

    by Peter Veentjer,


    Non-composable operations are a typical problem with lock-based approaches. With Transactional Memory you don't have this problem (depending on the STM implementation), so you can compose operations:


    atomic {
        String item = fromQueue.pop();
        toQueue.push(item);
    }



    Depending on the implementation, you also get atomicity. So if the pop from the fromQueue succeeds but the push onto the second queue doesn't, the item remains on the fromQueue.

    Another problem that can be fixed with STMs is listening to multiple 'structures'.


    atomic {
        if (!queue1.isEmpty())
            return queue1.pop();
        if (!queue2.isEmpty())
            return queue2.pop();
        retry();
    }


    In this case a thread blocks if both queues are empty and wakes up as soon as an item is added to one of the queues. Try to realize that with normal concurrent structures without exposing the concurrency internals.

    The previous example could also be written with an orElse:



    atomic {
        {
            return queue1.pop();
        } orElse {
            return queue2.pop();
        }
    }
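For contrast, the blocking behavior of the retry() example can be hand-rolled with ordinary monitors. The Java sketch below (all names invented for illustration) needs both queues to share one lock plus an explicit wait/notify protocol — exactly the concurrency internals the commenter says STM keeps hidden:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hand-rolled equivalent of the retry() example: both queues share a single
// monitor, and a consumer waits on it until either queue has an item.
class TwoQueues<T> {
    private final Object monitor = new Object();
    private final Deque<T> queue1 = new ArrayDeque<>();
    private final Deque<T> queue2 = new ArrayDeque<>();

    public void put1(T item) {
        synchronized (monitor) { queue1.addLast(item); monitor.notifyAll(); }
    }

    public void put2(T item) {
        synchronized (monitor) { queue2.addLast(item); monitor.notifyAll(); }
    }

    // Blocks until one queue is non-empty, preferring queue1 as the
    // original example does.
    public T takeEither() throws InterruptedException {
        synchronized (monitor) {
            while (queue1.isEmpty() && queue2.isEmpty()) {
                monitor.wait();            // the manual version of retry()
            }
            return !queue1.isEmpty() ? queue1.removeFirst()
                                     : queue2.removeFirst();
        }
    }
}
```

Note how the waiting logic is welded to this particular pair of queues; adding a third queue, or composing with some other structure, means rewriting the monitor protocol, whereas the STM version composes for free.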

  • Very clever

    by Cameron Purdy,


    It's not often that someone comes up with something clever out of stuff that's been around a long time. I'd be curious to see how efficiently this can be implemented, and if the implementation is still compatible with the non-composable forms of the same.

    Peace,

    Cameron Purdy
    Oracle Coherence: Data Grid for Java, .NET and C++

