ConcurrentDictionary, .NET 4.0’s New Thread-Safe Hashtable
The ConcurrentDictionary is a cornerstone in .NET 4.0’s greatly increased emphasis on parallel and concurrent programming. But before we delve into it, we offer a review of the problems in previous versions of .NET.
The first version of the hash table in .NET was System.Collections.Hashtable. While it wasn’t thread-safe itself, you could in theory get a thread-safe wrapper simply by calling Hashtable.Synchronized. Unfortunately this wrapper wasn’t thread-safe in practice, because each operation was synchronized individually while real code usually needed to compose several operations.
Say, for example, you want to check whether a key exists in the collection and, if not, perform a non-repeatable operation whose result is stored under that key. Even though both ContainsKey and set_Item are independently thread-safe, there is no way to compose them directly. Instead you have to take a lock on the SyncRoot, negating the whole reason you asked for a synchronized version in the first place.
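The problem can be sketched like this (a minimal sketch; ExpensiveOperation is a hypothetical placeholder for the non-repeatable work):

```csharp
using System;
using System.Collections;

class Cache
{
    static readonly Hashtable table = Hashtable.Synchronized(new Hashtable());

    static object GetOrCreate(string key)
    {
        // Broken: ContainsKey and the indexer are each thread-safe,
        // but another thread can add the same key between the two calls.
        //
        // if (!table.ContainsKey(key))
        //     table[key] = ExpensiveOperation(key);

        // The only safe option is to lock SyncRoot around the whole
        // check-then-act sequence, which defeats the point of asking
        // for a synchronized wrapper in the first place.
        lock (table.SyncRoot)
        {
            if (!table.ContainsKey(key))
                table[key] = ExpensiveOperation(key);
            return table[key];
        }
    }

    static object ExpensiveOperation(string key)
    {
        return key.ToUpperInvariant();
    }
}
```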
When .NET 2.0 introduced generics and System.Collections.Generic.Dictionary, Microsoft punted on the issue. There was no synchronized wrapper at all; developers had to take explicit locks on their own.
.NET 3.5 didn’t add any new techniques, but it did make existing ones a whole lot easier to implement with its increased emphasis on functional programming. First of all, the need to define custom delegates disappeared; from that point on any well-designed API would be expected to reuse the generic Action and Func delegates. Another benefit was the introduction of lambda notation to VB and a greatly improved notation in C#. As a result, developers could easily create their own synchronized wrappers with APIs such as this:
public TValue GetOrAdd(TKey key, Func<TKey, TValue> valueFactory)
Unlike the earlier versions where developers had to muck about with locks, this method on the new ConcurrentDictionary looks like it would be hard to use incorrectly. Simply provide a key and a delegate that will be executed if the key doesn’t exist. As long as the function itself is thread-safe, everything should be atomic.
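In use, the pattern looks something like this (a minimal sketch; the key and factory are illustrative):

```csharp
using System;
using System.Collections.Concurrent;

class Example
{
    static void Main()
    {
        var cache = new ConcurrentDictionary<string, string>();

        // If "alpha" is absent, the delegate runs and its result is stored;
        // if it is already present, the stored value is returned instead.
        string value = cache.GetOrAdd("alpha", key => key.ToUpperInvariant());
        Console.WriteLine(value); // ALPHA
    }
}
```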
Well, no. To “avoid the myriad of problems that can arise from executing unknown code under a lock”, the valueFactory delegate isn’t executed under the dictionary’s locks. So there is the possibility of a race condition in which the delegate runs more than once for the same key, and developers need to ensure that the valueFactory delegate only performs repeatable operations.
If you need this functionality, you have to combine the ConcurrentDictionary class with the Lazy<T> class. An example of this is included in the AsyncCache class, which is being shipped as a sample.
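The combination can be sketched as follows (again, ExpensiveOperation is a hypothetical placeholder for the non-repeatable work):

```csharp
using System;
using System.Collections.Concurrent;

class LazyCache
{
    static readonly ConcurrentDictionary<string, Lazy<string>> cache =
        new ConcurrentDictionary<string, Lazy<string>>();

    static string GetOrCreate(string key)
    {
        // Under a race, GetOrAdd may create more than one Lazy<string>,
        // but only one of them is published in the dictionary, and
        // Lazy<T> (in its default thread-safe mode) guarantees its
        // factory runs at most once. So ExpensiveOperation executes
        // exactly once per key.
        return cache.GetOrAdd(key,
            k => new Lazy<string>(() => ExpensiveOperation(k))).Value;
    }

    static string ExpensiveOperation(string key)
    {
        return key.ToUpperInvariant();
    }
}
```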
While the implementation is subject to change, the ConcurrentDictionary class currently uses lock-free reads. To improve performance, developers can provide an estimated number of concurrent writer threads; this governs how many fine-grained locks the hash table uses.
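The estimate is passed through a constructor overload; something like this (the specific numbers are illustrative):

```csharp
using System.Collections.Concurrent;

class Example
{
    static void Main()
    {
        // concurrencyLevel: the estimated number of threads that will
        // update the dictionary concurrently, which governs the number
        // of internal locks. capacity: the initial number of elements
        // the dictionary can hold before resizing.
        var map = new ConcurrentDictionary<int, string>(
            concurrencyLevel: 8, capacity: 1000);
    }
}
```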
You can learn more about the ConcurrentDictionary from Stephen Toub’s post.