Eight Isolation Levels Every Web Developer Should Know
The ACID properties are one of the cornerstones of database theory. ACID defines four properties that must be present for a database to be considered reliable: Atomicity, Consistency, Isolation, and Durability. While all four properties are important, isolation in particular is interpreted with the most flexibility. Most databases provide a number of isolation levels to choose from, and many libraries today add additional layers that create even more fine-grained degrees of isolation. The main reason for this wide range of isolation levels is that relaxing isolation can often result in scalability and performance increases of several orders of magnitude.
Serializable consistency is one of the oldest and highest isolation levels that is generally available, and many choose it due to the simple programming model it provides - only one transaction can execute at a time against a given resource, and many potential sources of problems are removed. However, most applications (particularly web applications) cannot assume this very high level of isolation because it is impractical from the end user perspective - any application with a non-trivial number of users would quickly experience delays of several minutes accessing shared resources, which would rapidly reduce the number of users of that application back to a trivial number. Weak and eventual consistency are common in large distributed data sources such as the Web, and several very large and successful web-based applications (e.g. eBay and Amazon) have shown that optimistic weak consistency is much more scalable than traditional pessimistic mechanisms. This article takes a look at eight different isolation levels that you can use to potentially gain more performance and scalability in your applications by learning to relax data consistency constraints.
The main goal of concurrency control is to ensure that transactions are isolated and do not interfere with one another. Higher degrees of isolation are achieved at the expense of potential performance gains. Concurrency control is implemented with either a pessimistic or an optimistic mechanism. Most relational databases, which are write-optimised, use a pessimistic mechanism: locks are acquired on a table, page, or row as it is modified, blocking other transactions from accessing the potentially modified resource until the lock is released. Optimistic mechanisms, in contrast, use no locks and rely solely on conflict detection to maintain transaction isolation: all read operations are permitted, and consistency is verified at the end of the transaction. If a conflict is detected, the transaction is rolled back or repeated. Most web servers are read-optimised and thus use an optimistic mechanism. By permitting all read operations, optimistic mechanisms can achieve higher read and write throughput while still preserving data consistency, as long as resources are not continually changing.
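To make the optimistic cycle concrete, here is a minimal sketch of read-validate-commit conflict detection. The `Record` and `OptimisticTxn` names are illustrative and not tied to any real database:

```python
# Minimal sketch of optimistic conflict detection: reads are never
# blocked; each record carries a version that is re-checked at commit.

class Record:
    def __init__(self, value):
        self.value = value
        self.version = 0

class OptimisticTxn:
    def __init__(self):
        self.reads = {}   # record -> version seen when first read
        self.writes = {}  # record -> pending new value

    def read(self, record):
        # Reads always succeed; remember the version we saw.
        self.reads.setdefault(record, record.version)
        return self.writes.get(record, record.value)

    def write(self, record, value):
        self.reads.setdefault(record, record.version)
        self.writes[record] = value  # buffered until commit

    def commit(self):
        # Validation: if any record changed since we read it, conflict.
        if any(rec.version != seen for rec, seen in self.reads.items()):
            return False  # caller rolls back or retries
        for rec, value in self.writes.items():
            rec.value = value
            rec.version += 1
        return True
```

A caller that receives `False` from `commit` would roll its work back or retry the whole transaction, exactly the "rolled back or repeated" behaviour described above.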
The isolation levels listed below are here to help Web developers better understand the constraints placed on their programming models, and to engage system architects and developers in discussions to choose the most efficient isolation levels while maintaining necessary data consistency. They are listed from the least isolated (Read Uncommitted) to the most isolated (Serializability).
1 Read Uncommitted
The read uncommitted isolation level requires very little isolation between transactions. Every read operation may see pending write operations from any transaction (dirty reads), but committed write operations must still have a serial order to prevent dirty writes. A pessimistic mechanism will block conflicting write operations until others are committed or rolled back; an optimistic mechanism will not lock anything and will allow every operation to proceed. If a connection is rolled back, all other connections that made subsequent modifications to the same data must also be rolled back. Shared caches are permitted in this level without validation. This isolation level is best used when transactions are not needed (such as on a read-only dataset) or when the data is modified with exclusive access to the database.
Example: An archive database that is only updated while offline, or an audit/logging table that is not used within a transaction
2 Read Committed
A read committed transaction may read any committed state of the system, and results may be cached without validation (mixed states), as long as changes made on the current connection are reflected in its own results. Pessimistic mechanisms implement this level as a monotonic view. An optimistic implementation of read committed delays writing all changes until the transaction is committed: transactions store all changes in isolation, making them available only to themselves until commit. This form of optimistic isolation permits complicated write operations without blocking read operations and has no validation scheme. Shared caches are permitted, but only for committed states. This isolation level is best used when older values are permitted in results and transactions are used only for write operations.
Example: An online forum, where the absolute latest postings need not be shown and posts don't conflict with each other
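The optimistic form of read committed described above can be sketched as follows; the class name and the dict-backed store are illustrative assumptions, not a real database API:

```python
# Sketch of optimistic read-committed: writes are buffered per
# transaction and applied at commit with no validation at all,
# so the last committer simply wins (no conflict detection).

class ReadCommittedTxn:
    def __init__(self, db):
        self.db = db          # shared dict of committed state
        self.pending = {}     # this transaction's uncommitted writes

    def read(self, key):
        # Own pending writes are visible; otherwise only committed state.
        if key in self.pending:
            return self.pending[key]
        return self.db.get(key)

    def write(self, key, value):
        self.pending[key] = value  # invisible to other transactions

    def commit(self):
        self.db.update(self.pending)  # no validation: last write wins
        self.pending.clear()
```

Note how another transaction never observes a pending write, yet the writer always sees its own changes, matching the visibility rules above.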
3 Monotonic View
Monotonic view is an extension of read committed in which a transaction observes a monotonically increasing committed state of the database as it executes. In this level, a pessimistic transaction may be blocked during read operations if there is an outstanding write transaction. Optimistic transactions behave as in read committed, keeping their changes in isolation, but must validate their cache to ensure it is still current. Periodically synchronized database clones are permitted in this level. This isolation level is best used when transactions are not needed or when transactions contain only write operations.
Example: A user preferences table that is modified by only one person
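One hypothetical way to enforce a monotonically increasing view on the client side is to track the newest replica timestamp seen so far and reject older answers. Everything here, including the `MonotonicClient` name, is an illustrative sketch rather than a real API:

```python
# Sketch of monotonic view enforced by the reader: results carry the
# sync timestamp of the replica that produced them, and the client
# rejects any answer older than one it has already seen, so its view
# of the database only moves forward.

class MonotonicClient:
    def __init__(self):
        self.high_water = 0   # newest replica timestamp seen so far

    def accept(self, replica_ts, result):
        if replica_ts < self.high_water:
            return None       # stale replica: retry against another
        self.high_water = replica_ts
        return result
```

This is the pattern that makes periodically synchronized clones safe at this level: a clone may lag, but a client never travels backwards in time.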
4 Snapshot Reads
Snapshot reads extends monotonic view and guarantees that query results reflect a consistent snapshot of the database. A pessimistic mechanism will block other write operations from affecting the results while they are being read. An optimistic mechanism will allow other write operations and inform the reading transaction if any of the results have changed, possibly rolling it back. To implement an optimistic mechanism, a validation must be performed at the end of the read operation to detect whether any concurrent write operations modified the result; if so, the read may be repeated or the transaction rolled back. This validation may simply check whether write operations occurred in the same table, or it might check the query results for the changes. This optimistic isolation level can detect conflicts easily and favours write operations, while still permitting concurrent read operations. This level permits periodically synchronized database clones so long as they provide snapshot reads. It is best used when write operations are infrequent or not expected to conflict with concurrent read operations, and when query results need to be consistent.
Example: A currency conversion or lookup table that is queried more often than it is modified and in which only the newest values are kept
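The end-of-read validation described above can be sketched with a coarse table-level version counter; this is one of the two validation strategies mentioned (per-table rather than per-result), and the `Table` and `snapshot_read` names are illustrative:

```python
# Sketch of optimistic snapshot reads: run the query, then verify a
# coarse table-level version is unchanged; if not, retry the read.

class Table:
    def __init__(self, rows):
        self.rows = dict(rows)
        self.version = 0

    def update(self, key, value):
        self.rows[key] = value
        self.version += 1      # any write invalidates in-flight reads

def snapshot_read(table, keys, max_retries=3):
    for _ in range(max_retries):
        seen = table.version
        result = {k: table.rows[k] for k in keys}
        # Validation: if a concurrent write bumped the version while
        # we were reading, the result may mix states -- read again.
        if table.version == seen:
            return result
    raise RuntimeError("snapshot read kept conflicting; giving up")
```

Checking one table-wide counter is cheap but pessimistic about relevance: a write to an unrelated row still forces a retry, which is why a real system might track versions per row or re-check the query results themselves.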
5 Cursor Stability
Cursor stability isolation extends read committed and is the default isolation level of many relational databases. In this isolation level, a pessimistic transaction must indicate which records it intends to modify when reading them, if the read is done in a separate statement. This is often done by appending the 'FOR UPDATE' keywords to a 'SELECT' query. In this case, other conflicting pessimistic read or write transactions will be blocked until the transaction is finished. An optimistic transaction tracks the version number of all modified records/entities and verifies them when committed. This is the most popular optimistic isolation level and is provided by all major object-relational mapping libraries. In the Java Persistence API, this level can be closely approximated using FlushModeType.COMMIT (although queries may not reflect local changes); if a conflict is detected, an OptimisticLockException is thrown. This isolation can also be used with the HTTP headers If-Match or If-Unmodified-Since, which compare a resource's previous version or time-stamp before updating. This level of isolation is best used for entities that are modified based on external information (not read from the database) and whose changes must not overwrite each other.
Example: A shared company directory or a wiki
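The version check behind optimistic cursor stability, and behind the If-Match header, can be sketched like this; the `Directory` class and its methods are hypothetical:

```python
# Sketch of the version check behind cursor stability: an update
# carries the version the client last saw (like HTTP If-Match) and
# is rejected if the stored record has moved on since then.

class VersionConflict(Exception):
    """Raised when the expected version no longer matches (cf. HTTP 412)."""

class Directory:
    def __init__(self):
        self.entries = {}   # key -> (value, version)

    def put(self, key, value):
        self.entries[key] = (value, 0)

    def get(self, key):
        return self.entries[key]        # returns (value, version)

    def update(self, key, value, expected_version):
        _, current = self.entries[key]
        if current != expected_version:
            # Someone else updated the record since the client read it.
            raise VersionConflict(f"{key}: at {current}, expected {expected_version}")
        self.entries[key] = (value, current + 1)
```

A web service exposing this pattern would return the version as an ETag and map `VersionConflict` to a 412 Precondition Failed response.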
6 Repeatable Read
Repeatable Read isolation extends cursor stability and guarantees that any data read within the transaction will not be modified or removed during the transaction. A pessimistic transaction will acquire read locks on all records and block other transactions from modifying them. An optimistic transaction will track all records or entities and verify they have not been modified when committed. This level of isolation is best used when entity states can affect other entities and transactions are made up of read and write operations.
Example: An order-tracking database, where values are read from one entity and used to compute values for other entities.
7 Snapshot Isolation
Snapshot isolation extends snapshot reads and repeatable read and guarantees that all read operations made in a transaction will see a consistent snapshot of the database. Any read operation performed in a transaction will have the same result regardless of whether it is performed earlier or later in the transaction. This differs from repeatable read isolation in that it also prevents phantom reads (range query results changing). This level is supported by many relational databases in the form of multi-version concurrency control (sometimes labelled SERIALIZABLE), which is pessimistically implemented using a combination of locks and conflict detection. In this level, transactions must be prepared to be rolled back due to conflicts under either a pessimistic or an optimistic mechanism. A pessimistic mechanism will try to reduce the chances of a conflict by locking resources, but must merge changes when transactions are committed. An optimistic mechanism may also use multi-version concurrency control, but it will not block other transactions from engaging in potentially conflicting operations; instead it rolls back transactions that are found to conflict. This level of isolation is best used for transactions that read and modify multiple records.
Example: A workflow system, with rules based on the state of the system.
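A minimal sketch of multi-version snapshot isolation follows, assuming a global commit counter and a first-committer-wins rule for write-write conflicts (both common choices, though real MVCC implementations differ considerably):

```python
# Sketch of multi-version snapshot isolation: every commit appends a
# new version stamped with a global counter; a transaction reads the
# latest version at or before its start, and commit fails if anything
# it wrote was committed by someone else in the meantime.

class MVStore:
    def __init__(self):
        self.clock = 0
        self.versions = {}   # key -> list of (commit_ts, value)

    def begin(self):
        return SnapshotTxn(self)

class SnapshotTxn:
    def __init__(self, store):
        self.store = store
        self.start_ts = store.clock   # snapshot taken at begin
        self.writes = {}

    def read(self, key):
        if key in self.writes:
            return self.writes[key]   # own writes are visible
        # Latest version committed at or before our snapshot.
        for ts, value in reversed(self.store.versions.get(key, [])):
            if ts <= self.start_ts:
                return value
        return None

    def write(self, key, value):
        self.writes[key] = value

    def commit(self):
        # Write-write conflict: someone committed our key after we began.
        for key in self.writes:
            versions = self.store.versions.get(key, [])
            if versions and versions[-1][0] > self.start_ts:
                return False          # first committer wins; roll back
        self.store.clock += 1
        ts = self.store.clock
        for key, value in self.writes.items():
            self.store.versions.setdefault(key, []).append((ts, value))
        return True
```

Reads never block and never fail here; only writers pay for conflicts, which is the trade-off snapshot isolation is known for.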
8 Serializability
Serializability is an extension of snapshot isolation that specifies all transactions must occur as if they had executed serially, one after the other. A pessimistic mechanism acquires range locks for all evaluated queries, preventing write operations from affecting their results. An optimistic mechanism tracks all evaluated queries and, at the end of the transaction, uses either a backwards or a forwards validation scheme to detect whether any concurrent write operations affect concurrent read operations; if so, all but one of the conflicting transactions are rolled back. In this isolation level, the state of the system observed by a committed transaction appears not to have changed while it executed. This level of isolation is used for transactions that require complete data consistency.
Example: An accounting system that performs range queries to compute new values.
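Backwards validation can be sketched by recording each transaction's query predicates and re-checking them at commit against the keys written by transactions that committed in the meantime. The `SerialValidator` structure below is purely illustrative:

```python
# Sketch of backwards validation for serializability: a transaction
# records the predicates of its range queries; at commit, those
# predicates are re-checked against keys written by transactions that
# committed after it began, and any overlap forces a rollback.

class SerialValidator:
    def __init__(self):
        self.committed_writes = []   # list of (commit_seq, keys_written)
        self.seq = 0

    def begin(self):
        return {"start": self.seq, "predicates": [], "writes": {}}

    def query(self, txn, data, predicate):
        txn["predicates"].append(predicate)  # remember the range query
        return {k: v for k, v in data.items() if predicate(k)}

    def commit(self, txn, data):
        # Backwards validation: did any later committer touch a key
        # that one of our predicates would have matched (a phantom)?
        for seq, keys in self.committed_writes:
            if seq > txn["start"]:
                if any(p(k) for k in keys for p in txn["predicates"]):
                    return False     # conflict: roll this one back
        data.update(txn["writes"])
        self.seq += 1
        self.committed_writes.append((self.seq, set(txn["writes"])))
        return True
```

Because the predicate itself is re-evaluated against later writes, this catches the phantom that snapshot isolation alone would miss: a row inserted into a range another transaction has already summed.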
Below is a summary of the isolation levels outlined in this article, to help you find the level that is most appropriate for your application.
Types of possible collisions between transactions in different isolation levels:
| Isolation level | Dirty writes | Dirty reads | Mixed states | Inconsistent reads | Overwrites | Non-repeatable reads | Phantom reads | Inconsistency |
|---|---|---|---|---|---|---|---|---|
| Read Uncommitted | Not permitted | Permitted | Permitted | Permitted | Permitted | Permitted | Permitted | Permitted |
| Read Committed | Not permitted | Not permitted | Permitted | Permitted | Permitted | Permitted | Permitted | Permitted |
| Monotonic View | Not permitted | Not permitted | Not permitted | Permitted | Permitted | Permitted | Permitted | Permitted |
| Snapshot Reads | Not permitted | Not permitted | Not permitted | Not permitted | Permitted | Permitted | Permitted | Permitted |
| Cursor Stability | Not permitted | Not permitted | Permitted | Permitted | Not permitted | Permitted | Permitted | Permitted |
| Repeatable Reads | Not permitted | Not permitted | Permitted | Permitted | Not permitted | Not permitted | Permitted | Permitted |
| Snapshot Isolation | Not permitted | Not permitted | Not permitted | Not permitted | Not permitted | Not permitted | Not permitted | Permitted |
| Serializability | Not permitted | Not permitted | Not permitted | Not permitted | Not permitted | Not permitted | Not permitted | Not permitted |
Optimistic requirements for different isolation levels:
| Isolation level | Cache | Data sync | Optimistic conflict scheme | Suggested operations | Example |
|---|---|---|---|---|---|
| Read Uncommitted | Cache permitted | Sporadic | Detect dirty writes | No concurrent read and write | Archive |
| Read Committed | Cache permitted | Sporadic | No conflict detection | Monotonic read/write | Web forum |
| Monotonic View | Must be validated | Periodic | No conflict detection | Combined reads | User preferences |
| Snapshot Reads | Must be validated | Periodic | Compare modifications to reads | Consistent reads | Lookup table |
| Cursor Stability | Cache permitted | Sporadic | Compare modified entity versions | CRUD services | Directory |
| Repeatable Reads | Cache permitted | Sporadic | Compare read entity versions | Read/write entities | Order tracking |
| Snapshot Isolation | Must be validated | Periodic | Compare read entity versions | Synchronized entities | Work-flow |
| Serializability | Must be validated | Full sync | Compare queries with modifications | Complete data consistency | Accounting |
Data consistency is vital in database applications -- it allows developers to make sense of data within a concurrent environment. Although strong consistency levels such as serializability provide a simple programming model, they can cause excess overhead, blocked operations, or transaction rollbacks, and may be unnecessary for many applications. Being aware of other, potentially more appropriate isolation levels helps developers and system architects understand an application's data consistency needs while balancing performance trade-offs.