Transactions and SQL Server 2014’s In-Memory OLTP

Transactions in SQL Server’s In-Memory OLTP are rather straightforward. While there are undoubtedly optimizations beyond what is discussed here, the basic design pattern is fairly easy to follow and could probably be reused in other projects.

Transactions in SQL Server’s In-Memory OLTP rely on a timestamp-like construct known as a Transaction ID. A transaction uses two timestamps: one for the beginning of the operation and one assigned when the transaction is committed. While multiple transactions can share the same starting value, the ending value is unique to each transaction.

Likewise, each version of a row in memory has a starting and an ending transaction id. The basic rule is that a transaction can only read a row version when RowVersion.StartingId <= Transaction.StartingId < RowVersion.EndingId.
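To make the rule concrete, here is a minimal sketch of that visibility check in Python. All of the names (RowVersion, is_visible, the INFINITY placeholder) are illustrative, not actual SQL Server internals:

```python
from dataclasses import dataclass

INFINITY = float("inf")  # stand-in for an ending id that has not been set yet

@dataclass
class RowVersion:
    starting_id: int                 # id of the transaction that created this version
    ending_id: float = INFINITY      # id of the transaction that expired it, if any
    payload: object = None           # the row data itself

def is_visible(row: RowVersion, tx_starting_id: int) -> bool:
    """A transaction sees a row version only if the version was created at or
    before the transaction's start and had not yet been expired at that point."""
    return row.starting_id <= tx_starting_id < row.ending_id

v = RowVersion(starting_id=5, ending_id=9)   # created by tx 5, expired by tx 9
print(is_visible(v, 7))   # True: created before 7, still live at 7
print(is_visible(v, 9))   # False: already expired as of 9
print(is_visible(v, 3))   # False: did not exist yet
```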

For a DELETE operation, the row version’s ending id is initially set to the transaction’s starting id. Then a flag is set to indicate that a transaction is in progress.

UPDATE operations begin like DELETE operations, setting an ending transaction id on the previous row version. Then a new row version is created with a starting transaction id equal to the transaction’s starting id. The ending id is initially set to infinity, and again an active transaction flag is set. The old row version also gets a pointer to the new row version.

An INSERT operation is simply an UPDATE without a previous row version to expire.
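Putting the three write operations together, a rough conceptual model might look like the following. Again, the types and function names are assumptions made for illustration:

```python
from dataclasses import dataclass
from typing import Optional

INFINITY = float("inf")

@dataclass
class RowVersion:
    starting_id: int
    ending_id: float = INFINITY
    active: bool = False                          # "transaction in progress" flag
    next_version: Optional["RowVersion"] = None   # link to the replacing version

def delete_row(old: RowVersion, tx_starting_id: int) -> None:
    # DELETE: tentatively expire the current version.
    old.ending_id = tx_starting_id
    old.active = True

def update_row(old: RowVersion, tx_starting_id: int) -> RowVersion:
    # UPDATE: expire the old version, then chain a new version after it.
    delete_row(old, tx_starting_id)
    new = RowVersion(starting_id=tx_starting_id, active=True)
    old.next_version = new
    return new

def insert_row(tx_starting_id: int) -> RowVersion:
    # INSERT: an UPDATE with no prior version to expire.
    return RowVersion(starting_id=tx_starting_id, active=True)

row = insert_row(tx_starting_id=4)
row2 = update_row(row, tx_starting_id=7)
print(row.ending_id, row2.starting_id, row.next_version is row2)   # 7 7 True
```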

Commit and Validation

The commit phase starts by assigning a unique transaction id to the current transaction. Then a validation process begins in which the affected records are checked for isolation errors. The types of errors that can occur depend on the transaction isolation level requested. Memory-optimized tables support only three levels: Snapshot, Repeatable Read, and Serializable.
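Conceptually, the commit step draws a unique ending id from a global generator and then runs the isolation-specific checks. A minimal sketch, assuming a simple counter stands in for whatever mechanism SQL Server actually uses:

```python
import itertools

# Stand-in for the global id generator; each committing transaction receives
# a unique, monotonically increasing ending id. How SQL Server actually
# hands these out is not documented in the article.
_next_id = itertools.count(start=100)

def commit(transaction: dict, validators) -> int:
    """Assign the unique ending id, then run the isolation-specific checks."""
    transaction["ending_id"] = next(_next_id)
    for check in validators:              # e.g. repeatable read, serializable
        if not check(transaction):
            raise RuntimeError("transaction aborted: validation failure")
    return transaction["ending_id"]

tx = {"starting_id": 42}
print(commit(tx, validators=[lambda t: True]))   # 100
```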

Snapshot

Just as with normal tables, an insert into a memory-optimized table can fail if another transaction attempts to insert a row with the same key at the same time. But the way it fails is a bit different. Normally one transaction has to wait for the other to complete, after which the losing transaction simply sees the duplicate row and never makes the insertion attempt.

Here both transactions instead insert their rows. Each then reads back the data to see if it won the race. If it didn’t, error 41325 is raised with the message “The current transaction failed to commit due to a serializable validation failure.”
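One way to picture the race, as a toy model: both transactions append a version for the same key, and the loser only discovers the conflict when it reads back at validation time. The dictionary “index” and the first-writer-wins rule below are assumptions for illustration:

```python
# Illustrative unique "index": key -> ids of transactions that inserted a
# version for that key. Nothing here corresponds to real SQL Server internals.
pk_index: dict = {}

def tentative_insert(key, tx_id) -> None:
    # Both racing transactions are allowed to add their version.
    pk_index.setdefault(key, []).append(tx_id)

def validate_insert(key, tx_id) -> bool:
    # Read back at commit time: did anyone else also insert this key?
    inserters = pk_index[key]
    return inserters[0] == tx_id   # assume the first writer wins the race

tentative_insert("order-1", tx_id=7)
tentative_insert("order-1", tx_id=8)   # concurrent insert of the same key
print(validate_insert("order-1", 7))   # True: this transaction wins
print(validate_insert("order-1", 8))   # False: commit fails with error 41325
```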

Repeatable Read Transactions

MSDN has this warning about the repeatable read isolation level, “One important thing to note is that, because the repeatable read isolation level is achieved using blocking of the other transaction, the use of this isolation level greatly increases the number of locks held for the duration of the transaction.”

Since memory-optimized tables don’t have locks, repeatable read works very differently for them. Rather than blocking other transactions, it rereads the rows at the end of the transaction. If any of them have changed, the transaction is aborted. This is reflected in error code 41305 with the message “The current transaction failed to commit due to a repeatable read validation failure on table [name].”
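A minimal model of that validation step, remembering what was read and rereading it at commit time; the names are illustrative:

```python
def validate_repeatable_read(read_set: dict, current_value_of) -> bool:
    """read_set maps a row key to the value seen during the transaction;
    current_value_of fetches the value again at commit time."""
    for key, value_seen in read_set.items():
        if current_value_of(key) != value_seen:
            return False        # a read row changed: abort with error 41305
    return True

table = {"row1": "a", "row2": "b"}
read_set = dict(table)            # the transaction read both rows
table["row2"] = "b-modified"      # a concurrent transaction changed row2
print(validate_repeatable_read(read_set, table.get))   # False: abort
```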

Serializable Transactions

Like Repeatable Read, Serializable transactions traditionally relied on locks to keep other transactions from interfering with the data being examined. Since locks aren’t an option for memory-optimized tables, the engine instead checks whether the transaction failed to read any valid rows or encountered phantom rows. If either occurred, the transaction is aborted.
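Serializable validation extends the same idea from individual rows to whole scans: rerun each scan’s predicate at commit time and abort if the result set grew (phantom rows) or shrank (vanished rows). A sketch under the same illustrative assumptions:

```python
def validate_serializable(scans: dict, rerun_scan) -> bool:
    """scans maps a scan predicate to the set of row keys it returned during
    the transaction; rerun_scan re-executes that predicate at commit time."""
    for predicate, keys_seen in scans.items():
        keys_now = rerun_scan(predicate)
        if keys_now - keys_seen:      # new matching rows appeared: phantoms
            return False
        if keys_seen - keys_now:      # previously read rows are gone
            return False
    return True

rows = {"r1", "r2"}
scans = {"status = 'open'": set(rows)}
rows.add("r3")    # a concurrent insert that matches the predicate: a phantom
print(validate_serializable(scans, lambda p: set(rows)))   # False: abort
```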

Post Processing

If validation is successful, the ending transaction id of each affected row version is set to the transaction’s ending id. Likewise, the starting id for new row versions (e.g. from inserts and updates) is set to the transaction’s ending id. The active flags are cleared and the indexes are updated to point to the new records.
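In terms of the earlier sketches, post-processing amounts to stamping the commit id onto every affected version and clearing the active flags. An illustrative model:

```python
from dataclasses import dataclass

INFINITY = float("inf")

@dataclass
class RowVersion:
    starting_id: float
    ending_id: float = INFINITY
    active: bool = False

def post_process(expired, created, tx_ending_id: int) -> None:
    for row in expired:               # old versions from UPDATEs and DELETEs
        row.ending_id = tx_ending_id
        row.active = False
    for row in created:               # new versions from INSERTs and UPDATEs
        row.starting_id = tx_ending_id
        row.active = False

old = RowVersion(starting_id=3, ending_id=42, active=True)   # 42 = tx start id
new = RowVersion(starting_id=42, active=True)
post_process([old], [new], tx_ending_id=57)
print(old.ending_id, new.starting_id)   # 57 57
```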

Garbage Collection

It should be noted that the indexes are not necessarily updated to remove pointers to the old row versions. Nor are the old versions deleted immediately.

Instead, memory-optimized tables require the use of a reference-counted garbage collector. Details are not available yet, but based on the rest of the design its behavior is predictable. The GC will need to start at the indexes and check which of them point to out-of-date rows. When one is found, it can decrement the reference counter and update the index to point to the most recent version of the row. If the counter reaches zero, the row version is deleted.
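Under those assumptions, the collector might look something like the sketch below. This is speculation in the same spirit as the paragraph above; none of the names correspond to documented internals, and note that this naive version walks the whole index, which is exactly the cost concern raised next:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RowVersion:
    ending_id: float
    ref_count: int                                # indexes still pointing here
    next_version: Optional["RowVersion"] = None

def collect(index: dict, oldest_active_tx: int) -> None:
    """Walk an index, retargeting stale pointers and freeing dead versions."""
    for key, row in list(index.items()):
        # A version is garbage once no running transaction could still see it.
        while row.next_version and row.ending_id < oldest_active_tx:
            row.ref_count -= 1
            if row.ref_count == 0:
                pass                # the version's memory would be freed here
            row = row.next_version
        index[key] = row            # point at the newest surviving version

old = RowVersion(ending_id=10, ref_count=1)
new = RowVersion(ending_id=float("inf"), ref_count=1)
old.next_version = new
idx = {"r1": old}
collect(idx, oldest_active_tx=20)
print(idx["r1"] is new)   # True: the index skipped past the stale version
```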

The tricky part for the garbage collector is knowing which row versions to look at in the first place; one would expect that simply iterating over all the rows of each index would be cost prohibitive.

Design Notes

When using In-Memory OLTP, developers need to be much more aware of their access patterns. If code isn’t written to avoid overlapping transactions, the resulting isolation violations will make aborted transactions far more common than they would be with traditional tables.
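In practice that means wrapping access to memory-optimized tables in retry logic. A hedged sketch of the application-side pattern follows; the retryable error numbers are the validation failures discussed above plus error 41302, the write-conflict error the article doesn’t cover, and everything else (ValidationFailure, run_with_retry) is invented for illustration:

```python
import random
import time

# Validation failures discussed above, plus the write-conflict error 41302.
RETRYABLE_ERRORS = {41302, 41305, 41325}

class ValidationFailure(Exception):
    """Illustrative wrapper for an aborted-transaction error number."""
    def __init__(self, number: int):
        super().__init__(f"SQL Server error {number}")
        self.number = number

def run_with_retry(work, attempts: int = 5):
    """Re-run an aborted transaction a bounded number of times with backoff."""
    for attempt in range(attempts):
        try:
            return work()
        except ValidationFailure as e:
            if e.number not in RETRYABLE_ERRORS or attempt == attempts - 1:
                raise
            time.sleep(random.uniform(0, 0.01 * 2 ** attempt))

# Usage: run_with_retry(lambda: execute_transaction(connection)),
# where execute_transaction is whatever function issues the SQL batch.
```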

For more information, read SQL Server In-Memory OLTP Internals Overview for CTP1 by Kalen Delaney.
