SQL Server 2014: NoSQL Speeds with Relational Capabilities

For the last four or five years Microsoft has been working on the first rewrite of SQL Server’s query execution engine since version 7.0 in 1998. The goal is to offer NoSQL-like speeds without sacrificing the capabilities of a relational database.

At the heart of this endeavor is project Hekaton, their memory optimized tables. While still accessible via traditional T-SQL operations, internally they are a fundamentally different technology. This technology was conceived to address three trends in server hardware:

Memory is Getting Cheaper

SQL Server used to have ways to pin tables in memory, but the feature turned out to be bad for performance and was deprecated nearly a decade ago. Back then, holding whole tables in memory on machines with limited RAM just didn’t make sense. Now, with the widespread availability of 64-bit processors and continuously declining memory prices, holding large databases entirely in memory has become far more tenable.

Stalling CPU Speeds

While the complexity of CPUs continues to increase, clock speeds have been stalled for nearly a decade. So to see gains in single-threaded performance beyond what caching can offer, more efficient code needs to be run.

This is where T-SQL’s historic design decisions come into play. Currently queries, both ad hoc and stored procedure based, are compiled only to an intermediate language, which is then interpreted rather than JIT-compiled into machine code. This allows for a great deal of flexibility, but at the cost of performance.

In the past this was acceptable: as loads increased, so did the CPU’s ability to handle serial workloads. Now that this is no longer the case, Microsoft has decided to build a new execution engine that relies on fully compiled machine code.

As we’ve mentioned in previous reports, this new engine is only available for stored procedures. What you get in exchange for giving up dynamic queries is highly optimized C code that is generated specifically for the tables you are using.
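To make that concrete, here is a minimal T-SQL sketch of a memory optimized table and a natively compiled stored procedure against it. The table, procedure, and bucket count are hypothetical examples, and the database is assumed to already have a memory optimized filegroup (covered later in this article).

-- Hypothetical table; assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup.
CREATE TABLE dbo.ShoppingCart
(
    CartId      INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId      INT       NOT NULL,
    CreatedDate DATETIME2 NOT NULL,
    TotalPrice  MONEY     NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- A natively compiled stored procedure; SQL Server generates C code for it and compiles
-- that to machine code, specialized for the table above.
CREATE PROCEDURE dbo.usp_InsertCart
    @CartId INT, @UserId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.ShoppingCart (CartId, UserId, CreatedDate)
    VALUES (@CartId, @UserId, SYSUTCDATETIME());
END;
GO

Any client that can call an ordinary stored procedure can call usp_InsertCart unchanged; the procedure is compiled to native code when it is created.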

Multicore Processors

It is getting hard to find even a smartphone that isn’t running multiple cores. On enterprise-grade servers it isn’t unusual to see 48 or more cores. With multicore processing being the way of the future, it is vital that hot tables be optimized for concurrent access.

This means SQL Server has to get away from locks and latches. A latch, considered to be a relatively cheap operation, can require one to two thousand CPU cycles. A lock is even worse, requiring roughly 10,000 cycles.

Instead of these, the data structures used by in memory tables rely on interlocked exchanges, which drops the cost down to only 10 to 20 cycles. Of course there is a trade-off: without locks, queries against in memory tables are more likely to be aborted, necessitating a manual retry of the operation.
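The application, or a thin interpreted T-SQL wrapper like the hypothetical sketch below, is expected to catch those conflict errors and try again. The error numbers checked (41301, 41302, 41305, 41325) are the in memory OLTP conflict and validation failures; the wrapper itself and the retry count are purely illustrative and reuse the usp_InsertCart procedure sketched earlier.

-- Hypothetical interpreted wrapper that retries when an in memory transaction is aborted.
CREATE PROCEDURE dbo.usp_InsertCart_WithRetry
    @CartId INT, @UserId INT
AS
BEGIN
    DECLARE @retries INT = 10;
    WHILE @retries > 0
    BEGIN
        BEGIN TRY
            EXEC dbo.usp_InsertCart @CartId, @UserId;
            RETURN;  -- success, no retry needed
        END TRY
        BEGIN CATCH
            -- Conflict and validation failures are transient, so retry; anything else is re-thrown.
            IF ERROR_NUMBER() IN (41301, 41302, 41305, 41325)
                SET @retries -= 1;
            ELSE
                THROW;
        END CATCH
    END;
    THROW 50000, 'Retry limit exceeded for dbo.usp_InsertCart.', 1;
END;
GO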

Management and Application Design

With most NoSQL offerings, the bulk of the cost isn’t in acquiring the database but rather in deploying it. Microsoft hopes to reduce those costs by offering a gradual migration path to their alternative.

The first cost in moving to a NoSQL database is identifying the hot tables that should be moved. Profiling tools can identify which tables are under the greatest load, but they can’t tell you how difficult it will be to migrate each candidate table.

Once identified, the next cost is actually rewriting the application to abide by NoSQL conventions. This can be rather daunting as it tends to mean a fundamental change in how data is accessed.

With SQL Server’s memory optimized tables, the changes are largely invisible to the application. As long as the table doesn’t contain any prohibited data types, the process can be largely automated.
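Since an existing disk based table cannot simply be switched in place, the usual path, whether scripted by hand or generated by the tooling, is to create a memory optimized copy, move the data, and swap the names so the application keeps using the same table name. A rough sketch with a hypothetical dbo.SessionState table:

-- Hypothetical hot table; assumes its columns avoid the prohibited types (e.g. XML and LOBs).
CREATE TABLE dbo.SessionState_InMem
(
    SessionId   UNIQUEIDENTIFIER NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 2000000),
    UserId      INT              NOT NULL,
    LastUpdated DATETIME2        NOT NULL,
    Payload     VARBINARY(7000)  NOT NULL   -- rows are size-limited, so large blobs stay elsewhere
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Copy the existing data, then rename so the application is untouched.
INSERT INTO dbo.SessionState_InMem (SessionId, UserId, LastUpdated, Payload)
SELECT SessionId, UserId, LastUpdated, Payload
FROM dbo.SessionState;
GO
EXEC sp_rename 'dbo.SessionState', 'SessionState_Old';
EXEC sp_rename 'dbo.SessionState_InMem', 'SessionState';
GO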

The next performance gain comes from using fully compiled stored procedures. For most enterprise systems, which still rely on stored procedures for most or all of their data access, this shouldn’t be too difficult. And again, it is likely that the application will not need to be changed.

Microsoft has noted that ISVs, which tend to support multiple databases, are typically more reliant on ORMs and thus have more problems with this migration. Fortunately most ORMs now support stored procedures, so this migration can be made over time.

File System Changes

Traditional tables and indexes are stored in pages. These are fetched and stored using random access, which can lead to performance problems if the data is fragmented. For this reason many companies are turning to expensive solid state drives to house their hottest data.

Since memory optimized tables don’t use a page layout, Microsoft took the opportunity to redesign the way data is stored on disk so that all I/O is sequential. This makes it possible to get good performance even from traditional hard drives.

This is made possible because the data section of a record is never updated in a memory optimized table. Instead, the row’s header is given a pointer to a new row containing the new timestamp and the updated values.

On disk, rows are organized into pairs of files grouped by timestamp ranges. These two files are known as “data” and “delta”. When an insert or update occurs, new records are written to the data file. For updates and deletes, the ID of the old row version is added to the delta file.
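These file pairs live in a dedicated filegroup that has to be added to the database before any memory optimized tables can be created. A minimal sketch, with hypothetical database and path names:

-- Hypothetical database and container path; the filegroup holds the data/delta file pairs.
ALTER DATABASE SalesDb
    ADD FILEGROUP SalesDb_InMem CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE SalesDb
    ADD FILE (NAME = 'SalesDb_InMem_Container', FILENAME = 'D:\Data\SalesDb_InMem')
    TO FILEGROUP SalesDb_InMem;
GO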

If SQL Server is restarted, these pairs of files are read together. Any row in the data file that isn’t also in the delta file is pushed into memory. It takes roughly 1 second per GB to load a memory optimized table in this fashion.

Do note that the database will be unavailable until all of its memory optimized tables are loaded.

Memory Concerns

A significant restriction on memory optimized tables is that they have to stay in memory. Under no circumstances can a memory optimized table be paged to disk, in part or in whole. So what happens when the size of the memory optimized tables grows beyond the amount of available memory?

Well the short answer is “bad things”. Memory optimized tables will take over memory normally allocated to the buffer pool. Eventually the buffer pool won’t have enough memory to operate correctly and SQL Server will become unstable.

The work-around for this is to set up a resource governor to limit the amount of memory that memory optimized tables are allowed to consume. Only one such governor is allowed and the space it allocates is shared across all databases on that instance of SQL Server.

Only about 80% of the memory set aside for memory optimized tables is actually available for data storage; the rest is needed for bookkeeping. So when you calculate how much memory to allocate, add about 25% to your expected data size.
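A sketch of that workaround, with hypothetical pool, database, and percentage values; the pool caps how much server memory memory optimized tables may consume, and each database holding such tables is bound to it with sp_xtp_bind_db_resource_pool:

-- Hypothetical pool reserving 40% of server memory for memory optimized tables.
-- Remember the bookkeeping overhead above: size the pool about 25% larger than the data itself.
CREATE RESOURCE POOL InMemPool
    WITH (MIN_MEMORY_PERCENT = 40, MAX_MEMORY_PERCENT = 40);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

-- Bind the database to the pool; the binding takes effect when the database next comes online.
EXEC sp_xtp_bind_db_resource_pool 'SalesDb', 'InMemPool';
ALTER DATABASE SalesDb SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE SalesDb SET ONLINE;
GO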

Future Plans

Memory optimized tables represent a first step in SQL Server’s transformation, and there is still a lot of work to be done. Many of the version-one limitations, such as the lack of foreign key and check constraints, will be lifted in future versions. Work also needs to be done on the transport layer, which has not yet been updated to take advantage of the layout for memory optimized tables and compiled stored procedures.


About the Author

Jonathan Allen has been writing news reports for InfoQ since 2006 and is currently the lead editor for the .NET queue. If you are interested in writing news or educational articles for InfoQ please contact him at jonathan@infoq.com.
