
Optimizing ORM Performance

Posted by John-Daniel Trask on Mar 07, 2012

Some developers dislike using object-relational mappers (ORMs) as they’ve had bad experiences with performance.

Like any abstraction, there is some overhead involved in using an ORM, but the reality is that a well-tuned ORM can give hand-written data access code a run for its money. What’s more, a good ORM makes it easy to implement performance optimisations and tweaks, whereas hand-crafted data access code requires a lot more effort.

In this article we will be looking at common problems and solutions, using the Mindscape LightSpeed ORM as the basis for the examples. We’ll consider a simple sales and dispatch Web application and look at how we can improve the performance of some example pages.

The N+1 Problem

One thing that would be useful to see in our Web application is a list of overdue orders. That’s easy enough, but we’d also like to see which customer each order is for. And superficially, this seems easy enough:

var overdues = unitOfWork.Orders.Where(o => o.DueDate < today);
foreach (var o in overdues) // 1
{
  var customer = o.Customer; // 2
  DisplayOverdueOrderInfo(o.Reference, customer.Name); 
}

This code conceals a common problem known as the N+1 problem. Getting the overdue orders (point 1 in the code) requires a database query. But then the code gets the customer for each of those orders (point 2 in the code), and that results in another database query! So if there are 100 overdue orders, the code ends up executing 101 database queries – the initial one for the overdue orders collection, and 100 subsequent ones for each of the customers. In general, for N overdue orders we would execute N+1 database queries – hence the name.

Obviously, this is extremely slow and inefficient. We can address this using a technique called eager loading. If we could load all of the required customer associations as part of the initial orders query, in the same database round‑trip, then accessing Customer would just be a property access – no database query required, no N+1 problem.
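To make the round-trip arithmetic concrete, here is a small illustrative sketch in Python (the FakeDb class and its methods are inventions for counting purposes, not LightSpeed APIs) that tallies the queries issued by the naive loop versus an eager-loading strategy:

```python
# Illustrative sketch of the N+1 problem: a fake database that counts queries.
class FakeDb:
    def __init__(self, orders, customers):
        self.orders = orders            # list of (order_id, customer_id)
        self.customers = customers      # dict of customer_id -> name
        self.query_count = 0

    def get_overdue_orders(self):
        self.query_count += 1           # one query for the order list
        return list(self.orders)

    def get_customer(self, customer_id):
        self.query_count += 1           # one query per customer (lazy load)
        return self.customers[customer_id]

    def get_customers_bulk(self, ids):
        self.query_count += 1           # one extra query fetches them all (eager load)
        return {i: self.customers[i] for i in ids}

orders = [(n, n % 5) for n in range(100)]           # 100 orders across 5 customers
customers = {i: f"Customer {i}" for i in range(5)}

# Naive loop: N+1 queries.
db = FakeDb(orders, customers)
for order_id, cust_id in db.get_overdue_orders():
    _ = db.get_customer(cust_id)
naive = db.query_count                              # 1 + 100 = 101

# Eager loading: fetch the orders, then all needed customers in one go.
db = FakeDb(orders, customers)
overdue = db.get_overdue_orders()
lookup = db.get_customers_bulk({c for _, c in overdue})
eager = db.query_count                              # 2

print(naive, eager)  # 101 2
```

The naive loop issues 101 queries for 100 orders, while the eager approach needs only two, and an ORM like LightSpeed can send those two SELECTs in a single batch.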

In LightSpeed, we can eager load an association by changing its Eager Load setting to True (or, for hand‑coded entities, by applying the EagerLoadAttribute). When LightSpeed queries for an entity with an eager-loaded association, it generates additional SQL to fetch the associated entities as well as the ‘primary’ ones.


In our example, when we apply Eager Load to the Order.Customer association, then when we issue a query for Order entities, LightSpeed builds SQL to load both the Order entities and the Customer entities, and sends both SQL SELECT statements in the same batch. The N+1 database round‑trips have been reduced to one, just by toggling a switch.

Reflection on ORMs and Hand‑Coded Data Access

As an aside, this illustrates why ORMs can provide a benefit for performance. Suppose that the overdue orders page had been written with hand‑coded SQL and manually copying values from the data layer into objects. When the N+1 performance problem manifested, you would need not only to update your SQL, but also to update your mapping code to handle the multiple result sets and manage the relationship. For our simple example, this isn’t too much effort, but what if the page drew in data from a longer chain of tables? It’s a lot more work than just flipping an option or applying an attribute! Using an ORM means we can adopt optimisations with minimal effort.

Lazy Loading

The overdue orders page has another potential problem. Suppose that the Customer entity has a Photo property which can contain a large image file. (This is the sort of thing that happens when you let the sales department make feature requests.) The overdue orders page doesn’t need the Customer.Photo property, but it gets loaded anyway along with the rest of the Customer entity. If the photos are large, that’s going to consume a lot of memory and take a long time to pull across from the database – all wasted.

The solution to this is to have the Photo property lazy loaded – that is, to load it only when it is accessed, rather than as part of normal Customer entity load. Because the overdue orders page doesn’t access the Photo property, it won’t pay the cost of the property being present; whereas other pages that do need the photo, such as a customer profile page, would still be able to access the property transparently.
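The mechanics of a lazy-loaded property can be sketched in a few lines. This is an illustrative Python model, not LightSpeed’s implementation; the fetch_photo_from_db helper and loaded_log list stand in for the real database query:

```python
# Sketch of a lazy-loaded attribute: the expensive column is fetched on
# first access only. loaded_log records which loads actually hit the "database".
loaded_log = []

def fetch_photo_from_db(customer_id):
    loaded_log.append(("photo", customer_id))   # stands in for a real query
    return b"\x89PNG..."                        # pretend image bytes

class Customer:
    def __init__(self, customer_id, name):
        self.id = customer_id
        self.name = name                        # loaded up front
        self._photo = None                      # deferred until needed

    @property
    def photo(self):
        if self._photo is None:                 # load on first access only
            self._photo = fetch_photo_from_db(self.id)
        return self._photo

c = Customer(1, "Acme Ltd")
print(c.name)           # no photo query issued yet
print(len(loaded_log))  # 0
_ = c.photo             # first access triggers the load
print(len(loaded_log))  # 1
_ = c.photo             # cached; still only one load
print(len(loaded_log))  # 1
```

Pages that never touch the property never pay for it; pages that do touch it pay exactly once.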

There isn’t a simple flag to mark a property as lazy loaded. Instead, you can mark the property as part of a named aggregate – we’ll talk about what this means in the next section – by entering a name into the property’s Aggregates setting. A property which is part of a named aggregate is lazy loaded by default.


So if we set the Photo property’s Aggregates to “WithPhoto”, it won’t be loaded by the overdue orders page, and we’ll avoid wasted memory, reduce data volume and improve speed.

Named Aggregates (Includes)

Our solutions to the N+1 and bulky property problems mean the overdue orders page is now much snappier. But they could have knock-on effects on other pages of the site. For example, consider the order detail page, which loads an Order and displays information about it. Because Order.Customer is now eager loaded, this page is now being encumbered with a Customer entity it doesn’t need. It seems that whether we eager load Customer or not, some page is going to end up inefficient!

Ideally, the Order.Customer association should be eager loaded on the overdue orders page, but lazy loaded on the order detail page. And we can do this by making it part of a named aggregate.

Named aggregates recognize that different pages need different parts of the object graph. A named aggregate is a set of associations and properties which are conditionally eager loaded – in any given query, they are eager loaded if the query asks for them and lazy loaded if it does not. (‘Named aggregate’ is the LightSpeed terminology. Some other ORMs offer a similar feature called ‘includes.’)
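Conceptually, a named aggregate is just a mapping from a name to a set of associations and properties, consulted at query time. A minimal sketch (the AGGREGATES table and load_orders function are hypothetical illustrations, not LightSpeed internals):

```python
# Sketch of named aggregates: associations/properties are grouped under names,
# and a query eager-loads exactly the aggregates the caller asks for.
AGGREGATES = {
    "WithCustomer": ["customer"],
    "WithPhoto": ["customer.photo"],
}

def load_orders(requested_aggregates=()):
    eager = set()
    for name in requested_aggregates:
        eager.update(AGGREGATES[name])
    # In a real ORM this set would drive SQL generation; here we just report it.
    return sorted(eager)

print(load_orders())                                  # [] -> everything lazy
print(load_orders(["WithCustomer"]))                  # ['customer']
print(load_orders(["WithCustomer", "WithPhoto"]))     # both paths eager
```

Each page names only the aggregates it needs, so the same model serves both the lean order detail page and the customer-hungry overdue orders page.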

To make Order.Customer part of a named aggregate, we set its Eager Load setting back to False. This makes the order detail page happy. Then, to enable the overdue orders page to work efficiently, we add an aggregate – say, “WithCustomer” – to the Order.Customer’s Aggregates box.


Now we have to modify the overdue orders page to specify the WithCustomer aggregate on its Orders query. This is a simple matter of adding a WithAggregate call to the LINQ query:

var overdues = unitOfWork.Orders
                         .Where(o => o.DueDate < today)
                         .WithAggregate("WithCustomer");

A similar consideration applies to individual properties. Recall that to make the Customer.Photo property lazy, we made it part of the “WithPhoto” aggregate. That is inefficient on the customer profile page, which always wants the photo; but by adding WithAggregate("WithPhoto") to the customer query on that page, we can make it efficient again.

Named aggregates give you a great deal of control, and abstract away the details of the potentially complex eager load graph behind a simple string. Depending on your performance needs, you can carry out a great deal of tuning just by tweaking aggregates on your most expensive or heavily-used pages.

Batching

Let’s turn our attention to the order entry page. An order consists not only of order-level properties such as a reference number, but also of a collection of order lines. When the user submits the order entry page, the application needs to create an Order entity and a number of OrderLine entities, and insert all of these into the database.

This is a bit like the N+1 problem in reverse: if we have 100 lines in the order, we will need to perform 101 inserts. But we don’t want to have to perform 101 round trips to the database!

LightSpeed addresses this problem with batching. Batching means that instead of sending each INSERT (or UPDATE or DELETE) as a separate command, LightSpeed collects up to ten of them together, and sends that batch as a command. The result of this is that, for large updates, LightSpeed requires only one-tenth the number of round trips compared to the naïve method.
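The saving is easy to quantify. Here is a short sketch of the grouping and the resulting round-trip count, assuming the batch size of ten described above:

```python
import math

# Sketch of update batching: N pending statements are grouped into batches
# of ten, so database round trips drop from N to ceil(N / 10).
def batches(statements, batch_size=10):
    return [statements[i:i + batch_size]
            for i in range(0, len(statements), batch_size)]

def round_trips(statement_count, batch_size=10):
    return math.ceil(statement_count / batch_size)

# One Order insert plus 100 OrderLine inserts, as in the example above.
inserts = [f"INSERT ... statement {n}" for n in range(101)]
grouped = batches(inserts)

print(round_trips(len(inserts)))  # 11 round trips instead of 101
print(len(grouped))               # 11
print(len(grouped[-1]))           # 1 (the last, partial batch)
```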

The good news is that we don’t need to do anything to take advantage of update batching. LightSpeed automatically batches create, update and delete (CUD) operations, so the order entry page will automatically get the advantage of faster persistence.

Caching – first level

Let’s now look at how our application implements security. Assume we have a User entity. This User entity governs permissions; it also has attributes like the user name, and preferences which affect how various widgets in the application display their data. The upshot of this is that some pages may end up loading the current User from a variety of places – the controller to check its permissions, a banner to display its name, a data widget to determine its display preferences. But this would be inefficient. If the User entity is already in memory, it’s way faster to reuse the existing entity than to re‑query the database.

While design improvements such as an MVC architecture could mitigate this problem in some cases, a more general solution is to implement a first level cache. LightSpeed is built around the unit of work pattern, and its UnitOfWork class provides a first level cache. Applications which follow the recommended ‘unit of work per request’ pattern therefore automatically get a first level cache scoped to the page request. That means that for the duration of your page request, if you query by ID – including implicit queries like traversing a lazy loaded association – and the unit of work already has an entity with that ID, LightSpeed will skip the database query and return the existing entity. You can’t get faster than that!
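The data structure behind a first level cache is an identity map: a dictionary from entity ID to the already-loaded entity, consulted before any query is issued. A minimal sketch (the UnitOfWork and CountingDb classes here are illustrations, not LightSpeed’s types):

```python
# Sketch of a first-level cache (identity map) inside a unit of work:
# repeated lookups by ID within one unit of work hit the map, not the database.
class CountingDb:
    def __init__(self):
        self.queries = 0

    def load(self, entity_id):
        self.queries += 1                       # stands in for a real query
        return {"id": entity_id, "name": f"User {entity_id}"}

class UnitOfWork:
    def __init__(self, db):
        self.db = db
        self.identity_map = {}                  # entity_id -> entity

    def find_by_id(self, entity_id):
        if entity_id in self.identity_map:      # cache hit: no query at all
            return self.identity_map[entity_id]
        entity = self.db.load(entity_id)        # cache miss: one query
        self.identity_map[entity_id] = entity
        return entity

db = CountingDb()
uow = UnitOfWork(db)
a = uow.find_by_id(7)
b = uow.find_by_id(7)
print(db.queries)   # 1 (second lookup served from the identity map)
print(a is b)       # True (the very same entity instance is reused)
```

Note that the second lookup returns the same instance, not a copy, which is exactly what lets the controller, banner and widgets all share one User entity.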

Most full ORMs incorporate a similar feature – for example, the NHibernate session object includes a first level cache. However, it’s an area where many Micro‑ORMs fall down, since they often focus solely on efficient entity materialization. A full ORM will try to be as efficient as possible when querying, but will also try to be smart enough not to need to query in the first place.

First level caching is automatic. Our User entity will get reused for the duration of the unit of work (and therefore the page request), without any action on our part.

Caching – second level

Let’s suppose that our order management system needs to handle multiple currencies – orders can be placed in dollars, euros or yen. In order to display currencies in a friendly way, we’ll need to store several bits of information about each currency – for example its name (US dollar), code (USD) and symbol ($). No problem: define a Currency entity and off we go!

Between eager loading and the first level cache, this is already pretty efficient, but it’s still a bit wasteful. Since each page request gets its own unit of work, and the first level cache is scoped to the unit of work, the application ends up querying the Currency table on every request that involves a currency. But currency definitions are reference data – they almost never change. We don’t really need to query the database for the latest details on every page request. It would be more efficient to query the database once, and cache the reference data somewhere it can be shared between page requests.

This is where second level caching comes in. LightSpeed’s second level cache lives longer than a single UnitOfWork, so entities stored there can be kept around as long as you like. (How long they are actually kept can be configured using an expiry setting.) LightSpeed includes implementations that use the ASP.NET cache or the open-source memcached distributed cache, which can span several servers. Some other ORMs offer second level caching, but many don’t.
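A second level cache is conceptually a shared map with an expiry time. This sketch (an illustration, not LightSpeed’s implementation; SecondLevelCache and load_currency are invented names) shows reference data being served across two separate “page requests” from a single database query, with a TTL guard on each entry:

```python
import time

# Sketch of a second-level cache shared across units of work, with an
# expiry (TTL) after which entries must be re-fetched from the database.
class SecondLevelCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = {}                        # key -> (value, stored_at)

    def get(self, key):
        hit = self.entries.get(key)
        if hit is None:
            return None
        value, stored_at = hit
        if time.monotonic() - stored_at > self.ttl:
            del self.entries[key]                # expired: force a reload
            return None
        return value

    def put(self, key, value):
        self.entries[key] = (value, time.monotonic())

cache = SecondLevelCache(ttl_seconds=300)
db_queries = {"count": 0}

def load_currency(code):
    cached = cache.get(("Currency", code))
    if cached is not None:
        return cached                            # served from the shared cache
    db_queries["count"] += 1                     # stands in for the real query
    currency = {"code": code, "name": "US dollar", "symbol": "$"}
    cache.put(("Currency", code), currency)
    return currency

# Two separate "page requests" share the cache, so only one database query runs.
first = load_currency("USD")
second = load_currency("USD")
print(db_queries["count"])  # 1
```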

By having LightSpeed cache Currency entities in the second level cache, we can have them always available without the expense of a database query. To do this, you must first choose a cache implementation and specify it in configuration. Then caching Currency entities is as simple as selecting the Currency entity and setting its Cached option to True.


Compiled Queries

We’ve looked at a lot of techniques for improving performance in the presence of particular usage patterns, but one thing we haven’t tackled is the overhead of translating between C# LINQ expressions and the SQL that LightSpeed eventually sends to the database. This overhead affects every query in the code. It’s generally trivial compared to the cost of executing the database query, but if you really need to squeeze the last drop of performance out of your servers, it might still be worth trying to get rid of it. At this point you might be tempted to drop down to hand‑coded SQL, but with modern ORMs including LightSpeed you can still enjoy the convenience of LINQ without the translation overhead.

The way LightSpeed gets rid of translation overhead is compiled queries. A compiled query is built out of a normal LINQ query: compiling translates the LINQ query into a ready-to-execute form, then saves that ready-to-execute form so that you can re-run the query multiple times without LightSpeed needing to re-translate it each time. You get the performance of raw SQL without having to write or maintain a SQL script yourself.

In fact, counter-intuitively, a compiled query can be faster than hand-coded SQL. This is because when LightSpeed executes hand‑coded SQL, it can’t make any assumptions about the shape of the result set. By contrast, when LightSpeed executes a compiled query, it can make assumptions about the shape of the result set, because it composed the SQL. This enables it to make some optimisations when materializing entities from a compiled query.

Compiling queries is a bit more intrusive than some of the techniques we’ve looked at. (Some ORM vendors are investigating techniques for making this less intrusive.) The reason is that you need to store the compiled query somewhere, and executing a compiled query requires a different API, because you can only vary parameters that you left variable during compilation.

Let’s take a look at a query for customer orders, the sort of thing we might need for a customer history page:

int id = /* get the customer ID from somewhere */;
var customerOrders = unitOfWork.Orders.Where(o => o.CustomerId == id);

If we think this query is being executed a lot, and we need to squeeze the maximum performance from it, we can compile it using the Compile() extension method. As part of this, we also need to replace the local variable id with a parameter that the compiler will leave to be filled in when the query is executed. Here’s how the compilation stage looks:

var customerOrdersQuery = unitOfWork.Orders.Where(o => o.CustomerId == CompiledQuery.Parameter<int>("id")).Compile();

As you can see, we’ve just replaced the id local variable with the CompiledQuery.Parameter<int>("id") expression, then called the Compile() extension method. The result is a CompiledQuery object, which we would typically store as a member of a long-lived object or static class. We can now execute the CompiledQuery as follows:

int id = /* get the customer ID from somewhere */;
var results = customerOrdersQuery.Execute(unitOfWork, CompiledQuery.Parameters(new { id }));

(If you’re really determined, you can tune parameter value resolution to get the absolute utmost performance out of the query, as mentioned in this article.)
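The compile-once, execute-many pattern can be sketched independently of any ORM. In this illustrative Python model (the CompiledQuery class and the SQL string are stand-ins, not LightSpeed types), the expensive translation step runs exactly once no matter how many times the query executes:

```python
# Sketch of a compiled query: the (pretend) LINQ-to-SQL translation runs once,
# and execution reuses the cached SQL with fresh parameter values.
class CompiledQuery:
    def __init__(self, sql_template):
        self.sql = sql_template                 # translation already done

    def execute(self, **params):
        return (self.sql, params)               # hand both to the database driver

translation_count = {"count": 0}

def compile_query():
    translation_count["count"] += 1             # the expensive step happens once
    return CompiledQuery("SELECT * FROM Orders WHERE CustomerId = @id")

customer_orders_query = compile_query()         # stored in a long-lived object

sql1, p1 = customer_orders_query.execute(id=42)
sql2, p2 = customer_orders_query.execute(id=99)
print(translation_count["count"])  # 1: translated once, executed twice
```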

Conclusion

Many developers see object-relational mappers as trading performance for convenience. However, modern ORMs encapsulate techniques like eager loading and update batching that are complicated to implement in hand-coded data access layers. These techniques mean that ORM code can perform just as well as a hand-coded DAL, without the cost of mastering and maintaining complex SQL and handwritten mapping code. It’s easier to fix an N+1 performance problem by flipping a bit or tweaking a mapping file than by changing your SQL to a nested SELECT and your mapping code to handle multiple result sets.

Not all ORMs offer all the features we’ve described in this article, but most modern ORMs offer at least some of them.  The important thing is to understand where your application has database-related bottlenecks.  With the techniques from this article and an ORM that supports them, you can solve many such bottlenecks, enabling you to maximise your application performance at minimal cost and risk.

About the Author

John-Daniel Trask is a co-founder of Mindscape, a developer-led component vendor with thousands of customers globally. John-Daniel is a Microsoft MVP for ASP.NET and has been developing software for 20 years.

 

 

Incorrect image for the second level caching example by Nagarajan Krishnamurthy

Hi,

The example to explain second level caching talks about setting that flag to true for Currency entity, but the image shows Customer and Order entities. Please correct it if possible.

Nagarajan
Bangalore, India

Fair enough, for the sweet spot by Alex Reis

John-Daniel,

While I do agree most people blame ORMs for their own mistakes, the implementations existing today in .NET (EntityFramework, NHibernate, Castle), and Java (JPA, JDO and friends) all suffer from the same problem IMO: poor support for remoting.

If you're doing all of your work inside the server, and shooting plain HTML to the browser, ORM is great, and with your suggested tweaks, you can make it performant. But if you need to send it over any sort of API - Protobuf, XML, JSON, JAX-WS, JAX-RS, RMI, .NET Remoting etc - you'll bump into the core issue with lazily loaded data.

In the best case, there will be some data missing for the client and you'll have to implement specific API operations to grab the loose ends. In the worst case, your serializer can't even handle the proxies the ORM solution puts around your domain classes - Ever tried sending out an entity as JSON that has a one-to-many relationship with ASP.NET MVC3 and EF4?

I have written an article with this rant, although a bit more java centric if you want to see more examples of scenarios where ORM as it is today is inherently bad: alexmreis.com/?p=171

Would love to know your and other reader's opinions on the topic.

Sincerely,

Alex Reis

Re: Fair enough, for the sweet spot by emmanuel buah

Alex, you are right, but you are also wrong. You are assuming that the ORM is supposed to do data and query translation for you as well as data format representation. Funny, my wife asked me yesterday while having a beer why we need an ORM to do our translation. The main aim of an ORM is to make data translation easy (query translation is a required sub-task of data translation). Data in our relational db is in rows and columns, while we need that data in a different structure (objects) in our code. That’s the core domain of an ORM. Now you are right that if we want to push ORM to another level, data format adapters could be provided to format your data to a particular format, but that really is an add-on benefit. In the .NET world, libraries like Json.NET and LINQ projection to anonymous types have made data format translation very easy. E.g.:

var orders = unitOfWork.Orders.Where(o => o.DueDate < today)
                              .WithAggregate("WithCustomer")
                              .Select(o => new
{
    id = o.Id,
    producttype = o.ProductType,
    customer = new { id = o.Customer.Id, fullname = o.Customer.FullName }
});

var ordersInJson = JArray.FromObject(orders);

I think this is why ORM vendors haven't felt the need to do so. Also, formats come and go. There was xml, then json. Who knows what will come tomorrow. Some companies/individuals have their own formats.

Re: Fair enough, for the sweet spot by Binoj Antony

I second that, Emmanuel. ORM solves the problem of “impedance mismatch”, not converting to different output formats.
In fact, one of my developers was using the ORM that I had created a while back (xmldatamapper.codeplex.com). He wanted me to handle conversion of float to decimal, and I had a hard time explaining to him that the ORM will convert as-is; introducing data type conversion into the ORM at the architectural level is incorrect.

missing a major point by phloid domino

As written, the article addresses one performance concern, but misses a major point about ORMs in general: separation of query logic from code.

Remember that the 'M' stands for mapping: mapping an object model to a relational schema. If this is done, then application code can work with an object model and the interfaces of said objects. (of course it helps to have a well designed object model, not just a lot of 'active records' with setter/getters aka properties which miss the whole point of objects).

Yes, an ORM can also provide caching and even transactional services, but those are not the core premise of ORM.

My point, which is often overlooked, is if query expressions are in (application) code, then much of the ORM benefit is lost. This is true whether using LINQ, or some annotations based implementation.

If I have to recompile and rebuild code to effect a change in the mapping, then all that has happened is a reinvention of the wheel known as SQL.

And guess what, SQL ain't all that bad -- I'd rather embed SQL in code than learn yet another quirky vendor proprietary pseudo query language.


The real issue, as always, is design. Yes, business logic requirements can change, which would often imply query logic changes. As such, there's more benefit to externalizing business logic than embedding it in code.

I say this as one who has written rules-based business logic driven by external data, as well as having investigated several common ORMs.


Things like LINQ and the ADO.NET framework, even the Entity Framework, often miss this point: we don’t need another query syntax, we don’t need another set of APIs for talking to a database, we don’t need to substitute hard-coded SQL for hard-coded entanglement with some vendor’s framework.

What we do need is more separation of concern, and externalizing business logic and persistence where it makes sense: when the flexibility of making changes is worth the extra indirection.

If performance matters more, just hard code it.

InfoQ.com and all content copyright © 2006-2014 C4Media Inc.