Introducing Biggy: An ORM-like Library for Document Databases

by Jonathan Allen on Mar 16, 2014

When working with relational data, there are several options for lightweight databases, such as SQLite and SQL Server Compact. But when your needs are better met by a document-style database, there is a surprising lack of options. Hence the creation of Biggy.

Biggy was started by Rob Conery to be a .NET equivalent to Node’s NeDB database. Since then it has evolved into an ORM-like library, but with document databases in mind. On the surface, developers work with what appear to be normal lists. But these lists are backed by a document-based storage layer such as:

  • On disk in a JSON file (one per record type T)
  • In a Postgres database using the built-in JSON data type
  • In SQL Server using regular text storage

Other storage options such as MongoDB and AzureTableStorage are currently being developed.

Here is an example of creating a new table named “products” in Postgres and storing a record in it.

var products = new PGList<Product>("tekpub", "products");

var newProduct = new Product();
//fill it in...

products.Add(newProduct);

Since a copy of all the records is stored in memory, normal LINQ queries can be run entirely in memory. But if you want something a bit more powerful, you can turn on full-text search for specific columns.
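Because the backing collection behaves like an ordinary list, querying it is just LINQ-to-Objects. The sketch below uses a plain List&lt;Product&gt; and made-up sample data to stand in for a Biggy-backed collection; the Product shape and values are assumptions for illustration only.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Product
{
    public string Sku { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class Program
{
    public static void Main()
    {
        // Hypothetical sample data; in Biggy this list would be loaded
        // from the document store when the application starts.
        var products = new List<Product>
        {
            new Product { Sku = "tek-001", Name = "Mastering Postgres", Price = 29m },
            new Product { Sku = "tek-002", Name = "Intro to LINQ",      Price = 15m },
            new Product { Sku = "tek-003", Name = "Advanced LINQ",      Price = 35m }
        };

        // Ordinary LINQ-to-Objects: no SQL is generated, the query
        // runs entirely against the in-memory copy of the records.
        var cheapLinqSkus = products
            .Where(p => p.Name.Contains("LINQ") && p.Price < 20m)
            .Select(p => p.Sku)
            .ToList();

        Console.WriteLine(string.Join(",", cheapLinqSkus)); // tek-002
    }
}
```

Since the data is already in memory, there is no query translation layer to worry about; anything LINQ-to-Objects supports works.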

To do this, apply the FullText attribute to the property you want indexed. This will “split out the text in this column” when the table is created so that it can be indexed separately. From there, full-text searches can be run against it.
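A model class marked up for full-text indexing might look like the sketch below. This is an illustrative sketch only: the FullText attribute comes from Biggy as described above, but the property names are assumptions, and the exact search call is not documented here, so it is left as a comment rather than invented.

```csharp
using Biggy; // assumed namespace for the FullText attribute

public class Product
{
    public string Sku { get; set; }
    public string Name { get; set; }

    // Marking this property tells Biggy to split out its text
    // when the table is created so it can be indexed separately.
    [FullText]
    public string Description { get; set; }
}

// From there, full-text searches can be run against the indexed
// column; consult the Biggy source on GitHub for the exact query API.
```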

Use Cases

Obviously you shouldn’t be using something like this to hold the entire database in memory. Its primary use is for what Rob Conery calls “input data”. This would be rarely changing data that needs to be instantly available such as product catalogs. In this sense Biggy works a lot like an updatable cache.

One of the big differences is that, unlike a cache, it is always hot. The normal design pattern is to load the entire table from disk or database when the application starts up. With some basic benchmarks, Conery was able to load 100,000 records from Postgres in under a second.

It does, however, have some of the same limitations as caches. For example, if you are running multiple instances of an application there is no way to keep the in-memory instances synchronized.

The source code for Biggy is available on GitHub and has been released under an open source license. The API is still in flux, so it isn’t available on NuGet yet.


Close... very close by Rob Conery

> Obviously you shouldn’t be using something like this to hold the entire database in memory. Its primary use is for what Rob Conery calls “input data”. This would be rarely changing data that needs to be instantly available such as product catalogs. In this sense Biggy works a lot like an updatable cache.

A little bit off here. If your DB is small then it wouldn't matter if it's all in-memory. Its primary use is not "input data" but "transactional" data - stuff that changes often, like product catalogs etc.

Rarely-changing data is the high-write stuff, like an order record. That you wouldn't want to keep in memory as you don't need to query it much.

Also, Biggy supports relational and document storage equally.

