
Transactional NoSQL Database

For nearly its entire lifetime, the Java platform has gone to great pains to make database persistence as seamless as possible for developers. Whether you cut your teeth on the earliest JDBC specifications, EJBs, O/R mappers like Hibernate, or the more recent JPA specifications, you have more likely than not encountered a relational database along the way. And perhaps equally likely, you’ve also come to understand the differences between how data is modeled from an object-oriented point of view and how data is stored in a relational database (sometimes referred to by developers as the impedance mismatch).

More recently, however, NoSQL databases have come along, in many cases providing a more natural fit from a modeling perspective. In particular, document-oriented databases (e.g. MarkLogic, MongoDB, CouchDB), with their rich JSON and/or XML persistence models, have effectively eliminated this impedance mismatch. And while this has been a boon to developers and productivity, some developers have come to believe that they must sacrifice other features to which they have become accustomed, such as ACID transaction support. The reason is that many NoSQL databases do not provide such capabilities, citing a trade-off that allows for greater agility and scalability than traditional relational databases. For many, the rationale for such a trade-off is rooted in what is known as the CAP theorem.

CAP Theorem

Back in 2000, Eric Brewer posited the notion now known in technology circles as the CAP theorem. It describes three system attributes in the context of distributed databases:

  • Consistency: The notion that all nodes see the same data at the same time.
  • Availability: A guarantee that every request to the system receives a response about whether it was successful or not.
  • Partition Tolerance: A quality stating that the system continues to operate despite failure of part of the system.

The common understanding of the CAP theorem is that a distributed database system can provide at most two of the three capabilities above. As such, most NoSQL databases cite it as the basis for employing an eventual consistency model (sometimes referred to as BASE: basically available, soft state, eventual consistency) with respect to how database updates are handled.

One common misconception, however, is that because of the CAP theorem it is not possible to create a distributed database with ACID transaction capability. As a result, many people assume that with distributed NoSQL databases and ACID transactions, never the twain shall meet. In fact this is not the case, and Brewer himself clarified some of his statements, specifically around the concept of consistency as it applies to ACID, right here on InfoQ.

As it turns out, ACID properties are important enough that their applicability either has been addressed or is being addressed by the marketplace in newer database technology. In fact, no less an authority on distributed web-scale data storage than Google, author of the Bigtable whitepaper and implementation, has been making the case for and implementing distributed transactional capability by way of its Spanner project.

As a result, transactions have been making their way back into the NoSQL discussion. This is good news if you’re a Java developer looking for the agility and scale of NoSQL but still expecting enterprise-grade ACID transactions. In this article we will explore one NoSQL database in particular, MarkLogic, and how it provides multi-statement transaction capabilities to the Java developer without sacrificing the other benefits we have come to expect from NoSQL, such as agility and scale-out across commodity hardware. Before we do, however, let’s first take a step back and re-familiarize ourselves with the concepts behind ACID.

ACID Support

We’ll begin with the textbook definition of the acronym known as ACID. We’ll define each term and discuss the context within which each is important:

  • Atomicity: This property, which provides the underpinnings of the concept of transactions, states that the database must provide facilities for grouping actions on data so that they occur in an “all or nothing” fashion. So if, for instance, a transaction is created in which one action credits an account and a related action debits another account, they must be guaranteed to occur (or not occur) as a single unit. This must hold not only during normal runtime operations but also under unexpected error conditions.
  • Consistency: A property closely related to atomicity which states that transactions performed on a database must transition the database from one valid state to another (from a systemic point of view). So if for instance, referential integrity or security constraints had been previously defined on a portion of data that is affected by a transaction, consistency guarantees that none of those defined constraints are violated as a result of the intended transaction.
  • Isolation: This feature applies to observed behavior around database events happening in a concurrent manner. It is meant to provide certain assurances around how one particular user’s database actions may be isolated from another’s. For this particular ACID property, there are often varying concurrency control options (i.e. isolation levels) that not only differ from one database to another, but sometimes also within the same database system. MarkLogic relies on a modern technique known as multi-version concurrency control (MVCC) to achieve isolation capability.
  • Durability: This ensures that once transactions have been committed to the database, they will remain so even in cases of unexpected interruptions of normal database operations (e.g. network failure, power loss, etc.). Essentially this is an assurance that once a database has acknowledged committing data, it will not “lose” the data.

In a database with full ACID support, all of the above properties will often work in conjunction, relying on concepts such as journaling and transaction check-pointing to protect against data corruption and other undesirable side-effects.

NoSQL and Java - A Basic Write Operation

Now that the textbook definition section is behind us, let’s get a little bit more concrete and explore some of these properties in the form of Java code. As mentioned previously, our exemplar NoSQL database will be MarkLogic. We’ll start with some of the housekeeping items first.

When coding in Java (or nearly any other language for that matter), to establish dialogue with a database, the first thing we must do is open a connection. In the world of MarkLogic, this is done by way of a DatabaseClient object. To obtain such an object, we employ the Factory Pattern and interrogate a DatabaseClientFactory object as follows:

// Open a connection on localhost:8072 with username/password
// credentials of admin/admin using DIGEST authentication
DatabaseClient client = DatabaseClientFactory.newClient("localhost", 8072, "admin", "admin", Authentication.DIGEST);
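
One housekeeping note: the DatabaseClient holds connection resources, so once the application is completely finished with it, those resources should be given back via its release() method. A minimal sketch (the try/finally placement is simply a common convention, not a MarkLogic requirement):

try {
    // ... perform all database work with the client ...
} finally {
    // Give back the connection resources held by the client
    client.release();
}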

Once that is established, there is another level of abstraction to work with. MarkLogic provides a number of features in its Java library and as such, it is helpful to logically group these features together for organization purposes. One of the ways in which we do this at the DatabaseClient level is to group functionality into a number of Manager classes. For our first example, we will be working with an XMLDocumentManager object as a means by which to perform a basic insert operation. To obtain an instance of XMLDocumentManager we again turn to a factory method, but this time from the DatabaseClient itself as follows:

// Get a document manager from the client 
XMLDocumentManager docMgr = client.newXMLDocumentManager();

When it comes to data, MarkLogic is what is considered a “document-oriented” NoSQL database. What this means from a Java point of view is that instead of relying on an O/R mapper to serialize complex objects into the rows and columns of a relational database, objects may simply be serialized to a language-neutral, self-describing document format, without having to go through complex mappings. More concretely, as long as your Java objects can be serialized into either XML (e.g. via JAXB or other means) or JSON (e.g. via Jackson or other libraries), they can be persisted to the database as-is, without having to be pre-modeled within the database.

Let’s go back to code and see:

// Establish a context object for the Customer class
JAXBContext customerContext = JAXBContext.newInstance(com.marklogic.samples.infoq.model.Customer.class);
// Get a new customer object and populate it
Customer customer = new Customer();
customer.setId(1L);
customer.setFirstName("Frodo")
        .setLastName("Baggins")
        .setEmail("frodo.baggins@middleearth.org")
        .setStreet("Bagshot Row, Bag End")
        .setCity("Hobbiton")
        .setStateOrProvince("The Shire");
// Get a handle for round-tripping the serialization
JAXBHandle customerHandle = new JAXBHandle(customerContext);
customerHandle.set(customer);
// Write the object to the DB
docMgr.write("/infoq/customers/customer-" + customer.getId() + ".xml", customerHandle);
System.out.println("Customer " + customer.getId() + " was written to the DB");

The above example uses JAXB, which is one of the ways to present a POJO to MarkLogic for persistence (others include JDOM, raw XML strings, JSON, and more). JAXB requires us to establish a context via the javax.xml.bind.JAXBContext class, which is done in the first line of the code. For our first example, we’re working with a JAXB-annotated Customer class and have created an instance and populated it with some data (NB: the example is for illustration purposes, so please refrain from critiques on how best to model the class). After that, we’re back to MarkLogic specifics. To persist our customer object, we first have to get a handle on it. Since we’ve chosen the JAXB approach for our example, we create a JAXBHandle using the previously instantiated context. Finally, we simply write the document to the database using our previously created XMLDocumentManager object, making sure to give it a URI (i.e. a key) for identity purposes.
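
For reference, the Customer class itself is nothing more than an ordinary JAXB-annotated POJO; there is nothing MarkLogic-specific in it. A rough sketch of what it might look like follows (the field names are inferred from the setters used above, and the fluent setters returning this are an assumption based on the chained calls, which is also why field-level JAXB access is used):

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "customer")
@XmlAccessorType(XmlAccessType.FIELD) // bind fields so the fluent setters don't confuse JAXB
public class Customer {

    private long id;
    private String firstName;
    private String lastName;
    private String email;
    private String street;
    private String city;
    private String stateOrProvince;

    public long getId() { return id; }
    public void setId(long id) { this.id = id; }

    public String getFirstName() { return firstName; }

    // Fluent setters allow the chained calls shown above
    public Customer setFirstName(String firstName) { this.firstName = firstName; return this; }
    public Customer setLastName(String lastName) { this.lastName = lastName; return this; }
    public Customer setEmail(String email) { this.email = email; return this; }
    public Customer setStreet(String street) { this.street = street; return this; }
    public Customer setCity(String city) { this.city = city; return this; }
    public Customer setStateOrProvince(String stateOrProvince) { this.stateOrProvince = stateOrProvince; return this; }
}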

After the above operation is complete, a customer object will be persisted inside of the database. The screen shot below shows the object in MarkLogic’s query console:

What’s notable (aside from the fact that our first customer is a famous Hobbit) is that there were no tables to create and no O/R mappers to configure and use.
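
The same holds if you prefer JSON over XML. A minimal sketch using a JSONDocumentManager and a JacksonHandle is shown below (the URI and field values here are made up purely for illustration):

// Get a JSON document manager from the same client
JSONDocumentManager jsonMgr = client.newJSONDocumentManager();

// Build a JSON representation of a customer with Jackson
ObjectNode customerJson = new ObjectMapper().createObjectNode();
customerJson.put("id", 99);
customerJson.put("firstName", "Bilbo");
customerJson.put("lastName", "Baggins");

// Wrap it in a handle and write it, keyed by a URI
JacksonHandle jsonHandle = new JacksonHandle();
jsonHandle.set(customerJson);
jsonMgr.write("/infoq/customers/customer-99.json", jsonHandle);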

A Transaction Example

OK, so we’ve seen a basic write operation, but what about the transaction capability? For this, let’s consider a simple use case.

Let’s say we have an e-commerce site called ABC-commerce. On its website, nearly anything may be purchased, as long as the item begins with the letter A, B or C. Like many modern e-commerce sites, it’s important that users can see an up-to-date and accurate view of inventory. After all, when making a purchase for artichokes, bongos or chariots, it’s important that consumers accurately know what’s in stock.

To help meet the above capability, we can turn to our ACID properties to make sure that when an item is purchased, the inventory reflects this purchase (in the form of an inventory reduction), and that this is done as an “all or nothing operation” from the database’s point-of-view. This way, whether the purchase transaction succeeds or fails, we’re guaranteed to have an accurate state of the inventory at a given point in time after the operation.

Let’s look at some code again:

client = DatabaseClientFactory.newClient("localhost", 8072, "admin", "admin", Authentication.DIGEST); 
XMLDocumentManager docMgr = client.newXMLDocumentManager();
Class[] classes = {
com.marklogic.samples.infoq.model.Customer.class,
com.marklogic.samples.infoq.model.InventoryEntry.class,
com.marklogic.samples.infoq.model.Order.class
};
JAXBContext context = JAXBContext.newInstance(classes);
JAXBHandle jaxbHandle = new JAXBHandle(context);
Transaction transaction = client.openTransaction();
try
{
// get the artichoke inventory
String artichokeUri="/infoq/inventory/artichoke.xml";
docMgr.read(artichokeUri, jaxbHandle, transaction); // pass the transaction so the read participates in the multi-statement transaction
InventoryEntry artichokeInventory = jaxbHandle.get(InventoryEntry.class);
System.out.println("Got the entry for " + artichokeInventory.getItemName());
// get the bongo inventory
String bongoUri="/infoq/inventory/bongo.xml";
docMgr.read(bongoUri, jaxbHandle, transaction);
InventoryEntry bongoInventory = jaxbHandle.get(InventoryEntry.class);
System.out.println("Got the entry for " + bongoInventory.getItemName());
// get the airplane inventory
String airplaneUri="/infoq/inventory/airplane.xml";
docMgr.read(airplaneUri, jaxbHandle, transaction);
InventoryEntry airplaneInventory = jaxbHandle.get(InventoryEntry.class);
System.out.println("Got the entry for " + airplaneInventory.getItemName());
// get the customer
docMgr.read("/infoq/customers/customer-2.xml", jaxbHandle);
Customer customer = jaxbHandle.get(Customer.class);
System.out.println("Got the customer " + customer.getFirstName());
// Prep the order
String itemName=null;
double itemPrice=0;
int quantity=0;
Order order = new Order().setOrderNum(1).setCustomer(customer);
LineItem[] items = new LineItem[3];
// Add 3 artichokes
itemName=artichokeInventory.getItemName();
itemPrice=artichokeInventory.getPrice();
quantity=3;
items[0] = new LineItem().setItem(itemName).setUnitPrice(itemPrice).setQuantity(quantity).setTotal(itemPrice*quantity);
System.out.println("Added artichoke line item.");
// Decrement artichoke inventory
artichokeInventory.decrementItem(quantity);
System.out.println("Decremented " + quantity + " artichoke(s) from inventory.");
// Add a bongo
itemName=bongoInventory.getItemName();
itemPrice=bongoInventory.getPrice();
quantity=1;
items[1] = new LineItem().setItem(itemName).setUnitPrice(itemPrice).setQuantity(quantity).setTotal(itemPrice*quantity);
System.out.println("Added bongo line item.");
// Decrement bongo inventory
bongoInventory.decrementItem(quantity);
System.out.println("Decremented " + quantity + " bongo(s) from inventory.");
// Add an airplane
itemName=airplaneInventory.getItemName();
itemPrice=airplaneInventory.getPrice();
quantity=1;
items[2] = new LineItem().setItem(itemName).setUnitPrice(itemPrice).setQuantity(quantity).setTotal(itemPrice*quantity);
System.out.println("Added airplane line item.");
// Decrement airplane inventory
airplaneInventory.decrementItem(quantity);
System.out.println("Decremented " + quantity + " airplane(s) from inventory.");
// Add all line items to the order
order.setLineItems(items);
// Add some notes to the order
order.setNotes("Customer may either have a dog or is possibly a talking dog.");
jaxbHandle.set(order);
// Write the order to the DB
docMgr.write("/infoq/orders/order-"+order.getOrderNum()+".xml", jaxbHandle);
System.out.println("Order was written to the DB");
jaxbHandle.set(artichokeInventory);
docMgr.write(artichokeUri, jaxbHandle, transaction);
System.out.println("Artichoke inventory was written to the DB");
jaxbHandle.set(bongoInventory);
docMgr.write(bongoUri, jaxbHandle, transaction);
System.out.println("Bongo inventory was written to the DB");
jaxbHandle.set(airplaneInventory);
docMgr.write(airplaneUri, jaxbHandle, transaction);
System.out.println("Airplane inventory was written to the DB");
// Commit the whole thing
transaction.commit();
}
catch (FailedRequestException fre)
{
transaction.rollback();
throw new RuntimeException("Things did not go as planned.", fre);
}
catch (ForbiddenUserException fue)
{
transaction.rollback();
throw new RuntimeException("You don't have permission to do such things.", fue);
}
catch (InventoryUnavailableException iue)
{
transaction.rollback();
throw new RuntimeException("It appears there's not enough inventory for something. You may want to do something about it...", iue); }

In the above example, we do a number of things within the context of a single transaction as follows:

  • Read the relevant customer and inventory data from the database
  • Create an order record for the given customer consisting of three line-items
  • For each line item, also decrement the inventory for the corresponding item names and quantities
  • Commit the whole thing as a single transaction (or roll it back in case of failure)

The code does this as an all-or-nothing unit of work, even though there are multiple updates. If anything goes wrong with any part of the transaction, a rollback is performed. Additionally, the queries that are performed (to get the customer and inventory data) fall within the scope of the transaction’s visibility. This highlights another concept behind MarkLogic’s transactional capability: multi-version concurrency control (MVCC). What this means is that the view returned by a query (e.g. to get the inventory in this case) is valid as of exactly that point in time in the database. Additionally, because this is a multi-statement transaction, MarkLogic does something it doesn’t normally do with read operations and takes a document-level lock (reads are normally lock-free) to prevent a “stale read” scenario in concurrent transaction processing.
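
In the Java API, the difference comes down to whether the Transaction object is passed to the read call. A minimal sketch, reusing the client, docMgr and jaxbHandle from the example above (the txn variable name is just illustrative):

// Ordinary read: lock-free, sees the database as of a single point in time
docMgr.read("/infoq/inventory/artichoke.xml", jaxbHandle);

// Read scoped to a multi-statement transaction: MarkLogic takes a
// document-level lock so the state cannot change out from under us
Transaction txn = client.openTransaction();
docMgr.read("/infoq/inventory/artichoke.xml", jaxbHandle, txn);
// ... make updates based on what was read, passing txn to each write ...
txn.commit();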

So if we run the code above successfully, the following output would result:

Got the entry for artichoke 
Got the entry for bongo
Got the entry for airplane
Got the customer Rex
Added artichoke line item.
Decremented 3 artichoke(s) from inventory.
Added bongo line item.
Decremented 1 bongo(s) from inventory.
Added airplane line item.
Decremented 1 airplane(s) from inventory.
Order was written to the DB
Artichoke inventory was written to the DB
Bongo inventory was written to the DB
Airplane inventory was written to the DB

What would result in the database is an order with three line-items, as well as updates to the inventory items to reduce their counts. To illustrate, the following shows the resulting order XML, as well as one of the inventory items (the airplane) appropriately decremented:

[Screenshot: the resulting order document and the decremented airplane inventory document, as shown in MarkLogic’s query console]

We see now that the airplane inventory count is down to 0, since we only had one in stock. So we can run the same program again and force a (somewhat contrived) exception in the transaction process by complaining that there is not enough inventory to meet the request. In this case, we choose to abort the whole transaction and get the following error:

Got the entry for artichoke 
Got the entry for bongo
Got the entry for airplane
Got the customer Rex
Added artichoke line item.
Decremented 3 artichoke(s) from inventory.
Added bongo line item.
Decremented 1 bongo(s) from inventory.
Added airplane line item.
Exception in thread "main" java.lang.RuntimeException: Things did not go as planned.
at com.marklogic.samples.infoq.main.TransactionSample1.main(TransactionSample1.java:148)
Caused by: java.lang.RuntimeException: It appears there's not enough inventory for something. You may want to do something about it...
at com.marklogic.samples.infoq.main.TransactionSample1.main(TransactionSample1.java:143)
Caused by: com.marklogic.samples.infoq.exception.InventoryUnavailableException: Not enough inventory. Requested 1 but only 0 available.
at com.marklogic.samples.infoq.model.InventoryEntry.decrementItem(InventoryEntry.java:61)
at com.marklogic.samples.infoq.main.TransactionSample1.main(TransactionSample1.java:103)

The cool thing that happens here is that no new updates are made to the database; the whole thing is rolled back. This is the behavior of a multi-statement transaction. If you come from the relational world you’re used to it; in the NoSQL world, however, it is not always available. MarkLogic does provide this capability.
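
For completeness, the InventoryUnavailableException at the bottom of the trace is not thrown by MarkLogic; it comes from the sample’s own InventoryEntry model when a decrement would drive the count below zero. A rough sketch of what that guard might look like (the field names are assumptions, the exception is assumed to be unchecked, and the JAXB annotations and exception class are omitted for brevity):

public class InventoryEntry {

    private String itemName;
    private double price;
    private int count;

    public String getItemName() { return itemName; }
    public double getPrice() { return price; }

    // Refuse to decrement past zero; otherwise reduce the on-hand count
    public void decrementItem(int quantity) {
        if (quantity > count) {
            throw new InventoryUnavailableException("Not enough inventory. Requested "
                    + quantity + " but only " + count + " available.");
        }
        count -= quantity;
    }
}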

Now, the above example leaves out many particulars of a real-world scenario; we might, for instance, choose different actions when inventory is not available (e.g. back-ordering). However, the need for atomicity is very real in many business cases, and meeting it is both difficult and error-prone without multi-statement transactions.

Optimistic Locking

In the above example, the logic is straightforward and very predictable, and in fact exercises all four ACID properties. However, the careful reader will notice that I alluded to “something that MarkLogic doesn’t normally do” in the form of document locking for a read operation. As a side-effect of MVCC, read operations are typically lock-free. This is accomplished by giving the reader visibility into a document at a specific point in time, even if an update is occurring during the read request. The view of the document as of the read request is preserved without having to inhibit the write operation by way of a lock. However, as already mentioned, in certain cases individual documents may effectively be locked for reads. One such way is to perform the read within the context of a transaction block, as in the example above. Why might we do this? In a highly concurrent application with transactions happening in milliseconds or less, we want to ensure that when we read an object with the intent to update it, some other thread won’t change its state before we complete the operation. In other words, we also want to ensure isolation with respect to our transaction. So when we do the read inside the transaction block we signal such intent, and a lock is taken to ensure a consistent view throughout the lifetime of the transaction.

As most developers know, however, locking comes at a price, even for a single document and even when there’s no real lock contention between concurrent operations. We may know, by design, that given the way our application behaves and the speed at which operations occur, the likelihood of such an overlap is low. Still, we may want a fail-safe just in case the overlap does happen. So what do we do when we want to perform a transactional update against a known view of an object’s state, but without the overhead of locking during the read operation? There are two things to do. The first is to take the read operation outside of the transaction context so it’s not implicitly locked. The second is to work with what is called a DocumentDescriptor object. Its purpose is to capture a snapshot of an object’s state at a point in time, so that the server can determine whether an update was made to the object between the time it was read and the time a subsequent update is requested. This is accomplished by obtaining the document descriptor in conjunction with a read operation, and then passing the same descriptor to the subsequent update operation, as in the following code example:

JAXBHandle jaxbHandle = new JAXBHandle(context); 

// get the artichoke inventory
String artichokeUri="/infoq/inventory/artichoke.xml";
// get a document descriptor for the URI
DocumentDescriptor desc = docMgr.newDescriptor(artichokeUri);
// read the document but now using the descriptor information
docMgr.read(desc, jaxbHandle);
// etc…
try
{
// etc…
// Write the order to the DB
docMgr.write("/infoq/orders/order-"+order.getOrderNum()+".xml", jaxbHandle);
System.out.println("Order was written to the DB");
// etc….
jaxbHandle.set(artichokeInventory);
docMgr.write(desc, jaxbHandle, transaction); // NOTE: using the descriptor again
// etc….
transaction.commit();
}
// etc…
catch (FailedRequestException fre)
{
// Do something about the failed request
}

Doing so ensures that reads do not take locks and that locking happens only with the update operation. In this case, however, we’re still technically susceptible to another thread “sneaking in” and updating the same document between the time we read it and the time we update it. Using the technique above, if this happens an exception will be thrown to let us know. This is what is known as optimistic locking, which, technically speaking, is really the act of not locking during a read because we’re optimistic that changes won’t occur before we do the subsequent update. When we do this, we’re effectively telling the database that most of the time we don’t expect an isolation violation, but that in case there is a problem, we’d like it to keep an eye out for us. The upside is that we don’t engage in lock semantics for reads. However, in the (we hope) rare event that the object we’ve read is updated by another thread before we’ve had a chance to update it, MarkLogic keeps track of update versions behind the scenes and lets us know that someone beat us to the punch by throwing a FailedRequestException.
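
A common way to put that FailedRequestException to work is to wrap the read-modify-write sequence in a small retry loop: on a conflict, simply re-read the fresh state and try again. A hedged sketch of that pattern, reusing docMgr, jaxbHandle and artichokeUri from above (the retry budget of three attempts is arbitrary):

for (int attempt = 1; attempt <= 3; attempt++) {
    // Re-read the document (and its tracked version) on every attempt
    DocumentDescriptor desc = docMgr.newDescriptor(artichokeUri);
    docMgr.read(desc, jaxbHandle);
    InventoryEntry entry = jaxbHandle.get(InventoryEntry.class);
    // ... modify 'entry' as needed ...
    jaxbHandle.set(entry);
    try {
        // The write succeeds only if the document is still at the version we read
        docMgr.write(desc, jaxbHandle);
        break; // success
    } catch (FailedRequestException fre) {
        if (attempt == 3) {
            throw fre; // another writer kept beating us; give up
        }
        // otherwise loop around and retry against the fresh state
    }
}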

One other thing to note is that optimistic locking has to be explicitly enabled for updates and deletes, essentially telling the server to keep track of “versions” behind the scenes. A full example of setting the server configuration and exercising optimistic locking may be found here.
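
As a rough sketch of what enabling this looks like from the Java API, the REST server is told to require a document version on every update or delete. The exact method names have shifted somewhat across versions of the client library, so treat the following as an approximation and defer to the linked example for your version:

// Ask the server to require document versions on updates and deletes,
// which is what enables the optimistic-locking check on write
ServerConfigurationManager configMgr = client.newServerConfigManager();
configMgr.readConfiguration();
configMgr.setUpdatePolicy(ServerConfigurationManager.UpdatePolicy.VERSION_REQUIRED);
configMgr.writeConfiguration();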

Developers who use software version control tools (e.g. CVS, SVN, Git) are familiar with this behavior when working on code modules. Most of the time, we “check out” a module without locking it, knowing that others usually won’t be working on the same module concurrently. However, if we try to commit a change made against what the repository considers an “old” copy, it tells us that we cannot complete the operation because someone else has updated the module since we read it.

Conclusion

The examples provided above are simple ones; however, the topics – ACID transactions, optimistic locking – are by no means trivial and are typically not associated with NoSQL databases. The goal of MarkLogic Server is to provide these powerful capabilities in a way that is easy for developers to leverage without sacrificing the power of the features themselves. For more information on these and other topics, feel free to visit this website. For the multi-statement transaction example used in this article, please visit GitHub.

About the Author

Ken Krupa, Chief Field Architect at MarkLogic Corporation, has 25 years of professional IT experience and a unique breadth and depth of expertise within nearly all aspects of IT architecture. Prior to joining MarkLogic, Ken consulted at some of the largest North American financial institutions during difficult economic times, advising senior and C-level executives. Before that, he consulted with Sun Microsystems as a direct partner and served as Chief Architect of GFI Group, a Wall St. inter-dealer brokerage. Today, Ken continues to pursue both individual and community-based engineering activities. His current intellectual pursuits include community science as well as the study of applying purely declarative, rules-based logic frameworks to complex business and IT problems. Ken communicates with the world via Twitter (@kenkrupa) and his blog, kenkrupa.wordpress.com.
