CQRS, Read Models and Persistence


Storing events in a relational database, with each event's identity generated as a globally unique and sequentially increasing number, is an important and perhaps uncommon decision in an event-sourced Command Query Responsibility Segregation (CQRS) system, Konrad Garus writes in three blog posts describing his experiences from a recent project. In his writing he focuses on how to achieve consistency, how the read models are populated by consuming an event stream, and the persistence techniques used in the models.

Garus describes the system as intended for internal use in companies with hundreds or thousands of users, and notes that it operates at a relatively low scale. Despite its limited scalability, the very simple implementation handles many hundreds of events per minute without problems.

When using sequential event identities, read models can be updated by regularly polling the event store: using the identity of the last processed event as a starting point, reading a batch of events from the store, and processing them in sequence in a single thread. This technique makes each read model responsible for its own state, avoiding any infrastructure or server-side dependencies.
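The polling technique described above, including the lag calculation mentioned below, can be sketched roughly as follows. This is a minimal in-memory illustration; the names (`EventStore`, `ReadModel`, `poll`) are illustrative and not taken from Garus's project, which stores events in a relational database.

```python
class EventStore:
    """Append-only store; event ids are globally unique and increasing."""

    def __init__(self):
        self._events = []          # list index + 1 == event id

    def append(self, payload):
        self._events.append(payload)
        return len(self._events)   # id of the newly written event

    def read_batch(self, after_id, limit):
        """Return (id, payload) pairs with id > after_id, in id order."""
        return [(after_id + i + 1, p)
                for i, p in enumerate(self._events[after_id:after_id + limit])]

    def max_id(self):
        return len(self._events)


class ReadModel:
    """Owns its own cursor; updated by single-threaded polling."""

    def __init__(self, store):
        self.store = store
        self.last_processed = 0    # id of the last event applied
        self.state = []

    def poll(self, batch_size=100):
        # Read the next batch after our cursor and apply events in order.
        for event_id, payload in self.store.read_batch(self.last_processed,
                                                       batch_size):
            self.state.append(payload)       # apply the event
            self.last_processed = event_id   # advance the cursor

    def lag(self):
        """How many events behind the store this read model currently is."""
        return self.store.max_id() - self.last_processed
```

Because the read model keeps its own cursor (`last_processed`), it needs no server-side push mechanism, and `lag()` shows how the distance to the current maximum id can be computed.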

One advantage Garus sees with this model, where events are always processed in the order they were created, is that although a read model may lag behind by a few seconds or longer, it is always internally consistent. It is also possible to calculate how far behind it is by comparing the identity of the last processed event with the current maximum.

Garus also notes that this model makes it possible to mitigate a problem with eventual consistency from a client-side perspective. By returning the identity of the last event written while executing a command, a client can use this identity to wait until a read model has been updated to reflect the state after that event has been processed.
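The client-side wait described above can be sketched as a simple blocking helper. The function and class names here are hypothetical stand-ins, assuming a read model that exposes the id of the last event it has applied:

```python
import time


class PollingReadModel:
    """Minimal stand-in: exposes the id of the last event it applied."""

    def __init__(self):
        self.last_processed = 0


def wait_for_read_model(model, target_event_id, timeout=5.0, interval=0.05):
    """Block until the read model has processed the given event id.

    `target_event_id` is the id returned when the command was executed.
    Raises TimeoutError if the read model does not catch up in time.
    """
    deadline = time.monotonic() + timeout
    while model.last_processed < target_event_id:
        if time.monotonic() >= deadline:
            raise TimeoutError("read model did not catch up in time")
        time.sleep(interval)
```

A client would call a command, receive the new event's id in the response, and then call `wait_for_read_model` before issuing a query, so the query is guaranteed to see at least the state produced by that event.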

Garus believes we have a tendency to use tools that are too complex for the actual customer requirements at hand. In his view, the experiences from the described project show that even when the use of CQRS is warranted, a simple implementation can sometimes solve the problem.

In a discussion at Reddit started after the first blog post, it was questioned what value a relational database provides, with some arguing that the optimal storage is a stream, e.g. an append-only event log. Advantages and disadvantages of event sourcing compared to a relational model were also raised in the same discussion.

In a blog post in late 2014, Vaughn Vernon also proposed using a relational database, PostgreSQL, for storing serialized DDD aggregates, among other things due to its support for ACID transactions and JSON.

Earlier this year, Vladimir Khorikov compared three types of CQRS.


Community comments

  • The Database as a Value

    by Joakim Tengstrand,

    Hi!

    Nice article. I think Rich Hickey has nailed it with the Datomic database. It is an immutable database where everything is stored as facts. A fact is a smaller piece than a row in a table and is connected to a point in time through a transaction ID. Very simple and powerful, with good performance! Take a look: www.infoq.com/presentations/Datomic-Database-Value

  • Not to be dismissive

    by Will Hartung,

    But I generally don't like this style of InfoQ article. It's basically a "book review" of somebody else's blog posts, and doesn't really add anything to what was said.

    A better way to handle this kind of piece, rather than summarizing the blog posts, would be to simply open a topic on recent discussion of CQRS (in this case) with a list of links to other recent posts on the topic (and perhaps how they might relate to each other). Summarizing the posts themselves is just padding the article, frankly, and you don't realize "Oh, this isn't really original content at all, I need to go to the blogs" until you're deep into the article, which is a waste of time.

  • Re: Not to be dismissive

    by Lars Kemmann,

    It's also unfortunate that what we often get in these articles is not as deep or precise as the original post, since it is just written as a summary.

    I agree with your suggestion for how to improve the editorial content.

  • Re: Not to be dismissive

    by Ronald Miura,

    Well, anything in the 'News' section - in contrast to the 'Articles' section - is not a full-blown, original article. One may argue that the two sections aren't presented very distinctly, but it is written at the top of the page. As long as it has links to the original articles, and it is clear who the original author is, this shouldn't be an issue.

    And personally, I actually like the summaries, since I usually just skim them to see if anything piques my interest, instead of reading a 2000-word blog post in 24pt font (oh, Medium... :))
