Comparison of Event Sourcing with Stream Processing

Event sourcing and CQRS are two patterns that have emerged in the Domain-Driven Design (DDD) community. Stream processing builds on similar ideas but has emerged in a different community, Martin Kleppmann noted in his presentation comparing event sourcing with stream processing at the Domain-Driven Design Europe conference earlier this year.

Kleppmann, with a background in building large-scale data systems for Internet companies but now at the University of Cambridge, notes that when comparing enterprise software with the systems of Internet companies, the main distinction is where the complexity is found. On the enterprise side the complexity mainly lies in the domains and the business logic. In Internet companies the domains are often relatively simple; instead it's the vast volume of data arriving at very high speed that creates complexity in the data infrastructure. Although the complexity on the two sides has very different causes, Kleppmann has found that the solutions show some similarity, with event sourcing on the enterprise side and stream processing, or sequences of immutable events, on the Internet side.
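To make the overlap concrete, event sourcing derives an aggregate's current state from an ordered history of immutable events rather than storing it as a mutable record. A minimal sketch of that idea (the BankAccount aggregate and its event type are hypothetical examples, not code from the talk):

```java
import java.util.List;

// Hypothetical aggregate whose state is derived purely from immutable events.
public class BankAccount {

    // Immutable event: positive amounts for deposits, negative for withdrawals.
    public record MoneyMoved(String accountId, long amountCents) {}

    // The current balance is never stored; it is a fold over the event history.
    public static long balance(List<MoneyMoved> history) {
        return history.stream()
                      .mapToLong(MoneyMoved::amountCents)
                      .sum();
    }

    public static void main(String[] args) {
        List<MoneyMoved> events = List.of(
                new MoneyMoved("acc-1", 10_000),   // deposit 100.00
                new MoneyMoved("acc-1", -2_500));  // withdraw 25.00
        System.out.println(balance(events));       // prints 7500
    }
}
```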

One tool for handling event streams that Kleppmann has been involved with is Kafka, originally developed for aggregating log files and processing events. The principle it is built on is very simple: he compares it to an append-only log file to which new messages or events are added at the end. This creates an ordered sequence of records that in general can represent any stream of events. A significant feature of Kafka is its ability to handle events at very high volumes by distributing the log over multiple servers.
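As a sketch of this append-only model, producing to a Kafka topic amounts to appending records to a partitioned log. The broker address and the "events" topic below are assumptions for illustration, not details from the talk:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EventAppender {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Each send appends one record to the end of a partition of the "events" topic;
            // records with the same key go to the same partition, so their order is preserved.
            producer.send(new ProducerRecord<>("events", "order-42", "OrderPlaced"));
            producer.send(new ProducerRecord<>("events", "order-42", "OrderShipped"));
        }
    }
}
```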

One use of Kafka that Kleppmann thinks is quite close to event sourcing is publishing database change events, where an event is emitted for each record updated in the database, e.g. as the result of an update to an aggregate or entity. A consumer can then read the events as they are published and apply them to its local version of the data. He notes that this use case resembles database replication, where writes to a leader database are replicated to a replica.
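A consumer of such a change-event stream can maintain its own materialized copy of the data by applying each event in order. The sketch below assumes a hypothetical "db-changes" topic whose records carry the key and latest value of the changed row; it is an illustration of the idea, not code from the talk:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ChangeEventFollower {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed local broker
        props.put("group.id", "local-view");                // hypothetical consumer group
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        Map<String, String> localView = new HashMap<>();    // local copy of the data

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("db-changes"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Apply each change event to the local view, much like a replica
                    // applying writes from its leader database.
                    localView.put(record.key(), record.value());
                }
            }
        }
    }
}
```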

Kafka was originally designed for use cases where losing perhaps a percent of all messages was acceptable, but with more operational experience and maturity the durability expectations have increased, and today its replication is at the level of many relational database replication systems. Work is in progress on adding transaction support to Kafka, which would allow messages to be published to several partitions in one atomic action. Kleppmann notes that although event streams and stream processing are still fairly young technologies, the trend is clearly towards more database-like guarantees.
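Such transaction support would let a producer group writes to several partitions so that consumers see either all of them or none. A hedged sketch of what this could look like with a transactional producer; the transactional.id setting and the init/begin/commit calls show one possible shape of the API, and the topics are hypothetical:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AtomicPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed local broker
        props.put("transactional.id", "publisher-1");      // identifies this producer's transactions
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            // Both records go to different topics (and hence different partitions),
            // yet become visible to consumers atomically when the transaction commits.
            producer.send(new ProducerRecord<>("orders", "order-42", "OrderPlaced"));
            producer.send(new ProducerRecord<>("payments", "order-42", "PaymentAuthorized"));
            producer.commitTransaction();
        }
    }
}
```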

What Kleppmann finds especially interesting when comparing event sourcing with stream processing is that these two similar ideas have appeared in two very different communities that barely talk to each other, which suggests there may be some fundamental underlying idea that makes the approach important.

In a presentation at QCon London Kleppmann talked about using event streams and Kafka for keeping data in sync across heterogeneous systems.

The slides from Kleppmann’s presentation are available here.

Next year’s Domain-Driven Design Europe conference is scheduled for late January 2017.
