
Event Sourcing in an Unreliable World


Examples of event-sourced systems commonly come from process-oriented domains like e-commerce, where incoming commands generate events and where we are in control of the process. But there are domains without processes, where we are collecting external events; domains that are intrinsically unreliable, with unreliable event sources and transports, Lorenzo Nicora explained at the recent Microservices Conference µCon London 2017. Examples include logistics and transport domains, as well as IoT and mobile apps that depend on an unreliable network and may be temporarily disconnected.

Nicora, platform engineer at Buildit@wipro digital, noted that in these unreliable domains things happen in the external world that are out of our control; events can be missing or arrive out of order. We just have to gather events as they arrive and afterwards try to draw a picture of the external world that makes sense. Essentially, we have to drop the distinction between commands and events that we see in process-oriented domains: accept all incoming events, validate and store them as fast as possible, and only afterwards apply logic to build the read models. Nicora believes this to be the same kind of mind shift as moving from ACID to BASE in the data storage world, and he calls this approach:

Write Fast, Think Later.
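
A minimal sketch of what this write path could look like, assuming a hypothetical RawEvent type, EventStore interface and ingest function that are not from Nicora's talk:

```kotlin
import java.time.Instant

// A raw incoming event: stored as received, without interpreting it.
// All names here are illustrative, not from the talk.
data class RawEvent(
    val deviceId: String,
    val sourceTimestamp: Instant,  // when the event was created at the source
    val payload: String
)

// Append-only store; implementations could be a log, a queue, a table, etc.
interface EventStore {
    fun append(event: RawEvent)
}

// "Write fast, think later": accept the event, run only stateless checks,
// persist it immediately. No domain logic, no ordering, no read model here.
fun ingest(event: RawEvent, store: EventStore, isWellFormed: (RawEvent) -> Boolean) {
    require(isWellFormed(event)) { "rejecting malformed or forged event" }
    store.append(event)
    // Read models are built later, asynchronously, from the stored events.
}
```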

Validating events in this scenario is mainly a security check to protect the system from forged or malicious events, and it is completely stateless. It is therefore possible to run validation and storing of events in parallel, which is great from a scalability perspective. Using a microservices architecture, separate services can process writes of events, build read models, and serve queries; since every service scales independently, this gives a very scalable solution.
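
As an illustration of what a completely stateless check could be, the sketch below verifies an HMAC signature on the incoming payload; the HMAC scheme and shared-secret setup are assumptions for the example, not details from the talk. Because everything needed to accept or reject an event travels with the request, any number of validator instances can run in parallel:

```kotlin
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Stateless validation: no lookups against stored state, only the request
// payload, its signature, and a shared secret. Illustrative scheme only.
fun isAuthentic(payload: ByteArray, signature: ByteArray, secret: ByteArray): Boolean {
    val mac = Mac.getInstance("HmacSHA256")
    mac.init(SecretKeySpec(secret, "HmacSHA256"))
    val expected = mac.doFinal(payload)
    // Constant-time comparison to avoid leaking information through timing.
    return java.security.MessageDigest.isEqual(expected, signature)
}
```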

Advantages of storing all events in these unreliable domains include the possibility to:

  • Reorder events that arrive late or out of order
  • Guess missing events
  • Rebuild read models if inconsistencies are found or when late events are received
  • Delay read models to give events time to arrive (see the sketch after this list)
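
A sketch of how the last two points could be combined when building a read model: replay the stored events, ordered by source timestamp, but only up to a cut-off in the past so that late events have time to arrive before they are folded in. The read model and grace window are hypothetical, and RawEvent is the type from the earlier sketch:

```kotlin
import java.time.Duration
import java.time.Instant

// Hypothetical read model: the latest reported state per device.
data class DeviceState(val deviceId: String, val state: String, val at: Instant)

// Rebuild the read model from scratch by replaying stored events in source-
// timestamp order, ignoring anything newer than `now - graceWindow` so that
// late arrivals can still be picked up by the next rebuild.
fun rebuildReadModel(
    storedEvents: List<RawEvent>,   // RawEvent as defined in the earlier sketch
    now: Instant,
    graceWindow: Duration
): Map<String, DeviceState> {
    val cutOff = now.minus(graceWindow)
    return storedEvents
        .filter { it.sourceTimestamp <= cutOff }   // delay the read model
        .sortedBy { it.sourceTimestamp }           // reorder late/out-of-order events
        .associate { e ->
            e.deviceId to DeviceState(e.deviceId, e.payload, e.sourceTimestamp)
        }
}
```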

When ordering events, the only meaningful timestamp on an event is the time it was created at the source. The problem is that this timestamp is only reliable per device emitting events, which means we have no reliable global event order, and there is no easy way around this. If ordering between events from different devices is important, we have to deal with that in logic after the events are stored. Nicora also mentioned that with this design the read models will always be delayed; how much depends on the domain and the implementation.
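
The per-device guarantee can be made explicit when processing stored events: within one device the source timestamps give a usable order, while merging the per-device streams into a single global order is left to domain logic. A brief sketch, again reusing the hypothetical RawEvent type from above:

```kotlin
// Source timestamps are only comparable within a single device, so group
// events first, then sort within each group. Any ordering *across* devices
// has to be decided by domain logic applied to these per-device streams.
fun orderPerDevice(storedEvents: List<RawEvent>): Map<String, List<RawEvent>> =
    storedEvents
        .groupBy { it.deviceId }
        .mapValues { (_, events) -> events.sortedBy { it.sourceTimestamp } }
```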

Nicora concluded by noting that if you are working in an unreliable world with high scalability requirements, then this weak write-consistency model may be a good fit. He emphasized, though, that when using it, we should not try to make sense of events on write; instead, we should focus on building consistent read models.

Next year’s conference is scheduled for November 5-6, 2018.
