Event sourcing is a way of persisting your application's state by storing the history of events that determines its current state. Keeping consistency in event sourcing is hard because of the distributed nature of event-driven systems. Throughout this document let's assume we're using the event sourcing pattern, where the stream of events is the source of truth for entities (and aggregates).
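To make that assumption concrete, here is a minimal sketch in TypeScript of how the current state of an entity is nothing more than a fold over its ordered event stream. The event and state shapes are illustrative, not taken from any specific framework.

```typescript
// Illustrative event and state shapes for a customer entity.
type CustomerEvent =
  | { type: "CustomerCreated"; id: string; email: string }
  | { type: "CustomerRenamed"; id: string; name: string }
  | { type: "CustomerDeleted"; id: string };

interface CustomerState {
  id: string;
  email?: string;
  name?: string;
  deleted: boolean;
}

// The current state is rebuilt by replaying the ordered history of events.
function replay(events: CustomerEvent[]): CustomerState | undefined {
  let state: CustomerState | undefined;
  for (const event of events) {
    switch (event.type) {
      case "CustomerCreated":
        state = { id: event.id, email: event.email, deleted: false };
        break;
      case "CustomerRenamed":
        if (state) state.name = event.name;
        break;
      case "CustomerDeleted":
        if (state) state.deleted = true;
        break;
    }
  }
  return state;
}
```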

Let's consider the first scenario. Assume a system receives two commands at the same time: one to delete an entity and another to rename it. Such a concurrency issue does not necessarily cause problems. It may be fine to remove the entity first and rename it afterwards, just as it may be fine to rename it first and delete it later. It takes a good understanding of the business domain to tell whether a given order of events is acceptable. Different types of entities may or may not accept being renamed after being removed; sometimes it simply does not matter. In this scenario it may not even matter in which order the two commands are recorded in the event stream. In the later examples, however, I'm assuming that both the event stream and the event store are capable of preserving the order of events.
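Whether a rename after a delete should be rejected, ignored, or accepted is a business rule rather than a technical one, and it typically lives in the aggregate's command handler. Here is a minimal sketch, reusing the CustomerEvent type from the previous example; the class and method names are illustrative.

```typescript
// The aggregate encodes the domain decision: one domain may reject a rename
// after a delete, another may treat it as a no-op or allow it outright.
class CustomerAggregate {
  private deleted = false;
  private name?: string;

  // Rebuild aggregate state from its event stream before handling a command.
  load(events: CustomerEvent[]): void {
    for (const e of events) {
      if (e.type === "CustomerDeleted") this.deleted = true;
      if (e.type === "CustomerRenamed") this.name = e.name;
    }
  }

  rename(id: string, newName: string): CustomerEvent[] {
    // Business rule: in this sketch, renaming a deleted customer is rejected.
    if (this.deleted) {
      throw new Error("Cannot rename a deleted customer");
    }
    return [{ type: "CustomerRenamed", id, name: newName }];
  }
}
```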

In the second scenario, imagine a concurrency issue where two customer entities are being created with the same e-mail address. The e-mail address here acts as a unique key, or an external identifier. Preventing two entities with the same external key from existing in the system may require a more complex solution where pessimistic locking is used. With command query responsibility segregation in place, it may be necessary to first obtain an aggregate read/write lock with an automatic timeout (issuing distributed locks requires a relatively complex implementation), then forcibly update the view to make sure a customer with the given e-mail address does not exist yet, then accept and emit the customer created event, release the lock, and return the command result to the requestor.
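A sketch of that pessimistic flow might look like the following. The DistributedLock, ReadModel, and EventStore interfaces are assumptions standing in for whatever lock service, projection, and event store you actually use.

```typescript
import { randomUUID } from "node:crypto";

// All three interfaces are illustrative assumptions, not a specific library's API.
interface DistributedLock {
  // Resolves to a release function; the TTL makes the lock expire automatically
  // if this process dies before releasing it.
  acquire(key: string, ttlMs: number): Promise<() => Promise<void>>;
}
interface ReadModel {
  refresh(): Promise<void>; // force the projection to catch up with the event stream
  customerExistsByEmail(email: string): Promise<boolean>;
}
interface EventStore {
  append(stream: string, events: object[]): Promise<void>;
}

async function createCustomer(
  email: string,
  lock: DistributedLock,
  readModel: ReadModel,
  store: EventStore
): Promise<string> {
  // Lock on the unique key itself so that competing requests serialize.
  const release = await lock.acquire(`customer-email:${email}`, 5_000);
  try {
    await readModel.refresh();
    if (await readModel.customerExistsByEmail(email)) {
      throw new Error(`Customer with e-mail ${email} already exists`);
    }
    const id = randomUUID();
    await store.append(`customer-${id}`, [{ type: "CustomerCreated", id, email }]);
    return id;
  } finally {
    await release();
  }
}
```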

Another solution to the problem of two requests attempting to create a unique entry is a write-ahead log. In this approach each writer first records a log entry stating that it attempted to create a specific entity, together with an internal, globally unique identifier generated by the write model. If another writer creates a similar entity and makes a similar entry, it is the responsibility of the read model to deduplicate the attempts using the components of the unique key (for instance the e-mail address) and to present only the entity whose creation attempt it selected. The write model then emits the entity created event only for the writer whose internal identifier the read model confirms as the one pending creation; a request that cannot confirm with the read model that its own entity details and internal identifier are the ones pending creation fails.
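A sketch of this creation-attempt flow follows, assuming a hypothetical AttemptLog and a read model that exposes which attempt it kept after deduplicating by e-mail address; the EventStore interface is the same assumed one as above.

```typescript
import { randomUUID } from "node:crypto";

// Both interfaces are illustrative assumptions.
interface AttemptLog {
  recordAttempt(email: string, internalId: string): Promise<void>;
}
interface DedupReadModel {
  // Returns the internal id of the attempt the read model selected for this e-mail.
  winningAttemptFor(email: string): Promise<string | undefined>;
}

async function createWithAttemptLog(
  email: string,
  attempts: AttemptLog,
  readModel: DedupReadModel,
  store: EventStore // assumed EventStore interface from the previous sketch
): Promise<string> {
  const internalId = randomUUID();
  // 1. Record the intent; concurrent writers may do the same for this e-mail.
  await attempts.recordAttempt(email, internalId);
  // 2. Ask the read model which attempt survived deduplication.
  const winner = await readModel.winningAttemptFor(email);
  if (winner !== internalId) {
    throw new Error(`Creation of customer ${email} lost to a concurrent request`);
  }
  // 3. Only the winning writer emits the actual "created" event.
  await store.append(`customer-${internalId}`, [
    { type: "CustomerCreated", id: internalId, email },
  ]);
  return internalId;
}
```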

In the third scenario, let's say we have a concurrency issue where two commands try to update the same, already created entity. One solution is to attempt to merge the commands, which is challenging and cannot always be done in a distributed system. A second solution uses optimistic locking: each command first refreshes its view of the read model and then carries its own expected aggregate (or whole read model) version number. The update command is issued with that expected version, so if another request updates the entity first, the later one fails. The failure can be detected by either the write model or the read model whenever the expected version of the aggregate (or model) no longer matches the version observed earlier. In a third solution, both write models post their events and are reconciled by calling the read model just before returning the command result; if any command fails to observe an update that matches its success criteria, that request fails.
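Here is a sketch of the optimistic variant, assuming an event store that accepts an expected stream version on append; many stores offer something similar (expected-revision checks), but the exact signature below is an assumption. It reuses the CustomerAggregate from the first scenario.

```typescript
// An assumed event store API with optimistic concurrency on append.
interface VersionedEventStore {
  read(stream: string): Promise<{ events: CustomerEvent[]; version: number }>;
  // Rejects if the stream's current version no longer matches expectedVersion.
  appendExpecting(
    stream: string,
    expectedVersion: number,
    events: CustomerEvent[]
  ): Promise<void>;
}

async function renameCustomer(
  id: string,
  newName: string,
  store: VersionedEventStore
): Promise<void> {
  const { events, version } = await store.read(`customer-${id}`);
  const aggregate = new CustomerAggregate();
  aggregate.load(events);
  const newEvents = aggregate.rename(id, newName);
  // If a concurrent command appended first, this throws and the caller
  // can retry against the fresh version or fail the request.
  await store.appendExpecting(`customer-${id}`, version, newEvents);
}
```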

How to resolve concurrent commands in event sourcing

In any scenario, one should bear in mind a few general approaches:

  • Try to design a system that doesn't require distributed consistency. Unfortunately, that is rarely possible for complex systems.
  • Try to reduce the number of inconsistencies by modifying one data source at a time.
  • Consider event-driven architecture. A big strength of event-driven architecture, in addition to loose coupling, is a natural way of achieving data consistency: events act as a single source of truth, or events are produced as a result of change data capture.
  • More complex scenarios might still require synchronous calls between services, failure handling, and compensations, as sketched below. Know that sometimes you may have to reconcile afterward. Design your service capabilities to be reversible, and decide how you will handle failure scenarios and achieve consistency early in the design phase.
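As a sketch of that last point, a compensation simply reverses an earlier step when a later one fails; both service clients below are hypothetical.

```typescript
// If the second call fails, the first one is explicitly undone
// instead of relying on a distributed transaction.
interface CrmService {
  createCustomer(email: string): Promise<string>; // returns customer id
  deleteCustomer(id: string): Promise<void>;      // the reversible capability
}
interface MailingListService {
  subscribe(customerId: string, email: string): Promise<void>;
}

async function onboardCustomer(
  email: string,
  crm: CrmService,
  mailing: MailingListService
): Promise<string> {
  const id = await crm.createCustomer(email);
  try {
    await mailing.subscribe(id, email);
  } catch (err) {
    // Compensation: reverse the earlier change to restore consistency,
    // then surface the failure to the caller.
    await crm.deleteCustomer(id);
    throw err;
  }
  return id;
}
```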
