Problems of the “One Database per Microservice” Approach | Microservice Architecture — Ep. 16
As discussed in the previous post, the proper approach to data ownership in a microservice architecture is for each service to have its own database, hidden behind a network API.
While this approach enables independent service development and deployment, it introduces significant downsides:
- Accessing foreign data now requires a network call, typically adding 100–200 ms of latency.
- Cross-service joins become inefficient: data must be fetched from multiple services, normalized, and only then combined.
- Cross-service transactions get much more complicated — some services may use storage technologies that don’t support transactions at all!
How can we cope with these problems?
- Cache foreign data locally to reduce network calls. This improves latency, but introduces tough questions: what to cache and when to invalidate (see the caching sketch after this list).
- For complex joins, create a dedicated microservice that gathers data on a schedule and stores the join results in its own database; a benefit here is that it can also keep and serve the results of previous computations (see the aggregation sketch below).
- For cross-service transactions, use sagas. Each service defines forward and compensating actions, coordinated either via orchestration (a separate coordinator service) or choreography (event-driven coordination configured in each participating microservice). See the saga sketch below.
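To make the caching idea concrete, here is a minimal sketch of a local cache for foreign data with a TTL and explicit invalidation. The class name and the `fetch_remote` callable are hypothetical stand-ins for whatever client you use to call the owning service; a real setup would also subscribe to that service's change events to drive invalidation.

```python
import time

class ForeignDataCache:
    """Local cache for data owned by another service (illustrative sketch).

    Entries expire after a TTL so stale foreign data is eventually refreshed;
    explicit invalidation is still needed when the owning service announces a change.
    """

    def __init__(self, fetch_remote, ttl_seconds=60):
        self._fetch_remote = fetch_remote   # hypothetical callable, e.g. an HTTP call to the owning service
        self._ttl = ttl_seconds
        self._entries = {}                  # key -> (value, expires_at)

    def get(self, key):
        entry = self._entries.get(key)
        if entry and entry[1] > time.time():
            return entry[0]                 # cache hit: no network call
        value = self._fetch_remote(key)     # cache miss: one network call to the owning service
        self._entries[key] = (value, time.time() + self._ttl)
        return value

    def invalidate(self, key):
        # Call this from a handler for the owning service's change events.
        self._entries.pop(key, None)
```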
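The aggregation sketch below shows roughly what the scheduled join service could do: pull data from the owning services, combine it, and store the result as a timestamped snapshot so older computations stay available. The `orders_client`, `customers_client`, and stub classes are hypothetical; a real service would use its own database instead of the in-memory store shown here.

```python
import time

def refresh_order_report(orders_client, customers_client, report_store):
    """Join orders with customer data and persist the result as a snapshot.

    Intended to run on a schedule (cron, a scheduler sidecar, etc.).
    Keeping every snapshot lets the service also serve previous computations.
    """
    orders = orders_client.list_orders()                                  # network call to the orders service
    customers = {c["id"]: c for c in customers_client.list_customers()}   # network call to the customers service

    joined = [
        {
            "order_id": o["id"],
            "customer_name": customers.get(o["customer_id"], {}).get("name"),
            "total": o["total"],
        }
        for o in orders
    ]
    report_store[time.time()] = joined   # keyed by timestamp so older results remain queryable

# Hypothetical usage with stubbed clients standing in for real network calls:
class _StubOrders:
    def list_orders(self):
        return [{"id": 1, "customer_id": 7, "total": 42.0}]

class _StubCustomers:
    def list_customers(self):
        return [{"id": 7, "name": "Alice"}]

report_store = {}
refresh_order_report(_StubOrders(), _StubCustomers(), report_store)
```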
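Finally, a minimal orchestration-style saga sketch: each step pairs a forward action with a compensating action, and the coordinator undoes the already-completed steps in reverse order if a later step fails. The step names and lambdas are purely illustrative; in practice each callable would be a network call to a participating service, and the coordinator would also handle retries and persistence of saga state.

```python
class SagaStep:
    """One saga participant: a forward action plus its compensating action."""
    def __init__(self, name, forward, compensate):
        self.name = name
        self.forward = forward
        self.compensate = compensate

def run_saga(steps):
    """Execute forward actions in order; on failure, compensate completed steps in reverse."""
    completed = []
    try:
        for step in steps:
            step.forward()
            completed.append(step)
    except Exception:
        for step in reversed(completed):
            step.compensate()   # best effort; a real coordinator also retries and records progress
        raise

# Hypothetical order-placement saga:
run_saga([
    SagaStep("reserve-stock",   forward=lambda: print("stock reserved"),
                                compensate=lambda: print("stock released")),
    SagaStep("charge-payment",  forward=lambda: print("payment charged"),
                                compensate=lambda: print("payment refunded")),
    SagaStep("create-shipment", forward=lambda: print("shipment created"),
                                compensate=lambda: print("shipment cancelled")),
])
```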
These solutions are not simple. The “one database per service” principle trades local simplicity for system-level complexity — but it remains the most scalable and sustainable approach for microservice architectures.