Hi, I’m Stanislav Yablonskiy, Lead Server Developer at Pixonic (MY.GAMES). And today, let’s discuss microservices.
Microservices are an approach to software development (primarily backend development) in which functionality is broken down into the smallest possible components, each operating independently. Each component exposes its own API, may have its own database, and can even be written in its own programming language; components communicate with one another over the network.
Microservices are very popular nowadays, but their use introduces significant overhead in terms of network, memory, and CPU.
Every call now involves serializing data, sending it over the network, and receiving and deserializing it on the other side. In addition, it's no longer possible to use classic database transactions, which leads to either distributed transactions or eventual consistency.
Distributed transactions are slow and expensive, while eventual consistency means that the results of operations may not appear immediately, and data may temporarily be inconsistent.
Microservices also force developers to write more code in each individual service, because it's harder to reach logic that already exists in other services. Sometimes it's hard to reuse existing code, and sometimes you don't even know it exists, since other people may be working on a different project. Let's talk about these overheads in more detail.
Microservices’ Overheads
Debug Overhead
Debugging becomes much more difficult with microservices. A regular debugger is almost useless here, since you can't debug all the services at once. And without a properly set up system of logging, tracing, and metrics, you can barely even localize a problem, let alone debug it.
This means you need a special environment where not only the service being debugged is running, but also all its dependencies (other services, databases, queues, etc.).
HTTP Overhead
The HTTP protocol has a lot of built-in functionality: routing by path, several ways of passing parameters, response codes, and broad support from off-the-shelf tools (including proxies). But it isn't lightweight: every service ends up running a fair amount of not-so-efficient code to parse and generate paths, headers, and so on.
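As a rough illustration (the service name and URL here are made up), here's what a single cross-service call looks like in plain Java with the standard `java.net.http` client; in a monolith, the equivalent would be an ordinary method call:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HttpCallCost {
    public static void main(String[] args) throws Exception {
        // In a monolith this would be a plain method call: inventory.getStock(42).
        // Over HTTP, the same call means building a request, writing headers,
        // going through the TCP stack, and parsing a textual response.
        HttpClient client = HttpClient.newHttpClient();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://inventory-service/api/v1/items/42/stock"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The body still has to be parsed (JSON, protobuf, ...) before it's usable.
        System.out.println(response.statusCode() + ": " + response.body());
    }
}
```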
Protobuf Overhead
Messages have to be serialized before they're sent over the network and deserialized when they're received.
When using protobuf for message exchange, you need to:
- create objects,
- convert them to byte arrays,
- and immediately discard them after use.
This creates a lot of extra work for the garbage collector or the dynamic memory manager.
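To make that allocation churn concrete, here's a sketch of a typical protobuf-java round trip. `StockRequest` and its fields are hypothetical, generated from a made-up .proto; the `newBuilder`/`build`/`toByteArray`/`parseFrom` pattern is the standard one:

```java
// StockRequest is a hypothetical class generated by protoc from a .proto like:
//   message StockRequest { int64 item_id = 1; int32 quantity = 2; }

class StockCodec {
    static byte[] encode(long itemId, int quantity) {
        StockRequest request = StockRequest.newBuilder() // allocates a builder and a message
                .setItemId(itemId)
                .setQuantity(quantity)
                .build();
        return request.toByteArray();                    // allocates the byte array
    }

    static StockRequest decode(byte[] wire) throws Exception {
        return StockRequest.parseFrom(wire);             // allocates yet another message
    }
}
// The builder, the message, and the byte array all become garbage right after
// the call, so every request/response feeds the garbage collector.
```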
Network Overhead
Transmitting data over the network increases service response time, and it drives up memory and CPU consumption even when the microservices run on the same host.
Memory Overhead
Sending and receiving messages requires maintaining additional data structures, using separate threads, and synchronizing them. Each separate process, especially one running in a container, consumes a significant amount of memory just by existing.
CPU Overhead
Naturally, all this inter-process and inter-container communication requires computing resources.
Database Overhead
Normal transactions are impossible when operations span multiple microservices. Distributed transactions are much slower and require complex — often manual — coordination. This increases the time cost both for development and for executing such operations.
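As a minimal sketch of what that manual coordination looks like (the service clients and method names are invented for illustration), here's a saga-style operation that has to compensate by hand when a later step fails:

```java
// Hypothetical clients for two separate services; in a monolith this would be
// a single database transaction.
interface PaymentsClient  { void charge(long orderId, long amountCents); void refund(long orderId); }
interface InventoryClient { void reserve(long orderId, long itemId);     void release(long orderId); }

class PlaceOrder {
    private final PaymentsClient payments;
    private final InventoryClient inventory;

    PlaceOrder(PaymentsClient payments, InventoryClient inventory) {
        this.payments = payments;
        this.inventory = inventory;
    }

    // Saga-style coordination: each step that succeeds must be manually
    // compensated if a later step fails. There is no atomic commit/rollback.
    void execute(long orderId, long itemId, long amountCents) {
        payments.charge(orderId, amountCents);
        try {
            inventory.reserve(orderId, itemId);
        } catch (RuntimeException e) {
            payments.refund(orderId); // compensating action, which can fail too
            throw e;
        }
    }
}
```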
Network Disk Overhead
Microservice containers are often run on network-mounted disks. This adds latency, reduces performance (IOPS), and makes both unpredictable.
Project Borders Overhead
A microservice architecture makes it harder to evolve and refactor a project.
- It's not easy to change a service's area of responsibility.
- You can't just rename or delete something, and you can't simply move code from one service to another.
This usually requires:
- a lot of time and effort,
- several API versions,
- and complex migrations before functionality can be redistributed between services.
In addition, if you want to update or replace a library, you’ll need to do it across all projects, not just one.
Infrastructure Overhead
You can’t just “do microservices.” You’ll need infrastructure — no, INFRASTRUCTURE:
- containers (each containing copies of shared libraries),
- Kubernetes,
- cloud services,
- queues (RabbitMQ, Kafka),
- configuration sync tools (Zookeeper, Etcd, Consul), and so on.
All this requires massive resources from both machines and people.
Independent Deploy Overhead
Supporting independent deployments means:
- each service must be deployable separately,
- each must have its own CI/CD pipeline,
- and the hardest part — API versioning.
Each service will have to support multiple API versions simultaneously. And the callers will have to track these versions and update their calls in time.
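A small sketch of what that looks like in code, with invented DTO names: once a new version of a request ships, the old one has to stay around until every caller has migrated.

```java
// Hypothetical DTOs for two versions of the same endpoint. Once v2 ships,
// v1 cannot simply be deleted: existing callers still send it.
record CreateUserV1(String name) {}                        // old clients
record CreateUserV2(String firstName, String lastName) {}  // new clients

class UserApi {
    // Both handlers have to live side by side until every caller migrates.
    void createUserV1(CreateUserV1 request) {
        // adapt the old shape onto the new one
        String[] parts = request.name().split(" ", 2);
        createUserV2(new CreateUserV2(parts[0], parts.length > 1 ? parts[1] : ""));
    }

    void createUserV2(CreateUserV2 request) {
        // ... actual logic lives here
    }
}
```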
Distributed Ball of Mud
There is a near-100% chance that you won’t get your service boundaries right from the beginning. Instead of clean microservices, you’ll end up with a distributed ball of mud — where functionality is poorly distributed, external calls trigger entire chains of internal service calls, and the whole thing is terribly slow.
Is the Monolith Really That Scary?
Modular Monoliths
Modular monoliths allow you to avoid most of the microservice overhead — while still providing separation that can be used later if necessary.
This approach involves writing the application (primarily the backend) as a single service split into individual modules with:
- clearly defined boundaries, and
- minimal interdependencies.
This makes it possible to split modules out into separate services later, if scaling really requires it.
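As a minimal sketch (module and type names are invented), a boundary like this can be enforced with nothing more than a package and an interface: the interface is the module's public API, and the implementation stays package-private.

```java
// billing/Billing.java -- the module's public API; this is all other modules see.
package billing;

public interface Billing {
    void charge(long accountId, long amountCents);
}

// billing/BillingImpl.java -- package-private implementation; other modules
// cannot reference it directly, only through the Billing interface.
package billing;

class BillingImpl implements Billing {
    @Override
    public void charge(long accountId, long amountCents) {
        // ... database access, business rules, etc.
    }
}
```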
Wait, You Can Do That?
Many benefits attributed to microservice architecture can be achieved in a monolith:
- Modularity can be implemented with language features: classes, namespaces, projects, and assemblies;
- Multiple databases — possible, if truly needed;
- Multiple languages — also possible, for example, combining C/C++/C#/Java with higher-level or scripting languages like JavaScript, Python, or Erlang;
- Interop — many platforms support calling C/C++ from Java, C#, Python, JavaScript, or Erlang;
- Message queues — just use the appropriate in-process data structure (see the sketch below).
And when you want to debug — one keypress, and the whole application is at your fingertips.
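Here's a minimal sketch of that last point about queues: a `java.util.concurrent.BlockingQueue` standing in for an external broker, with an invented event type.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class InProcessQueue {
    // Hypothetical event type; in a microservice setup this would travel
    // through RabbitMQ/Kafka, here it's just an object reference.
    record OrderPlaced(long orderId) {}

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<OrderPlaced> queue = new ArrayBlockingQueue<>(1024);

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    OrderPlaced event = queue.take(); // blocks until a message arrives
                    System.out.println("Processing order " + event.orderId());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.setDaemon(true);
        consumer.start();

        queue.put(new OrderPlaced(1)); // "publish" without serialization or network
        queue.put(new OrderPlaced(2));
        Thread.sleep(100); // give the consumer a moment before the demo exits
    }
}
```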
Actor Frameworks
Actor frameworks allow you to build microservices — without the microservices.
All logic is split into classes (actors) that communicate only via a message bus (queues).
These actors can:
- exist within a single process, or
- be distributed across multiple processes.
This way, you get the microservice programming model, but most infrastructure is handled by the framework itself.
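As one concrete example (no particular framework is implied here, and the actor and message names are invented), here's roughly what this looks like with Akka Typed in Java:

```java
import akka.actor.typed.ActorSystem;
import akka.actor.typed.Behavior;
import akka.actor.typed.javadsl.Behaviors;

public class ActorSketch {
    // Message type for the hypothetical "inventory" actor.
    record Reserve(long itemId) {}

    // The actor: its state lives inside, and it is reached only via messages.
    static Behavior<Reserve> inventory() {
        return Behaviors.receiveMessage(msg -> {
            System.out.println("Reserving item " + msg.itemId());
            return Behaviors.same();
        });
    }

    public static void main(String[] args) {
        // The same actor code can run in one process or be spread across a
        // cluster; the framework handles message delivery either way.
        ActorSystem<Reserve> system = ActorSystem.create(inventory(), "inventory");
        system.tell(new Reserve(42)); // asynchronous message, not a direct call
        system.terminate();
    }
}
```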
Conclusion
Architecture should be chosen based on:
- project requirements,
- available resources,
- and team expertise.
Microservices are not a silver bullet. They’re useful for huge projects and teams — but the monolith is not obsolete and is not technical debt by default.
What matters most is the balance between flexibility and complexity, scalability and maintainability — so that the system you build is effective and sustainable.