The Ideal Micro-Frontends Platform

News Room | Published 16 February 2026

Transcript

Luca Mezzalira: Probably few of you remember 2005, when the term microservice started to be part of our lives. How many of you remember 2005 and the term microservice? Then it started to become more mainstream. There are still a lot of discussions now about whether microservices are good or bad, whatever. Then, 10 years later, when we started to see the benefit of microservices in terms of what they can give to the sociotechnical system that you work on, there was a movement towards, what can we do in other layers? In 2015, or better, 2016, Thoughtworks coined the word micro-frontends. Because we realized that in the past we had quite a lot of challenges maintaining large code bases on the frontend, just like it was happening on the backend.

More or less at the same time, 2016, 2017, the term data mesh started to become popular. If you think about that, now theoretically you can have a team that is responsible for exposing the API for the data, real-time data for your data team, microservices on the backend, and a micro-frontend on the frontend. A team becomes completely autonomous. That's the goal of what we are trying to do. Although there are some challenges that arise on the frontend side. Why? Because I would say usually on the backend we have a more structured way to do things. On the frontend, the first thing that you think about is, ok, so I have this view. Let me split it into components. Those are micro-frontends because now I can load them at runtime. Unfortunately, not quite. Very often, people think that micro-frontends are basically just components that are loaded at runtime. They are way more complex than that. I will walk you through that.

Who am I? My name is Luca. I’m a Principal Serverless Specialist at AWS. Before joining AWS, I was VP of Architecture for a company called DAZN, where I mastered micro-frontends. I wrote a book about that for O’Reilly. I’m an international speaker. I was lucky enough to talk from Japan to Silicon Valley and everything in between.

Micro-Frontends – Evolution

Let's start with the definition, just to give you a bit of context. In 2016, I was running this massive product called DAZN. It's a live sports streaming platform. Think about Netflix, but for live sports. We grew so fast that we had 550 people spread across Europe, in the tech department only, in just 18 months. It was quite fast. I had to figure out how to express certain architectural characteristics in our frontend. Because the vast majority of our users were consuming the application on living room devices. If you're thinking about Netflix, the majority of the time maybe you are watching on your Samsung TV or your LG TV, Apple TV, whatever. That was the same for us. In order to handle this fragmentation of devices, I had to figure out a way for teams to work in parallel, because we handled more or less five or six different programming languages to cover all the devices that we needed. Because in more or less two years, we became a global platform.

Back in the day, I walked into this room with engineers and said, "Listen, guys, I have this idea of taking the microservices concept that we are applying on the backend into the frontend". Imagine how they reacted when I started to say that. What does it mean? We are independent, we can do whatever we want. Someone can pick React, someone else can pick Angular. I said, "No. Let's create some boundaries here". I coined this definition that basically since then became the definition of micro-frontends. Micro-frontends are a technical representation of a business subdomain. They allow independent implementation with the same or different technology. They minimize the code shared, and are owned by a single team.

During this talk, I'm going to show you all these different aspects in practice, so what they mean. Let's start from the first one, so what micro-frontends really are. Every time I talk about the representation of a business subdomain with backend people, immediately their minds start to go towards domain-driven design. With frontend people, I can guarantee you it doesn't happen. Domain-driven design for me is key for designing every system as an architect, because it helps me to understand the boundaries of my system. There are a lot of mechanisms like event storming, domain story mapping, and many others that help you to understand how your sociotechnical system works.

The other thing that we want to achieve in a distributed system is independence. We want to reduce external dependencies; reduce, not eliminate, external dependencies. That's the key word. Let's try to understand the difference between a component and a micro-frontend. The vast majority of the time, when you think about a component, let's pick a button, the first thing that you do is, ok, we have this nice component, and I enable you to set a label. I expose the property for the label. Then your button becomes very popular in the company, and now you also have the possibility to set an icon. If there is the icon, I will show it. Otherwise, I will hide it. Then you start to add other things, like someone can change the border color. You have different rollover animations. You can have auto-size dimensions because now you are using your button with different languages, so you need to set the right size based on the language. Finally, disabled by default because it's also used in forms.
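To make that accumulation concrete, here is a hypothetical sketch in TypeScript of how such a button's API tends to grow over time; the property names and the consumer are invented for illustration, not code from the talk.

```typescript
// Hypothetical evolution of a "reusable" button: every new consumer pushes
// another option into the contract, and each container has to know which
// combination to use in its own context.
interface ButtonProps {
  label: string;                        // v1: just a label
  icon?: string;                        // v2: optional icon, hidden when absent
  borderColor?: string;                 // v3: someone needs a different border
  rolloverAnimation?: "fade" | "pulse"; // v4: different hover animations
  autoSize?: boolean;                   // v5: size adapts to the current language
  disabled?: boolean;                   // v6: disabled by default inside forms
}

// The container now carries knowledge about how the button behaves in its
// specific context: reuse has quietly become coupling.
const checkoutButton: ButtonProps = {
  label: "Pay now",
  icon: "credit-card",
  autoSize: true,
  disabled: true,
};
```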

What happens when you do a component? All these things, all this knowledge, are basically shared with the container of the component. Now the container of the component knows how to use this component in a specific context. That means that we are optimizing for reusability because I want to have consistency in the UI. Don't take my word for it. This one is from Neal Ford, a director at Thoughtworks: reuse is a form of coupling. If you think about that, when you reuse a component and you want to change it, you now need to coordinate the change across multiple parts, and it might become daunting in a distributed system. Instead, a micro-frontend has to be thought of differently.

As I said, it's a representation of a business subdomain, so we need to think about encapsulating the state. Very often this is not happening, because we think that the state is not related to the UI. There are even frameworks nowadays that suggest you co-locate certain things and not others. The state is part of the domain. It's how, in this case, the micro-frontend works. It has to be co-located inside. It shouldn't be shared. It should be context-aware. We said before that the component was leaking the domain, or the context, to the container. Here the container of the micro-frontend shouldn't know anything about the domain. It should just enable loading a micro-frontend. That's it. It defines clear input and output, because micro-frontends need to communicate together. Imagine that you have in the same view multiple chunks of the system that are owned by different teams. These teams might need to communicate.

Finally, micro-frontends are less extensible, because here what we're trying to achieve is not reusability. We're trying to have independent teams. We want to optimize for fast flow, so having the possibility to deploy multiple times per day, and that's how you want to optimize the system.

In the last few years, I started to create a bunch of heuristics. One of those is testing the micro-frontend's boundaries, because very often I get this question from customers at AWS: how can I know how micro is micro? First of all, you reduce the API exposed to the container. Usually, a micro-frontend needs maybe an ID and an injection of an event emitter, not much more than that. Usually, it's context-aware. The moment that you need to coordinate the deployment of a micro-frontend with multiple other micro-frontends, the domain probably leaked. You need to review the boundaries of your system. You don't optimize for reusability at all costs. There are certain scenarios, like if you think about a SaaS product or multi-tenancy products, where you want to reuse a certain part of the system; yes, micro-frontends are used for that. I have companies in Canada that I helped to design the system in a way that the onboarding part is always the same for every customer, but there are custom modules that are related to specific customers.
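A minimal sketch of what that reduced contract could look like, assuming an injected event emitter; the type names, event name, and mount function are hypothetical, not an API from the talk.

```typescript
// A micro-frontend exposes a deliberately small surface to its container:
// an id to mount against and an event emitter injected for communication.
// Everything else (state, data fetching, UI) stays encapsulated inside.
type Handler = (payload: unknown) => void;

interface EventEmitter {
  on(event: string, handler: Handler): void;
  emit(event: string, payload: unknown): void;
}

interface MicroFrontendProps {
  containerId: string;    // where to mount
  eventBus: EventEmitter; // the only shared communication channel
}

// The container only knows how to load and mount; it knows nothing about
// the checkout domain itself.
export function mountCheckout({ containerId, eventBus }: MicroFrontendProps): void {
  const root = document.getElementById(containerId);
  if (!root) throw new Error(`Container ${containerId} not found`);
  root.textContent = "Checkout micro-frontend";
  eventBus.emit("checkout:ready", { containerId });
}
```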

Micro-frontends are also usually more coarse-grained than components. The vast majority of the time, when you have six or seven micro-frontends in the same view, if you decide to go down that path, it's more than enough. When you start to find 20 micro-frontends in the same view, they are not micro-frontends, they are just components. Then remember that the boundaries, as domain-driven design teaches us, are not forever. Your company is changing. The business is changing. Think about the pandemic. We shifted the way we were operating in a matter of weeks. That is the one constant in software: we need to embrace change, hence these changes have to be reflected in your architecture. They cannot be thought of in a vacuum, and that's it.

Organizational Benefits

There are, obviously, in every distributed system, organizational benefits. Because if you're thinking that microservices are just a technical choice, you're wrong, because they're not. They are mainly an organizational choice. If you go to amazon.com today, there are plenty of microservices. Do you care that there are microservices, or do you care about buying the next vacuum cleaner, or MacBook, whatever it is? The benefits are similar to microservices. We have the possibility to do incremental upgrades. This is mentioned several times when a team is embracing micro-frontends, because the vast majority of the time people are still doing big bang deployments. We can decentralize, and that is not only a technology decentralization, where I just assign certain decisions to the teams so teams can do whatever they want. It's also an organizational one.

Therefore, it means that we need to empower developers to take action. There is a reduction of cognitive load. That is very important. When I was at the previous company, I was onboarding more or less a team of developers every other week. I started to basically track how fast these teams were capable of deploying to production. With micro-frontends, they just need to focus on a few principles that I was able to explain in half a day, then learn the stack that we were using, and then they just need to develop the domain part.

In a matter of 30 days, so a month, they were able to ship code to production. That is extremely good, especially when you think of the scale of the organization. Last but not least, remember that distributed systems are meant for scaling organizations, not only technology. Probably, talking about organization, you have heard of this law, Conway's Law. Have you heard about Conway's Law? The idea is simple. Usually, when you are designing a system, you design it based on the way your organization is structured. When you hear very often that microservices are evil, or monoliths are evil, it's mainly because they probably didn't understand their context. Probably they went to a conference, they heard that microservices are great, and they started to embrace them in a company with two teams that are working together.

The complexity started to rise to the level that they started to struggle and create friction. One of the cool things about architecture is that the friction generated inside the trenches is a gift. It's not negative. It's basically telling you something, and you need to understand where to fix that. The interesting part of this law: I added, under the name of Melvin Conway, the date when this idea was coined, 1967. It's not new. It's something that we didn't embrace at the beginning. It became Conway's Law in the late '70s. If you start to think about that, it took a while, almost 15 years, before it became reality.

When I think about that, you immediately start to see it. Let's say, I was thinking about my past. I was working in a company as a principal engineer or a senior engineer where you have this kind of situation. Tech leadership over there, they were deciding the stack. They were deciding the patterns. They were deciding a bunch of stuff. Then suddenly they say, guys, this is how it works, and you just need to implement it. The teams were growing organically. There wasn't anyone looking into the communication between the teams. They just say, you have to ship this stuff. Ship it. It doesn't matter.

Then you have the external dependencies, friction, and stuff like that. No one was looking into that. The problem was technology. The code is spaghetti or whatever it is. When you think about distributed systems, we need to change our mindset. We said decentralization. We also need to change the way we structure our teams. How many of you have heard about team topologies? It's a study of how you should structure teams and be intentional about how they communicate together. In this example, for instance, we have multiple teams that are working on a distributed system. There is a catalog team, an onboarding team, a personalization team. Those are stream-aligned teams. They are working on why your system exists, the features that are going to be shipped to your users.

Then you have the data science team. You want to understand how to personalize certain things in the catalog, for instance. The data science team exposes a way to onboard data in a simple way that works for all the teams. The teams don't care, once they ship the data into an S3 bucket or whatever it is, how things evolve. They just care about the touching points, because the complicated subsystem, in this case the data science team, is facilitating the retrieval of this information. Then you have platform as a service. In this case, you have a platform team, which exists also on the frontend side, that enables you to create some tools that are used and federated as a service. I use Kubernetes as a service, for instance.

Finally, you have enabling teams. Think about security, developer experience, architecture. The vast majority of the time, a security team is very tiny compared to the number of developers that you have. Therefore, they need to create connections across the organization in order to enforce certain guardrails on the security side. The same for architecture, the same for developer experience. When you start to look into a system in this way, then you start to see that friction might not arise that easily, because you have a way to look at the system. You have a map for how to design a system in a way that enables you to have distributed systems.

Platform Team Responsibilities

As I said before, platform engineering doesn't exist only for the backend. It's also for the frontend. It's different, and that's why we are talking about this layer first. There are a few responsibilities. Some companies call it the platform team. Others call it the core team, or whatever they want to call it. Tiger team, I heard in a few companies. The idea is that at the beginning, when you work on micro-frontends, you have a bunch of people looking into how to build the system, migration, and other stuff, because the platform team defines the foundation of a distributed system, end-to-end, potentially. After five years working with micro-frontends, so from 2015 to 2020, I started to recognize patterns across a lot of companies, some inside my company and some with companies that I helped in my spare time.

Now, basically, in the last five years at Amazon, I have helped roughly 100 teams worldwide, internal and external, migrate to micro-frontends. Those patterns are even more crystal clear to me. I coined this idea of a decisions framework. There are four decisions that you need to make for every micro-frontends implementation at the beginning of the journey. That doesn't mean you cannot change them, but at the same time they are foundational. First of all, you need to identify what a micro-frontend looks like. You need to compose. How you want to compose is critical because it sometimes forces you to choose a specific technology. How you want to route between views. Finally, how to communicate. Those four decisions will be there in every single implementation that you do.

The first thing is that you can have multiple micro-frontends in the same view, or you can have a view that represents one micro-frontend, or multiple views that represent micro-frontends. I usually call the first a horizontal split and the second a vertical split. The vast majority of teams that are starting with micro-frontends start with a vertical split. There is a strong reason why I recommend that as well. Because when you start and say, I have the onboarding experience, that is sign in, sign up, remember email, remember password, payment, whatever, and I want to split it, the code base is unique. Easy. I can split it. I can assign a certain part of this path to another team, easy.

The other way around is not true. Because I was a developer, and probably many of you are, and if you have a code base that you don't understand, or that is using patterns that you are against, you are going to rewrite it. That is a wasted investment. Whereas if you start more coarse-grained and then move to a more granular approach, the horizontal split is easy. Six months further down the line, you understand your system better than you understand it today. Because you started to see things that are working, assumptions that were true, others that weren't, and so on. Therefore, the vast majority of the time I recommend, start more coarse-grained, and then you move forward. On the composition side, there are two ways. You can do client-side rendering, or you can do server-side rendering. I don't mention edge-side rendering, because the vast majority of the time the problem is data, and data has gravity. If you think about that, if I deploy server-side rendering on the edge, but you are operating in a single region, then you are not gaining much. You are just having more friction in the developer experience for deploying on the edge. Therefore, pay attention to that.

The other thing is routing, and routing is very simple. The blue rectangle is called the application shell, and it's basically the container underneath the micro-frontends. It's responsible for composition and routing. With client-side rendering, the routing happens in the application shell; with server-side rendering, everything happens on the server side. There is a caveat, though. Very often, you can pair client-side routing with edge-side routing. That is perfect for migration. If you do a strangler pattern, that's the way to do it. You just put it on the edge. Every request that comes into this bit of compute on the edge will be interpreted to decide where to route: old monolith or new micro-frontends. Iteratively, you can handle it very easily.
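As a sketch of that edge-side routing during a strangler migration, here is a hypothetical request resolver; the path prefixes, origins, and function shape are assumptions for illustration, not a specific edge provider's API.

```typescript
// Hypothetical edge routing for a strangler migration: routes that have
// already been extracted go to the new micro-frontends origin, everything
// else keeps hitting the old monolith.
const MIGRATED_PREFIXES = ["/account", "/onboarding"]; // assumed, for illustration

const LEGACY_ORIGIN = "https://monolith.example.com";
const MICRO_FRONTEND_ORIGIN = "https://mfe.example.com";

export function resolveOrigin(path: string): string {
  const migrated = MIGRATED_PREFIXES.some((prefix) => path.startsWith(prefix));
  return migrated ? MICRO_FRONTEND_ORIGIN : LEGACY_ORIGIN;
}

// Example: resolveOrigin("/onboarding/sign-up") -> "https://mfe.example.com"
//          resolveOrigin("/catalog")            -> "https://monolith.example.com"
```

As more routes are migrated, you only extend the prefix list at the edge; neither the monolith nor the new micro-frontends need to know about each other.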

Finally, communication. There are two types. If you are in the horizontal split, so multiple micro-frontends together, that is how large systems, or large teams, very often operate. You can use different ways: event emitter, custom events, or reactive streams. I'm not talking about a global state here, because we said before that reuse is a form of coupling. If I have multiple teams working on the same state, and I change the way specific things work, everyone has to test it. Coupling, we don't want that. In reality, when you use a pub/sub pattern, like an event emitter, custom events, or reactive streams, you usually end up with a more maintainable code base.

Of these three, because this is usually the question that people ask me, the event emitter is my favorite. Reactive streams, unless you are familiar with reactive programming, and I wrote a book on reactive programming for the frontend, are not a walk in the park for many people; they are not that easy. You get way more than you need for communicating between micro-frontends, because reactive streams are thought for observables, for owned data. Here we are talking about events. There are some teams using them, especially with Angular. Custom events are bound, or tightly coupled, to the DOM structure. You might have some undesired bugs happening while you implement that. The event emitter, instead, is the easiest one. You just subscribe to a specific event, and when someone emits that event, you will be notified.

If you're interested, great. If you're not interested, you discard that event. Simple and easy. Both splits, horizontal and vertical, require storing ephemeral data or more important data, like the session token. For session tokens, cookies are the best option. Sometimes I have seen customers using local or session storage, which is a different security posture, but it happens. Otherwise, even web workers are a good way to store session tokens. For ephemeral data, imagine that you're in a catalog, you select an element, and you need to pass this information, the product ID, to the next team: you can pass it through the query string. You just say, via query string, I pass the ID with this name, and you will take over. You load whatever is needed, and that's it. Quite straightforward there.
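A minimal sketch of that pub/sub communication plus query-string handover; the emitter implementation, event name, and route are hypothetical, not code from the talk.

```typescript
// Tiny event emitter: micro-frontends that care about an event subscribe,
// the ones that don't simply never register a handler.
type Listener<T> = (payload: T) => void;

class Emitter<T = unknown> {
  private listeners = new Map<string, Listener<T>[]>();

  on(event: string, listener: Listener<T>): void {
    const current = this.listeners.get(event) ?? [];
    this.listeners.set(event, [...current, listener]);
  }

  emit(event: string, payload: T): void {
    (this.listeners.get(event) ?? []).forEach((listener) => listener(payload));
  }
}

// The catalog micro-frontend announces a selection...
const bus = new Emitter<{ productId: string }>();

bus.on("product:selected", ({ productId }) => {
  // ...and the shell (or the next micro-frontend) navigates, passing the
  // ephemeral data through the query string for the next view to pick up.
  window.location.assign(`/product?productId=${encodeURIComponent(productId)}`);
});

bus.emit("product:selected", { productId: "abc-123" });
```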

For the application shell, there are a few considerations. It should be as vanilla as possible; if you can avoid frameworks, better. It should be context unaware. If you find yourself needing to deploy a micro-frontend alongside the application shell, you're probably not doing a great job, because that means the domain leaked into the application shell. The application shell should be something whose rate of change is very low once it's stable. Code volatility matters here. It should expose the initial configuration and not much more. It's responsible for composition and routing. Those are the main responsibilities of an application shell. Anything else shouldn't be in the application shell.
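A sketch of how thin that shell can stay, under the assumption of a hypothetical configuration object and a `mount()` contract exposed by each micro-frontend bundle; none of this is a prescribed API, just one way to keep the shell context-unaware.

```typescript
// Hypothetical application shell: it only owns the initial configuration
// (route prefix -> micro-frontend bundle) plus composition and routing.
interface ShellConfig {
  routes: Record<string, string>; // path prefix -> bundle URL
}

const config: ShellConfig = {
  routes: {
    "/onboarding": "https://static.example.com/onboarding/main.js",
    "/catalog": "https://static.example.com/catalog/main.js",
  },
};

async function route(path: string): Promise<void> {
  const entry = Object.entries(config.routes).find(([prefix]) => path.startsWith(prefix));
  if (!entry) return; // unknown route: the shell stays domain-unaware

  const [, bundleUrl] = entry;
  const module = await import(bundleUrl); // native ESM dynamic import of the micro-frontend
  const outlet = document.getElementById("app");
  if (outlet) module.mount(outlet); // assumed mount() contract of each micro-frontend
}

window.addEventListener("popstate", () => route(window.location.pathname));
route(window.location.pathname);
```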

Which Library/Framework Should I Use?

At this stage, if the platform team does its job and selects whatever is needed, then the question is, which library and framework? The cool part. As a developer, I was one of them. Which one should I use? The vast majority of the discussions with the teams are more or less like that: "There is this framework. That framework is super cool. I've tried this one. I love it". This is basically me when I hear about all these frameworks. The answer is, it depends, and not because I'm an architect, mainly because it's reality. There is no right or wrong. I'll give you a couple of examples. This one is, for instance, a Next.js application created by Formula One, where, apart from the different complexity, the interesting part is what's happening in different regions. As you can see, there are different application load balancers that are exposing clusters of containers. There is the article part, the video part, the event tracker, and so on. Formula One migrated iteratively towards the micro-frontends application in just a month. They then started, obviously, to integrate with more stuff. The cool thing is that it shows that it's doable.

If you try right now, the Core Web Vitals metrics, which are usually one of the ways you measure how well your system works, especially for server-side rendered ones, are extremely good compared to before. They even improved certain metrics by 30%, because they started to have different levels of caching possible based on this split. They can say, for instance, the news I need to cache for not much time, because maybe there is breaking news, and otherwise it's a problem. The archive of all the different Grands Prix that happened, I can cache for a month and forget about it. No strain towards the origin, and you start to work pretty well with that. There are other approaches that you can use with Next.js, for instance. It's still the same framework. You can use, for instance, a CDN that has a domain shell. You say just catalog. Then you load the page and you consume an API. Or you can have another micro-frontend that is loaded inside the page.

As you can see, there are different ways that you can do that and different implementations. The first one is, for instance, using module federation. The second one could be as simple as injecting an HTML string inside a Next.js application, which is happening nowadays in the form of integrating with Haskell, I never remember the name. Then you set the innerHTML property in React. You can take an ISG or server-side rendered chunk of HTML and inject it inside the system, done.
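As a sketch of that second option, a React host injecting a server-rendered HTML chunk: `dangerouslySetInnerHTML` is React's actual escape hatch for this, while the component name and fragment URL are assumptions for illustration.

```tsx
// A React host fetching a server-side rendered (or statically generated)
// HTML fragment produced by another team and injecting it as-is.
import { useEffect, useState } from "react";

export function RemoteFragment({ url }: { url: string }) {
  const [html, setHtml] = useState("");

  useEffect(() => {
    // The fragment is already rendered HTML, so we only fetch the string.
    fetch(url)
      .then((response) => response.text())
      .then(setHtml);
  }, [url]);

  // dangerouslySetInnerHTML hands the pre-rendered markup straight to the DOM.
  return <div dangerouslySetInnerHTML={{ __html: html }} />;
}

// Usage inside the host page (URL is an assumption for illustration):
// <RemoteFragment url="https://fragments.example.com/reviews" />
```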

Now, Astro, for instance, which can use React or any other framework, uses a different way. They have the idea of server islands, which, again, is another form of micro-frontends. You can have, for instance, in this hypothetical dashboard, different micro-frontends that are composed in different areas. The API latency is one. Logs is another one. Core Web Vitals is another one. Everything is rendered independently. The idea is very simple in this case. Every time the user's browser asks for the dashboard, the application shell on the server parses the route to understand what it needs, then calls the different server islands. They do the server-side rendering and inject the final HTML inside the application shell. What does it mean? That if, for instance, I want to update the logs every minute, while I want to refresh the Core Web Vitals every week, that could be a process that is completely independent, handled by different teams, designed the way they want, implemented the way they want, with the libraries that they want, as long as they respect certain guardrails.

Only the part that is refreshed will be hydrated in the browser. If you think about that, right now, the vast majority of the time, you hydrate or re-render the entire page every time. Think that you can maybe just update a tiny portion of the system, and it's completely independent, handled by another team, internal or external. There are plenty of companies doing that. The important thing overall, just to show you the importance of this stuff, is that you need to understand what you want to express in your architecture, because there is no right or wrong. All these ways, and those are just a subset of the ways that you can do server-side rendering and micro-frontends, are different approaches to solving the same problem. I want to scale my teams, I want to use a specific technology, I want to have a different level of caching, or caching granularity, as I call it, and so on. There are a few things that micro-frontends enable, but you need to understand what you want to express in your system, how your organization is structured, and how you want the communication between teams to happen, in order to decide which one works better for your context.

Developer Experience

That leads to the last part of the platform side: the developer experience. One of the questions I get the vast majority of the time is, Luca, how can I handle the update of dependencies? I want to update my design system across all the micro-frontends. It's a distributed system now, so every team has its own code base. How can I do that? Or, how can I make sure the performance of my application, which is owned by multiple teams and deployed independently by multiple teams, can keep up? Usually, you use guardrails. You can set up a performance budget for your micro-frontends: you just need to use a Git hook so that every time there is a pull request, you build your micro-frontend, check the size of the final artifact, and compare it against a threshold for how big the artifact is allowed to be. If the micro-frontend exceeds the threshold, you just stop it, and the pull request doesn't even get merged. Because you create a healthy developer experience where you pass this information to the developers early on, and not when it's already deployed in production. If you want to update the shared libraries, you can use Dependabot. Classic system.
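A minimal sketch of such a budget check, runnable as a Git hook or CI step; the artifact path and the 200 KB threshold are assumptions, not numbers from the talk.

```typescript
// check-budget.ts - fail the hook (or pipeline) when the built micro-frontend
// artifact exceeds its performance budget.
import { statSync } from "node:fs";

const ARTIFACT_PATH = "dist/main.js"; // assumed build output
const BUDGET_BYTES = 200 * 1024;      // assumed 200 KB budget

const size = statSync(ARTIFACT_PATH).size;

if (size > BUDGET_BYTES) {
  console.error(
    `Budget exceeded: ${ARTIFACT_PATH} is ${size} bytes, budget is ${BUDGET_BYTES} bytes`
  );
  process.exit(1); // blocks the merge early, not after it reaches production
}

console.log(`Within budget: ${size} / ${BUDGET_BYTES} bytes`);
```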

If you're using GitHub, GitLab, anything, those can be configured very easily with Dependabot. Dependabot basically scans your project every night, looks into the package.json, and opens a PR bumping the libraries that need to be bumped, not only the ones in npm but also in your private npm registry. You can automate that. You can start to look into the security posture. For instance, you can use ts-arch, which is a port of Java's ArchUnit, that enables you to say, ok, this micro-frontend can only retrieve code from the shared library, not from any other micro-frontend, so you avoid unwanted dependencies between micro-frontends or external dependencies that are not wanted, in order to maintain independent artifacts.
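For that dependency guardrail, here is a sketch in the style of ts-arch's fluent API as documented in its README; the folder names are assumptions, and the exact import paths and matcher names should be verified against the library's current documentation.

```typescript
// Architecture fitness function with ts-arch: the catalog micro-frontend may
// only depend on the shared library, never on another micro-frontend.
import "tsarch/dist/jest"; // registers the toPassAsync matcher for Jest (per the ts-arch docs)
import { filesOfProject } from "tsarch";

describe("micro-frontend boundaries", () => {
  it("catalog must not depend on checkout", async () => {
    const rule = filesOfProject()
      .inFolder("src/catalog")
      .shouldNot()
      .dependOnFiles()
      .inFolder("src/checkout");

    await expect(rule).toPassAsync();
  });
});
```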

Finally, you can add any other architecture characteristics that you want. Another thing that I have started to see very often is Backstage. Have you ever heard about Backstage? Backstage is an open-source project created by Spotify that gives you a single pane of glass where you can consume your APIs, find information about the system, and also templates. Many companies are starting to use it also for micro-frontends. You have a place where you don't have to look into a million GitHub repositories. You just go there, download, and start to write code.

Deployment

There is a problem that every mature company starts to find with micro-frontends, and it is this one: deployment. Deployment is very simple. I have a few mantras on deployment: for me, it has to be incremental, frequent, and independent. Those are the three characteristics I'm looking at when I work with micro-frontends. With a bunch of framework maintainers, I worked for 10 months in order to create a discovery service like you have for microservices. We came up with this idea of creating a JSON schema that contains the possibility to express what I need for a micro-frontend. I just have a micro-frontend URL for loading the specific JavaScript file, and a fallback if you use a multi-CDN setup. You have metadata, integrity, and extras, where you can fit whatever information is needed. Some customers are using it for optimizing the performance of their micro-frontends, some others are using it for saying, this micro-frontend is for admins only or for users only, and so on.

The cool thing is that because every micro-frontend is identified as an object, I can also start to have a deployment strategy for my micro-frontends, because we load the micro-frontend at runtime now. It's not like before, where we shipped all the code, like in a single-page application, to the browser, or we server-side rendered everything and then shipped the HTML to the browser. Now we are basically loading an empty shell and dynamically loading the UI into it. This frontend discovery schema is available here and is completely open source, and there are already quite a few customers using it.
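To make the shape concrete, here is an illustrative entry carrying the fields Luca lists (URL, fallback, integrity metadata, extras), written as a TypeScript object; the field names follow the open-source schema only loosely and the values are invented, so check the actual project for the normative schema.

```typescript
// Illustrative consumer view of a frontend discovery entry: each
// micro-frontend is an object the application shell can resolve at runtime.
interface MicroFrontendVersion {
  url: string;          // where to load the JavaScript entry point from
  fallbackUrl?: string; // second CDN in a multi-CDN setup
  metadata: {
    integrity: string;  // subresource integrity hash for the artifact
    version: string;
  };
  extras?: Record<string, unknown>; // free-form, e.g. "adminOnly": true
}

const catalog: MicroFrontendVersion = {
  url: "https://cdn-a.example.com/catalog/1.4.0/main.js",
  fallbackUrl: "https://cdn-b.example.com/catalog/1.4.0/main.js",
  metadata: { integrity: "sha384-placeholder", version: "1.4.0" },
  extras: { adminOnly: false },
};
```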

Then I wasn't too happy and I said, listen, but what does it really mean to load this stuff? This is part of what we call at AWS the undifferentiated heavy lifting. We have a micro-frontend discovery service that is called every time by an application shell, either on the edge, client-side, or server-side, and that consumes data from a database where we know which micro-frontends are available. Then, because I'm on the network side, I can also apply a few constructs like canary releases and blue-green deployments. Because at the end of the day, it doesn't change the performance of my application.

If I have a request that comes to a bit of compute, or a service, that tells you, you are in bucket A, and therefore you see micro-frontend version 1.4, which is the latest one, while all the other users see the previous version, 1.1 for instance, you are done. It's extremely easy, so we can start to de-risk the deployment of micro-frontends. Therefore, with a bunch of colleagues, we worked on this open-source project called Frontend Discovery Service, which is a serverless approach for deploying your micro-frontends. It works with whatever CI/CD you have. The cool thing is that with one click, you can deploy it in your AWS account. If you are on another cloud provider, you can take inspiration from the code. There are quite a lot of customers currently using it.
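A sketch of that bucketing logic, reusing the version numbers from the example; the percentage, the hash, and the function names are hypothetical, not taken from the Frontend Discovery Service code.

```typescript
// Hypothetical canary assignment: a small, stable percentage of users lands
// in bucket A and is served the new version; everyone else stays on the
// previous, known-good release.
const CANARY_PERCENTAGE = 10; // assumed rollout percentage

function hashToPercent(userId: string): number {
  let hash = 0;
  for (const char of userId) {
    hash = (hash * 31 + char.charCodeAt(0)) % 100;
  }
  return hash; // stable value in [0, 99] so a user always sees the same bucket
}

export function resolveVersion(userId: string): string {
  return hashToPercent(userId) < CANARY_PERCENTAGE
    ? "1.4.0"  // bucket A: the latest release
    : "1.1.0"; // everyone else: the previous release
}
```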

Value-Stream Responsibilities

Let's move into the value-stream responsibilities. Let's talk about what the teams that are responsible for building features should do. First of all, the first pain point, or the thing that I have read millions of times about micro-frontends, is that they are used because I can work with Angular, React, and Vue all together in the same application. Let me ask you a question: how many of you have used three different UI frameworks in a single-page application? Is there a specific reason why you did that? Because sometimes it happens that you need to, but you don't optimize for it. Because you are basically bringing in not only different code bases and approaches, but also a different culture and different tools inside your system. It's complicated.

Therefore, Thoughtworks in 2017, one year after micro-frontends were out, coined the antipattern called micro-frontends anarchy: when you have the same system working with multiple frameworks, that is complicated. Sometimes you need to. There are certain situations where you need to. For instance, when you are transitioning from an old framework to a new one, or when you are doing an acquisition. Maybe your company acquires another company and you need to immediately capitalize on the investment. Therefore, you want to have two frameworks while you are working behind the scenes in order to optimize that. You shouldn't optimize for this, because you are creating more complexity than needed.

The other thing is code sharing. I can guarantee you that I was literally crucified when I started to say, you don't have to share code like you were doing in a monolithic system with components. The community was quite against that. Although I found a good ally in Sandi Metz. She was saying that duplication is far cheaper than the wrong abstraction. Because, in reality, what we are aiming for is reducing external dependencies, as we said before. There are certain situations, like the design system, where you want to have consistency, and that makes sense. Not for everything. You want to optimize for fast flow. One suggestion I usually make: when you are on the fence, like, do we abstract our code or not, start with duplication. Wait six months. See what hurts.

Then, at that point, you have far more insight into what to abstract and how. Finally, you need to embrace a lean mindset. Because the reality is that code can be changed very easily at some point. If you structure your code in a modular fashion, despite duplicating it, replacing it with an abstraction will be pretty fast, as long as you are capable of understanding which part of the system needs that. A quick example. Imagine that you have a logging library, an analytics library, a design system, and a data layer. Which one shouldn't you abstract?

Participant: Data.

Luca Mezzalira: Data. Good choice. One heuristic that you need to follow is the rate of change. If you think about the logging library, analytics, and the design system, the rate of change of the design system can be higher than the others, but the data layer is specific to the domain. We said that we are building subdomain systems. We are not talking about abstracting that. I have many companies that, for instance, ended up having a shared data layer. Then suddenly, surprise, some micro-frontends were consuming other APIs not inside their domains, increasing the strain towards the origin and creating a big problem. Because now, for just a single field that they need to display, they are basically consuming way more than is needed, while a rationalization of this stuff could be done at the architecture level. You open up a can of worms that you don't want to look at.

More importantly, the rate of change of the data layer is definitely higher than the others, because when you create the logging library, you are not going to touch it every five minutes. The same for analytics and the design system. The data layer is likely to change, because if you're building a new feature, you probably need a service to consume. If it's inside my area, I don't have to retest all the other micro-frontends. I just work with my peers and I start to deploy that specific thing quickly.

The ownership part: I know it seems not very fancy talking about that in 2025, but the reality is I need to talk about it, because in many companies it's not like I imagined. For me, a micro-frontend should be coded by a single team, tested by a single team, but that team should also know about deployment and monitoring. Sometimes, for certain teams that I've encountered during my journey, this is shocking, because usually they are working like I was working 10 years ago. I write some code, it works, test it, perfect. I throw it over the fence and another team deploys it. When there is a bug in production, I don't have eyes and ears. I just know that when the user clicks that button, it blows up. Then good luck solving that. It's extremely important that you own the observability of your part of the system, otherwise it's not going to work at all. That's one of the key aspects of micro-frontends.

The last one is, adhere to guardrails. I said before, there are plenty of teams inside a distributed system that need to have a say on how to shape a specific system so it works properly: architecture, security, platform, developer experience, and so on. You cannot go rogue and decide every time how to implement one thing. Those teams are there to support you in order to build your system properly. I was talking with one of the speakers about what they were doing at his company. At the beginning, they basically allowed everyone to pick whatever technology they wanted on the frontend.

Then, fast forward a few years, they regretted it and started to create a paved path in order to follow certain rules; the guardrails that I was mentioning. Therefore, it's extremely important when you think about micro-frontends that you look at these things holistically. You, as a team, are going to need not only great skills in coding, and vibe coding nowadays, but also great skills in thinking about how to operationalize your part of the system and making sure that it adheres to the guardrails.

Recap

Obviously, this is not everything that covers how to build micro-frontends, but I hope that by now you have clarity on what it means to build micro-frontends, and that it is definitely not just selecting the latest version of React, or Angular, or whatever framework you like. That is the easy part. There is way more that we need to think about. Personally, I don't underestimate coding challenges, but governance, observability, end-to-end testing, security, events management, and performance are by far the areas where every single company that embraces micro-frontends falls short very quickly.

Therefore, if you don't have those dimensions in your roadmap, then I can guarantee you that you will regret the idea of using micro-frontends, because the complexity is not in writing code anymore. We worked in the frontend space for many years saying, now I'm architecting a design. In reality, we were selecting Angular or React, and Redux or MobX, whatever you choose, where the architectural decisions were made by others for you. We were just implementing certain things, selecting the libraries, and off you go. Here we are going back into architecture, and not the easiest one, to be fair. A distributed system, because this is a sociotechnical system, is one of the most complex architectures that you can design. Therefore, it is extremely important that you take your role seriously and look holistically at the system, and not only at how beautiful your for loop looks.

Summary

In summary, what we have seen today: we understand what micro-frontends are. We understand that team ownership matters, and, therefore, the sociotechnical system should be front of mind for your team, despite it being a technical team. The fact that you are writing code doesn't mean that someone else should do the thinking; it's your ownership, and you should think about how the system works holistically and about your specific part of the system. The importance of fast feedback loops: as I described several times, I emphasized the idea of reducing external dependencies for this reason. Because if you reduce external dependencies, you are more in control and can move fast on your path. With micro-frontends, I had teams that moved from 2 deployments per month to 25 per team per day, because they have all the safety nets that I described in this talk, like canary releases and blue-green deployments. They were capable of moving fast without having external dependencies that were slowing them down.

Finally, the decentralization. That is not just a technical decentralization, but also an organizational one, and a mindset that you need to embrace. Last thing that I want to share with you. This is a topic that I've been working on for 10 years. It's a topic where I have experience with a lot of customers. If you think about the largest brands in the world, I probably speak with them. The beauty of that is that I'm trying to push this stuff even further. I created a newsletter that is completely free. There is no advertising or whatever; it is completely free. I'm trying to give you insights fortnightly about what is happening in the micro-frontends community, because I'm deeply embedded in it. I speak with the framework maintainers regularly. I'm trying to help others understand the case studies that are happening worldwide, because not everyone has this visibility.

Luckily for me, because I'm positioned very well in this community, a lot of companies come to me and say, I read this article. Did you try this? Have you seen this? I want to share this privilege back with the community, because I believe it's important that we democratize micro-frontends.

Application Shell Ownership and Decision-Making

In my experience, who is going to own the application shell and take the hard decisions at the beginning? There are two patterns that I've seen. One is that the staff engineers, principal engineers, or senior engineers inside the organization group together in a virtual team and take care of these initial decisions. My recommendation, though, is to create a tiger team at the beginning that is responsible for grabbing people from different teams in order to have a good representation of what's happening in the trenches, working together for a certain amount of time, and then decentralizing that. In my previous company, I was doing that at the beginning. We had five people inside, with different knowledge and different levels of seniority.

The cool thing is they worked together for eight months. After eight months, everything was stable to a certain degree. Then we split that team into multiple teams and decentralized the ownership across multiple dev centers, because we were spread across multiple places in Europe. Before the pandemic, every single team could go to the two or three people in their dev center and ask them how something works or whether they would like to change it. Then these people were responsible for taking it to the virtual group and working on it together. Because, as I said, the rate of change at some point will drop on that side. Once you have done the first eight months and created the foundational part, then it's just keeping the lights on. You don't have every developer spending eight hours per day doing that.

 
