
Microfrontends Anti-Patterns: Seven Years in the Trenches


Summary

Luca Mezzalira discusses common anti-patterns he has seen over the past seven years of implementing micro-frontends and consulting with multiple companies on their journey into this architecture.

Bio

Luca Mezzalira is a Principal Solutions Architect at AWS, an international speaker and an author. Over the past 18 years, he’s mastered software architectures from frontend to the cloud, providing the right solution for the context.

About the conference

QCon Plus is a virtual conference for senior software engineers and architects that covers the trends, best practices, and solutions leveraged by the world's most innovative software organizations.

Transcript

Mezzalira: I want to share with you my journey with micro-frontends and, more importantly, my learnings from the last seven years building solutions based on this architecture. I built a global solution that was running on multiple devices, including lower-end devices like set-top boxes and consoles. Nowadays, at AWS, I'm helping our customers design their micro-frontend architectures properly. This is a session that I really wanted to do, mainly to provide the kind of experience that is difficult to find in the industry, because this is a relatively new architecture and there aren't many people who have been working with it for several years.

Background

My name is Luca Mezzalira. I'm a Principal Solutions Architect at AWS. I'm an international speaker and an O'Reilly author.

The Journey So Far

Let's start the journey from the first time I saw micro-frontends. Back in 2015, they weren't called micro-frontends, they were called micro-apps. The first implementation I saw was at a conference where a couple of Zalando engineers were sharing their implementation of micro-frontends, and they open sourced Tailor.js. Tailor.js was a composition layer for creating server-side rendered micro-frontends. It was part of a larger system, as you can see in this diagram, that assembled multiple HTML fragments and returned the final view to the user. The great thing about this approach was that it was the first implementation at scale I had seen, in a company like Zalando.

In 2016, things became more official. In fact, we saw the name micro-frontends for the first time in the Technology Radar from ThoughtWorks. The Technology Radar is one of the tools you can read, provided freely by ThoughtWorks, where you can see what they are trying, what they are assessing, and which technologies they see working with their customers. Back in November 2016, micro-frontends were in the Assess stage. In 2019, we started to see adoption of micro-frontends. The early adopters were diverse: companies like Spotify, Starbucks, IKEA, and many others. Obviously, from 2016 to 2019, many companies were trying to create micro-frontend architectures and to understand how to apply this concept at scale. There were many companies and a lot of knowledge around that.

In 2020, we finally started to see micro-frontends as a trend. In fact, InfoQ, in their architecture and design trends report, put micro-frontends in the innovators part of the bell curve. It's very interesting because, as you can see in recent years, micro-frontends are now at the early adopter stage. That's great because it means that more organizations are implementing this architecture pattern, and there is a lot of interest. 2021 I call the discovery year. We started to see strong adoption of micro-frontends; companies like PayPal and American Express are leveraging this architecture pattern, and there are many others. The interesting part is that now that there is strong adoption, we are going to learn a lot. We are going to learn what works and what doesn't work. That's also the purpose of this talk.

Micro-Frontends Benefits

The micro-frontend benefits that we have seen so far: definitely you can have incremental upgrades, so you can develop a portion of your frontend application, deploy it to production, and start to gain value out of it. You can also gather information on how things are working: whether your CI/CD is working properly and is fast enough, how your code is behaving, and whether there are benefits compared to the legacy application. There is a lot of value in that. We understood that micro-frontends leverage the concept of company or organizational decentralization. What this means is that, as a tech leadership, you create some boundaries and let the teams thrive inside them. You may define what the tool for CI is, or which languages the teams can use. In reality, the teams are then in charge of running the show. They need to write the code, build the code, run it in production, and maintain it. That's a great new way for developers to work, because at scale it is something we often haven't seen applied properly. The concepts of DevOps, and what we have experienced so far with microservices, are applicable with micro-frontends in the frontend world too.

The other cool thing about micro-frontends is that there is a reduction of cognitive load. The team doesn't have to know every part of the application inside-out. They just need to know their portion, their domain as it's called, very well. A micro-frontend is assigned to a specific team, and that team should master its code. That is great because it means that in a large organization, or a large or complex application, you don't have to know everything. You just need to know your part very well and be able to support that code base properly. Finally, another benefit we have seen is that the micro-frontend approach is not only a technology solution, it's also an organizational one. The fact that you can break your application apart in a distributed fashion allows you to spin up new teams very quickly, and to create teams that can work independently and faster, because they don't have to coordinate every time with a tech leader who may be based in another office, in another country, or in another time zone. That's great because it means you have autonomy, and people can move faster.

Yin and Yang (Micro-Frontends and Components)

One thing that I have really liked since joining AWS is this sentence from our CEO: there is no compression algorithm for experience. That's so true. What I want to do is take this idea of experience and try to compress my own, so you can understand what you could and couldn't do with micro-frontends, the best practices, basically. Let's start with the first antipattern. I call it the Yin and Yang, or the difference between micro-frontends and components. Those two things are not mutually exclusive, you can have them working together very well, but you need to understand which one is which.

Let's start from the beginning. Imagine that you have something very simple like a button. For this button we create a component that lets you set a label. Then a new requirement comes in: you also want to have an icon, and this icon could be different, or it could be hidden if you don't pass a parameter for it. There are several options there. Then another requirement comes in and you want to change the border color of the button, so a new property has to be exposed on this button. Then you want a different rollover animation, because this button, for a specific portal that you are developing, has to have a different rollover animation. Then you introduce localization, and therefore you want the button to auto-size. Finally, you want a new feature, because after using this button for a while, you discover that it is used in a form but has to be disabled by default, until the user fills the field or fields inside the form.
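As a rough sketch of how that prop list tends to grow (the property names here are illustrative, not from the talk), the button's interface might end up looking like this, with every new requirement adding another knob that the container has to drive:

```typescript
// Hypothetical props for the button described above: each requirement
// adds a new property that the *context* controls, not the component.
interface ButtonProps {
  label: string;                          // original requirement
  icon?: string;                          // optional icon, hidden when not passed
  borderColor?: string;                   // per-portal border color
  rolloverAnimation?: 'fade' | 'slide';   // portal-specific rollover
  autoSize?: boolean;                     // needed once localization arrives
  disabled?: boolean;                     // disabled until the form is filled in
}

// The component itself stays dumb: the container decides every behavior.
function renderButton(props: ButtonProps): string {
  const icon = props.icon ? `<img src="${props.icon}" alt="" />` : '';
  return `<button ${props.disabled ? 'disabled' : ''}>${icon}${props.label}</button>`;
}
```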

There are a few things that are implemented in the button but driven by someone else, the container of the button. The context of the button is driving the component. If we look at micro-frontends instead, we are talking about a technical representation of a business subdomain; a button usually doesn't represent a subdomain, it's more of a technical solution. A micro-frontend allows independent implementation with the same or different technology. Usually, we minimize the code shared across subdomains, and ownership belongs to a single team. If you focus on two key characteristics of micro-frontends, being a technical representation of a business subdomain and allowing independent implementation, we immediately see that a button, this component, doesn't fit the definition. It means we are in front of a component that is extensible, and in this case the container or the context drives how the component behaves. That's the key difference between micro-frontends and components. With a micro-frontend, instead, what we are after is independence. A micro-frontend knows how to behave inside-out. That doesn't mean we never pass information from the context it lives in, but that information is limited, really small. For instance, we could pass an ID saying, this is the ID of the element that you need to render, and the micro-frontend knows which API to call. It knows how to behave, how to render itself.

Another thing is, it is domain aware. It knows exactly what it has to do. It doesn't need someone else, the context, to provide all that information for it to behave in a certain way. It is tightly coupled with the concept of independent deployment, because now we can deploy our micro-frontend independently; we don't have to coordinate a deployment across multiple teams. A micro-frontend defines its input and output. If we have a micro-frontend working alongside other micro-frontends in the same view, we usually know what events it is listening for, and what events it is triggering or emitting. That is great because it means we can independently stitch together multiple micro-frontends without coordinating too much with other teams. When we want to, for instance, redeploy a micro-frontend, we don't have to coordinate with other teams, as long as the input and output events are not changing. If we add a new event, it's not going to change the result for the other micro-frontends living in the same view. That's great, because it builds on the concept of independence. Finally, it is not extensible. We are not driving or controlling the micro-frontend from the context. The micro-frontend is a self-contained solution. That's exactly what we are looking for with micro-frontends.
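A minimal sketch of that contract might look like the following; the function names, URL, and event names are hypothetical, but the idea is that the micro-frontend receives little more than a mount point and an ID, and declares its own input and output events:

```typescript
// Hypothetical contract for a self-contained micro-frontend.
interface MicroFrontend {
  // The context passes only a mount point and a domain ID;
  // the micro-frontend knows which API to call and how to render itself.
  mount(container: HTMLElement, options: { productId: string }): void;
  unmount(): void;
}

const catalogDetail: MicroFrontend = {
  mount(container, { productId }) {
    // Fetch and render are owned entirely by this micro-frontend.
    fetch(`/api/products/${productId}`)
      .then((res) => res.json())
      .then((product) => {
        container.textContent = product.name;
        // Output: an event other micro-frontends may choose to listen to.
        window.dispatchEvent(
          new CustomEvent('catalog:product-loaded', { detail: { productId } })
        );
      });
    // Input: an event this micro-frontend listens for.
    window.addEventListener('cart:item-added', () => {
      /* update its own UI, no coordination with other teams needed */
    });
  },
  unmount() { /* remove listeners and DOM nodes */ },
};
```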

When you see implementations of micro-frontends with a very granular approach, where the header, the footer, and the images are all micro-frontends, they're probably not micro-frontends. You're probably talking about components that you want to lazy load. That is a different solution, so we really need to understand whether we have components or whether we are dealing with micro-frontends. That's the first key thing.

Hydra of Lerna - (Multi-Frameworks Approach)

The second antipattern I have seen so far is the Hydra of Lerna, or the multi-framework approach. That is one thing that I read time and again on social media: we use micro-frontends because we want to use multiple frameworks. My question is, how often have you seen multiple frameworks in a single-page application? Probably the answer is never, or not that often. The reason behind that is performance. We know that using multiple UI frameworks in the same single-page application can cause more than one problem. That is also true with micro-frontends; we don't have to penalize our users because we are using a specific architecture, in this case a distributed architecture. It is true that you can do it, but you don't have to.

The interesting thing is that there are certain situations where a multi-framework approach makes sense. For instance, when you're dealing with legacy systems. We said before that we create independent artifacts with micro-frontends, and therefore you want to iteratively test your assumptions and your code in production as soon as possible. What you could do is slowly but steadily replace a legacy system, instead of going into stealth mode for 12 months and then suddenly doing a big-bang deployment that replaces the old application. You can take a portion, a domain of your application, build your micro-frontend, deploy it to production, and start to learn from that. In that case, you could end up with a legacy system living alongside the micro-frontend system. Therefore, you can have multiple frameworks working together.

Another situation is when you are migrating from one UI framework to another. For instance, imagine you're using an old version of Angular and you want to migrate to the new one. Once again, you can iteratively replace portions of the application and slowly but steadily migrate the entire application to the new version of Angular. Finally, when there are acquisitions of new companies and you want to get value from the acquisition as soon as possible, you can run multiple applications alongside each other under the same umbrella, under the same website.

The Swiss Army Knife - (Write Programs That Do One Thing and Do it Well)

The third antipattern is the Swiss Army knife, versus writing programs that do one thing and do it well. That is the Unix philosophy, and we are very used to that idea. For this antipattern, imagine you have a greenfield project. It's a micro-frontend one, so you spend months analyzing and understanding how things should work. In this case, you have one micro-frontend that is loaded in one go. Then you have another portion of the domain where multiple micro-frontends share a view, so you can mix and match those approaches. Then you decide: if I want to communicate between micro-frontends in the same view, I'm using an event emitter. When I have to communicate between views, I'm using web storage, so local storage or session storage. For loading and composing my micro-frontends, I'm using SystemJS.
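In code, those conventions could look roughly like the sketch below. The event names, storage keys, and remote URL are made up for illustration; `System.import` is the loading API of the SystemJS library mentioned above:

```typescript
// Same view: communicate through events (here, DOM CustomEvents).
window.dispatchEvent(new CustomEvent('checkout:started', { detail: { cartId: '42' } }));
window.addEventListener('checkout:started', (e) => {
  console.log('another micro-frontend reacts to', (e as CustomEvent).detail);
});

// Across views: persist the little state you need in web storage.
sessionStorage.setItem('selectedPlan', 'premium');
const plan = sessionStorage.getItem('selectedPlan');

// Loading and composing micro-frontends with SystemJS.
declare const System: { import(id: string): Promise<unknown> };
System.import('https://cdn.example.com/mfe/checkout.js')
  .then(() => console.log('checkout micro-frontend loaded'));
```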

That's all great, but suddenly a new requirement comes in. This new requirement is a legacy editor. The team is not available anymore in your company; there is just one person left to integrate and maintain this legacy editor. You want to integrate it inside your new application, because rewriting the legacy editor from scratch would take a month, and you can't afford that. Suddenly, you have this legacy editor that is written with a different technology and maybe has a different way to communicate with the external world, one that doesn't fit inside your architecture. You may need to create what is called an anticorruption layer between your micro-frontend application and the legacy editor. The beauty of this pattern is that you are creating a layer between your new architecture and the old world, and the translation happens at the anticorruption layer.

Imagine that, in this case, the legacy editor is wrapped inside an iframe, and for communicating between the iframe and your application you use postMessage. The interesting bit is that if you have an application shell containing your micro-frontends, and you're using an event emitter or other patterns for communicating, then postMessage would be an additional mechanism you need to build into the application shell. Or you can create an anticorruption layer: a micro-frontend that contains the iframe, so the communication is sanitized between your container of the legacy editor and the application shell. The application shell continues to work exactly the same way, as if it were a normal micro-frontend. The translation of the postMessage communication from the iframe to the external world is done by the anticorruption layer, the micro-frontend container. This is great because it means that in the future, when you replace the legacy editor, you don't have to delete code in the other parts. You don't have to change the way other micro-frontends communicate with the old system.
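A sketch of that anticorruption layer follows, assuming the message shapes, event names, and URL are purely illustrative: the wrapper micro-frontend owns the iframe, translates postMessage traffic into the same events the rest of the application already uses, and nothing outside it needs to know the legacy editor exists.

```typescript
// Hypothetical anticorruption layer: a micro-frontend that wraps the
// legacy editor in an iframe and translates its postMessage protocol
// into the CustomEvents the rest of the application already speaks.
export function mountLegacyEditor(container: HTMLElement): void {
  const frame = document.createElement('iframe');
  frame.src = '/legacy/editor.html'; // illustrative URL
  container.appendChild(frame);

  // Legacy -> new world: translate postMessage into application events.
  window.addEventListener('message', (event: MessageEvent) => {
    if (event.origin !== window.location.origin) return; // sanitize the source
    if (event.data?.type === 'EDITOR_SAVED') {
      window.dispatchEvent(
        new CustomEvent('editor:saved', { detail: event.data.payload })
      );
    }
  });

  // New world -> legacy: translate application events into postMessage.
  window.addEventListener('document:selected', (e) => {
    frame.contentWindow?.postMessage(
      { type: 'LOAD_DOCUMENT', payload: (e as CustomEvent).detail },
      window.location.origin
    );
  });
}
```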

That's another way to overcome a possible complication you're going to have in the future. We all know how it starts: yes, it's just this one portion. Later on, further down the line, in six months' time, maybe you have three or four occurrences of this problem. Having an anticorruption layer can help.

The Dependencies Hell - (Do you Really Need that External Dependency?)

Another problem is dependency hell. I have heard several times: micro-frontends are a dependency hell, what can we do? My question is, do we really need that external dependency? That's the reality. Imagine this situation: we start to work with micro-frontends, and we see a lot of parts that could be wrapped in what is called a core library. How many of you work with a core library? Suddenly, you have this core library at version 1.1, the first one that goes to production, and it is implemented nicely by all the micro-frontends. Then you need version 1.2. The team responsible for micro-frontend A can immediately work with it and implement it, but all the others are slower because their backlog is pretty full and they cannot take more work on board. In that case, you need to figure out whether you can live with this situation, and therefore it's not a big deal, or whether you must have all the micro-frontends using the same version. That makes a lot of difference, trust me, because here we are only bumping a minor version, so there are no breaking changes. When there is a breaking change in the library, it can be disruptive, because you need to coordinate multiple teams, maybe across different time zones, maybe across different offices, and it's not going to be an easy task.
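The difference between the two cases comes down to semantic versioning: a minor bump stays within the range the slower teams already depend on, a major bump does not. A quick illustration using the npm `semver` package, assuming each micro-frontend declares a `^1.1.0` range on the core library (the range is an assumption for this example):

```typescript
import * as semver from 'semver';

// Range declared by the micro-frontends that have not upgraded yet.
const declaredRange = '^1.1.0';

// Minor bump: no breaking changes, the slower teams can keep shipping.
console.log(semver.satisfies('1.2.0', declaredRange)); // true

// Major bump: breaking change, every team has to coordinate the upgrade.
console.log(semver.satisfies('2.1.0', declaredRange)); // false
```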

There is another pattern I have seen that I call the diamond antipattern. It starts when you say: the core team responsible for the core library is too slow, they cannot take into account the specific things needed by the different micro-frontends. So the team responsible for micro-frontend B creates a "core library extended", which takes some parts of the core library and rewrites others. Now we have a fork, or an extension, of the core library that breaks some things. You have core library 1.1, which works perfectly with micro-frontends A and C, and also with the core library extended. Then a new version of the library arrives, 2.1, with a breaking change, a major update. What can you do with the core library extended? You start to diverge. Divergence can be a big problem, because in the long run you will see that the core library extended is suddenly a different library. It has nothing to do with the original core library, and you will spend more and more time trying to figure out whether you can squeeze the new version of the core library into your core library extended or not.

The problem scales with the number of shared libraries. If you have just one, maybe you can handle it somehow. If you have tens of them, it's going to be a nightmare. The real question for me is whether you actually need these kinds of core libraries. There are cases, like a design system, where one is definitely needed. Having a library that you share is great, but then try to work with composition more than extension, and try to keep certain libraries well separated rather than having one library for everything. Otherwise you're definitely going to have more problems, and the more teams work on a shared library, the more they need to test that everything is working properly. At some point, you might save some time by writing the code in a shared way, but then you are going to have a lot of testing, manual testing, end-to-end testing, happening everywhere. Bear that in mind when you are sharing things in a distributed system.
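A hedged sketch of what "composition more than extension" can mean for a shared library: instead of forking a "core library extended", a team keeps the shared piece untouched and composes its specific behavior around it. The function names below are invented for illustration:

```typescript
// Stand-in for a function exported by the shared core library
// (in a real project this would be imported from the shared package).
function formatPrice(amount: number, currency: string): string {
  return new Intl.NumberFormat('en-US', { style: 'currency', currency }).format(amount);
}

// Extension (the diamond antipattern): copy the core library and rewrite
// parts of it; over time the copy diverges and major upgrades become painful.

// Composition: keep the shared function as-is and wrap it with the
// team-specific behavior, so bumping the shared library stays low risk.
export function formatMemberPrice(amount: number, currency: string): string {
  return `${formatPrice(amount, currency)} (member price)`;
}
```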

A Return Ticket, Please - (Unidirectional Data Flow at the Rescue)

Another antipattern is "a return ticket, please", or unidirectional data flow to the rescue. In many frameworks you have the possibility of a container or host communicating with a remote, basically embedding the remote inside the container, and then sharing in the other direction as well, so the host shares a portion of itself with the remote. This can get very complicated. If it happens just once, fine, it's not a problem. When it happens across the entire application, remember that you're working with a distributed system, and it can be complicated to maintain, especially in the long run. What we have learned in the frontend community over the past 15 years is the concept of unidirectional data flow. It was introduced by Flux, a state management framework created by Facebook, which completely changed the way we implement the communication between the UI and the data model. Before, it was bidirectional: you could have controllers and other constructs that were bidirectional. With Flux, you have a structure that is unidirectional. The beauty of this is that it's very easy to debug. I know exactly what happens when I dispatch an action: the dispatcher updates the store, the store updates the view, done. You cannot do it any other way. That's the way it works.
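A minimal, framework-agnostic sketch of that unidirectional loop (action, dispatch, store, view), with made-up action and state names:

```typescript
// Minimal unidirectional data flow: the view can only dispatch actions,
// the store is the only thing that updates state, the view only renders it.
type Action = { type: 'ADD_ITEM'; item: string };
type State = { items: string[] };

let state: State = { items: [] };
const listeners: Array<(s: State) => void> = [];

function dispatch(action: Action): void {
  // Action -> store: the single place where state changes.
  if (action.type === 'ADD_ITEM') {
    state = { items: [...state.items, action.item] };
  }
  // Store -> view: notify subscribers so the UI re-renders.
  listeners.forEach((render) => render(state));
}

// The view subscribes and re-renders; it never mutates state directly.
listeners.push((s) => console.log('render', s.items));
dispatch({ type: 'ADD_ITEM', item: 'first' });
```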

The same idea was introduced in the Model View Intent architecture, or MVI, which is available in JavaScript; there is a famous library called Cycle.js. It's now also used extensively in native Android development. You have a user triggering an action, we capture the action in the intent, we change the model or pass along the information needed for the change, then we load the new state and the view updates in the UI. That's great, because once again we can follow the whole flow of data very quickly. Absolutely great and easy to understand.

What we learn from unidirectional data flow is, first of all, that it is easy to debug. If we go back to the first example I showed you, where you have bidirectional sharing or communication, it can get complicated, especially because we are sharing across multiple elements, and everyone can share everything with everyone. It is going to be a nightmare. It is also less prone to errors, because I know exactly how things work. I know the flow is unidirectional, so I can nail down an error way faster, especially in production.

Relax, It's Just Code - (Avoid Organizational Coupling)

The next antipattern is, relax, it's just code. Now we start to talk about architecture and organizational coupling. I have discussed this antipattern several times. Many people say: I have micro-frontends, I create a global state shared by all the micro-frontends in the same view, so everyone can use it. I can share content. I don't have to bubble events across the application. That's true. At the same time, it means that you are coupling, in this case, four different teams. Because it's not just a technology choice, it's also a team choice. The fact that micro-frontend A shares a global state with micro-frontends B, C, and D means that when someone changes the global state, maybe changing the type of object you're storing, the structure, the signature, whatever, everyone has to be aware. You need to coordinate across the teams. Basically, you're losing the benefit of micro-frontends, which is the independence you are looking for in every single micro-frontend.

A way to solve it is using events instead, a publish and subscribe pattern. It could be events, it could be an event emitter, it could be a custom event, it could be a reactive stream. It doesn't matter. Anything that works in a Pub/Sub fashion is definitely your friend in this case. What happens is that you are defining the input and output of the micro-frontends. You are defining that micro-frontend A dispatches an event and whoever is interested will listen. If tomorrow micro-frontend C wants to listen to an event in order to add a new feature, an event that maybe was already there but they didn't care about before, they can do so without coordinating with anyone. They know micro-frontend A is dispatching that event, so they just listen to it and start to implement the code. Similarly, if a new team wants to listen to multiple events, they don't have to coordinate. They read in the wiki, in the documentation, which events are bubbling in that view, and then they can start to work. That's exactly what you want to achieve with micro-frontends.
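A tiny sketch of that publish/subscribe contract using DOM CustomEvents (the event name and payload are illustrative): the emitting team documents the event once, and any other team can start listening without coordination.

```typescript
// Micro-frontend A publishes a documented event; it does not know or
// care who listens to it.
window.dispatchEvent(
  new CustomEvent('basket:item-added', { detail: { sku: 'ABC-123', qty: 1 } })
);

// Micro-frontend C later decides to react to the same event; no change
// is needed in micro-frontend A and no team coordination is required.
window.addEventListener('basket:item-added', (event) => {
  const { sku, qty } = (event as CustomEvent).detail;
  console.log(`update mini-basket badge: ${qty} x ${sku}`);
});
```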

Let's Hammer the APIs - (Multiple MFEs Calling the Same Endpoint)

Then we have, let's hammer the APIs. There are situations, and I have experienced this a couple of times, where you have multiple micro-frontends calling the same endpoint. It might seem ok, but then you need to understand the cascade effect. Imagine that the two micro-frontends at the bottom are both calling API 1. The first question for me is, do we really need to split them into different micro-frontends, given that we now have two round trips of the identical API call going to the server? You're slowing down the rendering a bit, but that's the least of the problems. The reality is, what happens when you have two or more micro-frontends doing this? Imagine that you are using a distributed system in the backend, microservices. You probably have an API gateway exposing your APIs, and the micro-frontends call the API gateway. The API gateway absorbs these requests and authorizes them, making sure the user is entitled to see the content. Obviously, you may or may not have an authorization service, because you can handle that in different ways; in this example, let's assume you do. Now the authorization service has to scale for twice the traffic, because there are two calls every time.

Then you finally call your microservice, API 1, but because it's a distributed system, API 1 relies on other APIs. Suddenly, instead of one request going to API 1 and to all the other APIs that API 1 calls, they all need to scale for double the traffic. If the numbers are small, it might not be a problem. If the numbers are huge, that's where the problems start. It's not only the complexity and the cost of having something like that, it's also the fact that you need to scale the entire flow. Yes, there are ways to mitigate it with different levels of caching, or different ways to structure your code, but it's not going to be easy. Therefore, be careful when you do this kind of integration.

A possible solution is going back to the whiteboard. Do you really need two micro-frontends, or can you have just one micro-frontend, assigned to one team, that makes one request at a time? Yes, that's a bit more scope on the UI, but there is a high chance that that specific call, that specific API, belongs to the same domain. Otherwise, you probably need to go back to the whiteboard and review how your APIs are designed. The other option is creating a container that handles the single request, with two components instead of two micro-frontends. They may still be developed by different teams, but in reality they rely on an external container that handles the request. You make the request just once, and then multiple components compose that specific part of the view. Those are possible solutions for handling this challenge.
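A sketch of that second option, with hypothetical names and endpoint: one container owns the single API call, and the two former micro-frontends become components that just receive the data.

```typescript
// One container owns the single call to API 1 and passes the result
// down, instead of two micro-frontends each hitting the endpoint.
interface Product { name: string; price: number }

function renderSummary(el: HTMLElement, product: Product): void {
  el.textContent = product.name;
}

function renderPriceBox(el: HTMLElement, product: Product): void {
  el.textContent = `$${product.price}`;
}

export async function mountProductContainer(root: HTMLElement): Promise<void> {
  // One round trip for the whole view.
  const product: Product = await fetch('/api/v1/product/42').then((res) => res.json());

  // Two components, possibly owned by different teams, share the result.
  renderSummary(root.querySelector('#summary')!, product);
  renderPriceBox(root.querySelector('#price')!, product);
}
```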

Architecture is Always a Tradeoff

One thing that you need to bear in mind is that we are at the beginning of this journey with micro-frontends, and there is still a lot to learn. These are things that I have seen and suggested to different customers in different organizations, and I have seen them working. Architecture is always a tradeoff. There is no right or wrong, there is only what works for your context. Therefore, it's very important that when you take one direction or another, you evaluate what you're gaining and what you're losing with that specific decision. This matters because sometimes something that works perfectly in another company's context won't work in yours, for several reasons. The organizational structure could be different, the implementation details could be different, or the architectural characteristics could be different. It's very important that you are aware of your context before you make a decision.

Duplicated API Calls

What if we had a global service that calls the APIs and contains the logic to prevent duplicated calls?

Definitely, you could think about implementing that. The tradeoff is that you have to create a service that can handle the throughput of multiple calls, and especially when you get to millions of requests, that is going to be a challenge. Moreover, you also need to handle the cacheability of the requests. If there are two services calling the same thing, you are now creating a new service, paying for its infrastructure, and maintaining its logic in the long run, when instead you could fix the problem at the architecture level. In that case, you don't have to maintain additional code, and we know code is a liability. You don't have to pay additional money on the backend or on whatever cloud service you're using. You don't have to create a caching layer where, if stale data can only live there for a certain period of time, you also need logic for invalidating the cache, and so on. Technically, can you do it? Yes, of course you can. The drawbacks are the ones I listed here.
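For completeness, here is a hedged sketch of what such a deduplication layer could look like on the client side, an in-flight request cache, keeping in mind the tradeoffs just listed (invalidation logic and extra code to own). The function name is hypothetical.

```typescript
// Illustrative request deduplication: identical GETs made while a call
// is in flight share the same promise instead of hitting the API twice.
const inFlight = new Map<string, Promise<unknown>>();

export function dedupedFetch<T>(url: string): Promise<T> {
  const cached = inFlight.get(url);
  if (cached) return cached as Promise<T>;

  const request = fetch(url)
    .then((res) => res.json() as Promise<T>)
    // The entry must be removed, otherwise data goes stale: this is
    // exactly the invalidation logic the talk warns you now have to own.
    .finally(() => inFlight.delete(url));

  inFlight.set(url, request);
  return request;
}
```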

Questions and Answers

Silz: The next one was about micro-frontend tooling recommendations. Is there anything specific there for tooling? It seems hard because you've got whatever mixture of frameworks on your page. How do you get any tooling there?

Mezzalira: In reality, specific tools for micro-frontends? I don't think there are any. I think you can use almost any tool that suits your use case. The majority of the time, I've seen companies embracing this architecture create their own internal tools if needed. I can suggest a couple of things that I have found useful working with micro-frontends. For instance, I know there is a company called Bit that created a tool for visualizing the dependencies across different micro-frontends and different systems in general. It can be used with micro-frontends, it can also be used with single-page applications, that doesn't matter much. That is one of them. For all the rest, it's the usual suspects: for checking performance, Lighthouse to understand how things are going, and Chrome DevTools as usual.

Silz: Do you have any idea of how many companies out there actually use micro-frontends? If I go to amazon.com, do you think that's a micro-frontend? What's the adoption out there in the industry?

Mezzalira: Adoption is increasing compared to a few years ago when it started. There is more awareness, and more companies want to leverage the benefits of building micro-frontends: the independence between teams, the fact that they can de-risk deployments because they deploy just a small chunk of the project iteratively, and the reduced coordination. Usually, my experience is with enterprise organizations, or companies that have SaaS products. I haven't seen many agencies embracing this approach, because obviously, at some point, the agency has to hand the product over to the company, and the way they structure the code, and how that affects the company's internal culture, might be quite disruptive. Instead, if it's a company building an internal product that it knows it needs to maintain for a long time, or an enterprise customer, there are definitely more companies embracing it. An example that I think is very well known is PayPal; PayPal is using micro-frontends. American Express is also using micro-frontends, and they even contributed to the community: they created Holocron, a framework for server-side rendered micro-frontends. PayPal has shared one or two talks on how they're implementing specific parts of their micro-frontends.

Silz: Do you see any advantages or issues between server-side rendering and SPA style pages for micro-frontends?

Mezzalira: I think it's similar to what you would do with a non-distributed architecture: you use server-side rendering when you want great control over time to interactive, when you want better optimization for SEO, and when you want to provide a better experience for low-end devices, especially when you work in countries where the network infrastructure is not as good as it could be elsewhere, like in the U.S., which probably has some of the best infrastructure. Those are the three reasons for server-side rendering. For single-page applications, if you want a cohesive, more immersive experience, you can definitely use them. I haven't seen many differences in the way you tackle a project whether it's a distributed system or not; those considerations remain similar.

Silz: You still put the applications together the same way, whether they contain an SPA or SSR. You don't really notice that from the outside; you're running your little black box, and it's up to you what you put into that box, into your micro-frontend.

Mezzalira: In theory, it should be like that, but practice is another thing.

Silz: When breaking apart a monolith, how do you make sure there is no duplication as you move towards micro-frontends? Starting with the big ball of mud and breaking it apart, what's your experience with that?

Mezzalira: I don't think duplication is a problem in general. It's important to reduce it, but you are optimizing for something different with micro-frontends. You are optimizing for independence and speed of delivery. If that means that in the first iteration you have duplicated code, that's fine. You are going to reassess later on. There are great companies that have made this a mantra for how they develop their projects. A classic example is Amazon; they have discussed the two-pizza team approach several times. There is duplication and they accept it, because they move faster with this approach. Then they reassess whether there is some reusability that could happen across different systems. It doesn't have to be an obsession to go straight to optimizing your code. Because when you optimize code globally and share it widely, you start to have external dependencies, which basically goes against the goal of handling distributed systems independently. Now you have to coordinate multiple teams across a more complex environment, and the more touching points you have, the more effort you will spend coordinating the teams across boundaries.

Silz: It really sounds like, number one, there is no avoiding duplication, but number two, that's not a bad thing, because the tradeoff of duplication is independence for the teams, and that's really what you're shooting for. You want everybody to be on their own schedule, and if that means some people have the same library or the same style sheet, so be it.

When do you think it's actually worth considering switching to micro-frontends? Do you do that from the start, or is it an acquired taste where you have to feel a certain pain before moving towards micro-frontends?

Mezzalira: What you are looking for is modularity. When you embrace distributed systems, you are looking for modularity. You can achieve modularity in different ways. It could be at the code level, it could be at the infrastructure level, or, in the case of micro-frontends, at the architecture level. It's up to you to decide whether that modularity is pertinent from the beginning. Maybe you are a large organization starting a greenfield project, and you want to start that way because you already have multiple teams to coordinate and you want independence from day one. More often, what I've seen are companies that want to validate their business idea. Imagine a startup that wants to understand whether its idea is backed not only by the VCs but also by the users. In that case, you need to move fast, and I think micro-frontends could be something that comes later, at the stage when you want to refactor. Because the problem is not writing the code inside a micro-frontend, it's the whole ecosystem behind it that can be challenging. As with microservices, the CI/CD part is essential, as are a unified logging system and a fast turnaround on understanding where a problem is when it happens in production. Things like that are challenges that are not simple to solve, but at the same time are essential for distributed systems. The observability of a single-page application is usually less essential; it is definitely important in any distributed system, but you are usually not as granular as you would be with micro-frontends. With micro-frontends, because you have multiple teams contributing to the same system, it can be a problem if you don't have visibility into the system and into what is broken in production.

Silz: If the micro-frontends are on different pages, are they still considered micro-frontends, or do they by definition have to co-exist on the same page, the same browser page that the user sees?

Mezzalira: The granularity of a micro-frontend is deliberately not defined. If you think about microservices, it's the same thing: how micro is a microservice? Likewise, how micro is a micro-frontend? In reality, what you are looking for is something different. You are looking at it from the business perspective: what is your business entity? Don't be afraid to go quite coarse-grained. It can happen at the beginning that you decide to take a larger portion as a micro-frontend, and that's fine. You are always in time to move from a coarse-grained to a fine-grained implementation. That's not the problem. The other way around, though, is not true: if you go too fine-grained, it's much more difficult to sanitize that and rebuild something larger. My suggestion, as I describe in my book, is to always start from domain-driven design, which is how you usually approach a problem. You start by defining what the subdomains are. Then, inside those subdomains, you work out how many teams are working there and how complex each subdomain is. Then you start to assign those subdomains to the teams. It's very important that you don't start from a technical perspective, like you would with components, but from a business point of view, where you start to understand how the context works, what the different parts are, and how they communicate together. The size of the micro-frontends will surface to you automagically, if you like, because you will start to see them from a different perspective, a different angle, more than a technical one.

 


 

Recorded at: Jan 06, 2023
