
Beyond Micro Frontends: Effective Composable Decoupled Applications on Cloud Native Infrastructure



Natalia Venditto discusses supporting infrastructure and how cloud-native and the Web Platform APIs are paving the way to push the boundaries of what was once known as the Jamstack and micro-frontends.


Natalia Venditto has worked in the roles of frontend developer, full-stack developer, technical lead, and software and solutions architect. Now she leads the end-to-end developer experience for JavaScript and Node.js on Azure. Natalia is also part of the Google Developer Experts for Angular and Web Technologies and Google Mentors programs, and holds an MVP award for Developer Technologies.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.


Venditto: I want to tell you a story that is all about decoupling a frontend application. I hope that when I share my story and my learnings, and by the end of this talk, you'll have a new perspective on building robust and scalable applications that can meet the demands of modern cloud environments, but also of the modern frontend ecosystem, and that you can do it from a frontend perspective. You'll see soon what I mean. In 2018, I was working as a frontend technical lead for a very large platform that had undergone many transformations over time, and that had been maintained and developed by multiple vendors. As you probably know, when a large development project is managed by different vendors over time, it's not uncommon for technical debt to pile up, usually because the original ideas are abandoned, or they're misunderstood. That may end up transforming a project that was once a well-architected monolith into a bloated and inefficient mess.

I don't know if you've been there too, and you know what I'm talking about. Certainly, most projects degrade over time. When they're handled by different teams, they do even more so. In this particular case, at some point, the effort required to analyze and fix bugs became increasingly unfeasible, and the cost of developing new features exponentially outweighed any potential benefits. Our team concluded that it was time to migrate. We had, at that time, two potential directions to explore, starting from scratch, or decoupling and modernizing the existing platform we were dealing with. If you look at the stats on screen of the platform I was working on, you probably already realized that the only feasible approach was to start splitting things apart, rebuilding them little by little, and recomposing. This story may resonate with you and it's not unique to frontend development at all. Teams worldwide are constantly lifting and shifting, or adopting microarchitectures to decouple services gradually. What may be novel about this talk is that I'll propose a different approach than the one you're used to.

Prior to the time of the event I'm telling you about, as a frontend tech lead transitioning to software architect myself, I had very limited knowledge of monolith decoupling. My understanding was that the only approach was to begin with the backend or services first, with frontend experts often brought in later to review specifications, to take a look at the designs, and align them with business requirements, but never early enough. The frontend was treated as a second-class citizen. Even when we succeeded in splitting monoliths in the backend and adopting a microservices approach, the frontend always remained a significant monolith. It's important to clarify that if a project has a strong modular strategy, and it also has a solid component library or design system that is served as a frontend monolith, or in a monolithic fashion, that isn't necessarily a bad thing. In many cases, this approach is as effective as, or even more effective than, a micro-frontend strategy. However, in that particular case I'm referring to back in 2018, when we were decoupling this very tightly coupled content management system that was a very large deployment as well, it made more sense to split the new requirements into a decoupled frontend application. The requirement I'm talking about was a new blog capability. As we analyzed our platform to find the best way to add new features, it became increasingly clear that we had some user experiences that were relatively static in the frontend, while others were much more dynamic. You see, we had a homepage, we had the e-commerce, we had the user page, the landing page, and all of those had different characteristics. Additionally, some of those experiences were updated only occasionally from an authoring perspective, while others were being constantly authored. This realization was a critical turning point in our approach to monolith decoupling.

We started thinking about how users would interact with the page of a blog, and what their needs were, rather than just how the backend and the frontend were connected. By doing this, we were able to identify the specific functionality and data that the blog page required. We could build it using the appropriate tech stack and architecture, in a decoupled way. We didn't need to be constrained by the existing architecture or technology choices made for the rest of the platform. This user-centric approach allowed us to create a tailored solution, which ultimately led to a better user experience, not only for the blog, but for every other capability as we were decoupling them from the monolith. In the end, we decided that the architecture would look like this, like what you see on screen. Of course, this is a very simplified, high-level overview. In general terms, it was our end goal. On closer inspection, what you see here is that we have a large container, where we deploy our main CMS application, and additional containers where we're going to be deploying microservices as we split them apart. Of course, the orchestration tool here is not represented, for the sake of simplicity. We also see that we have another service there to the right, in blue, where we are deploying the blog capability, which hits a function that is then routed to the data storage through an API gateway.

Now you're wondering, how do we accomplish this for real? This is just a high-level overview. How do we get this done and materialized? How do we think about it as frontend developers? Is it possible to participate in the architecture design and push for a user-centric approach to decoupling? It is, if the end goal for everyone in the team and for every architect in the team is to build user experiences, and not just applications. By keeping a user-centric focus and considering cloud native opportunities, we can design fully distributed user experiences that solve business problems individually and incrementally, in a way that allows an independent team to explore their options, make their own decisions, and map them to a particular use case. That allows us to go from architectural models that are built to support the backend logic, or that are application centric, in essence, to using an API-first approach that makes the API surface the center and the heart of the system, in a way that pumps life and data from end to end, and makes every component on each end of the stack pluggable and replaceable. By taking this API-first approach back in the day, each frontend for us became just another client of our API that you see there in red. It didn't matter what technology we were building each frontend with; in the end, as long as the API contract was well-defined, any type of client could consume the data and the functionality provided by the API.

In the scenario I'm describing to you, we started with the addition of this new blog capability, and ended up with a new definition that allowed us to build and evolve each frontend independently, on its own, without having to worry about the impact on other components in the system. By using cloud native technologies, additionally, we were able to scale that API horizontally and make it highly available, ensuring that it could handle any amount of traffic and load. You see, with this approach, the API surface becomes a central connection point between the frontend and the backend. It allows for flexibility, because you can continue to plug in more frontends as you go, and for agility in development in the end. The development process becomes a lot more agile. Overall, adopting this user-centric approach to frontend development and to API-first architecture helped us build more effective and scalable solutions, all in composition. You can see that here we are taking a different architecture approach for the homepage that was part of that main CMS than we do for the blog page, which in the end became a statically generated site that was hitting, again, that serverless function in origin, requesting data at build time to be rendered. Then we decided later on that we may use a hybrid approach for e-commerce, and a server-side rendered approach for the user page. We could integrate more innovations, like edge computing and requesting data on the fly.


My name is Natalia Venditto. I'm a Principal Program Manager at Microsoft. I'm leading the end-to-end experience for developer tools and services in Azure, for JavaScript and Node.js developers.

Why We Decouple Composable Systems

There's also one more thing I want to delve into, and it's why we decouple composable systems. Let's pause for a moment and ponder why we do it. We always decouple to satisfy some organizational performance strategy, or to better organize the capacity of our teams with respect to the business units and their needs. In the end, we may be satisfying non-functional requirements as well, and end up with a better technology stack. That's never the main reason why we decouple. Typically, one of the most important requirements, one that is about organization and that ends up having a positive impact on each decoupled part, is the ability to release and deploy application components, services, or modules independently of one another. That obviously has many advantages, like faster time to market and reduced coordination overhead. With independent deployability, teams can release new features and functionality more quickly, and also scale individually. Services don't have to wait for another service to be integrated into a main branch, for example. The result is more agile teams, faster response times to customer needs, and high-quality applications that are developed and deployed more efficiently. Let's now ask and answer some essential questions so we can proceed with design and execution. We need to ask ourselves, of course, what are micro-frontends? Secondly, we need to identify our user base. Thirdly, we must understand API-first from a user point of view, or user experience point of view. Last but not least, we need to become acquainted with the cloud services, mechanisms, and infrastructure that are relevant to our work as frontend engineers.

How Composable Frontends are Built

Throughout my talk, I will avoid as much as possible the term micro-frontend, because we're speaking about rich decoupled applications in the cloud. Instead, I'm going to refer to them as composable decoupled frontends. What are those? They are pluggable and exchangeable frontend applications that have connectors and hatches to share state and integrate via a dedicated vertical integration surface, as part of a much larger deployment system, typically in the cloud. How are they built? Micro-frontends or composable frontends can be integrated into the system using two types of splits: horizontal, which is multiple applications loading and being bootstrapped in one page load, or vertical, which is one application that usually maps to a URL or a page load. Let's think about the horizontal split, orchestrated with a tool or a technology like single-spa, or any other technique, like, for example, Module Federation, which works at runtime, integrating multiple frameworks in one page or view, like we mentioned. Or we can be using a single framework and leveraging island architecture, a technique we'll discuss, or we can be using web components and mixing them with other frameworks using iframes, for example. They can be as tiny as a button connected to a serverless function, triggering an HTTP request to fetch and display data, or they can be a whole catalog, or a shopping cart. The layout of a page can be managed as a micro-frontend or decoupled frontend. It can be a fully-fledged application capability that is integrated in a view together with other applications. As you have noticed, they can be micro, or not micro at all. This is why the term micro-frontend doesn't resonate with me too much.
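The two split types can be sketched with a tiny registry in the style of orchestrators like single-spa. This is a conceptual simplification, not the library's actual API; the frontend names and routes are made up for illustration:

```javascript
// Conceptual sketch of a single-spa-style registry (simplified, hypothetical API):
// each composable frontend declares when it is active, so several of them can be
// mounted in one page load (horizontal split), or one can own a whole route
// (vertical split).
const registry = [];

function registerFrontend(name, activeWhen) {
  registry.push({ name, activeWhen });
}

// Given the current path, decide which frontends should be bootstrapped.
function activeFrontends(path) {
  return registry.filter((app) => app.activeWhen(path)).map((app) => app.name);
}

// A vertical split: the blog owns its whole route.
registerFrontend("blog", (path) => path.startsWith("/blog"));
// A horizontal split: header and cart load on a page alongside other frontends.
registerFrontend("header", () => true);
registerFrontend("cart", (path) => path.startsWith("/shop") || path.startsWith("/blog"));
```

Asking `activeFrontends("/blog/post-1")` would then bootstrap the blog, the header, and the cart together in one view, while `activeFrontends("/about")` would bootstrap only the header.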

On the other hand, a vertical split can also have combined elements, and they can radically differ in size and implementation. What we can demonstrate here is that the vertical pattern, where the micro-frontend loads entirely in one URL, or page load, or view, and the horizontal split, where we have multiple micro-applications orchestrated in one view, or route, or page, are not mutually exclusive, and they can be combined. Also, a micro-frontend does not necessarily map to a microservice one-to-one, or have a single concern. For example, a vertical split that represents, let's imagine, a search capability could have querying, filtering, and data representation features, and that same capability could integrate multiple services, like a catalog and an authentication service, or aggregate data sources. This is why the term micro-frontend feels less correct than composable frontends, because if you think about it, we are composing multiple applications that may vary in size.

I know that many frontend developers are not familiar with techniques or methodologies for making architecture or technical decisions or definitions. We have data that says that frontend engineers come from a variety of backgrounds that includes computer science, as with other engineers, but frontend developers are the most likely to come from a different background. They may not have the baseline to be able to produce a decision matrix for composability. I will propose a methodology that we can use to influence our teams and the architects in our teams, to architect for the user or for the user experience, and to build frontend applications in composition. I will offer you this framework, based on three focuses, design, development, and delivery, to ask and answer a series of high-level questions that, when answered, will guide you through making decisions to build frontend applications as cloud native components. When you have a decision matrix, you will be ready to participate in architectural discussions with solid argumentation in your favor.

Architecture - In Which Way?

For that, we will establish three categories, as we mentioned before, that will be revealed when we color all slots. We will start with the purple slots, and for that we will take the question, in which way, as our starting point. We already answered this question before: we want to take an API-first approach. What does an API-first approach mean when we are architecting for the user? There are some proponents who prioritize designing the API specification before the user experience and frontend designs are completed. However, that reasoning may conflict with a user-experience-focused type of implementation, which prioritizes starting with an aspirational visualization of data and the desired state in the frontend, and then later specifying and executing APIs with the user in mind. In other words, when we design for the user, we always begin with a clear understanding of how we want the frontend to look and function. Then we walk backwards to create the API to support it. If we've always been focusing on frontend development, writing an API specification and implementation may be challenging, and one tool that can help with designing APIs in this way is the OpenAPI spec: a specification for building APIs that is machine readable and human readable, and can be used to generate documentation, code, and other artifacts that are going to be useful later in the cloud environment. By using OpenAPI, you can more easily map the frontend functionality to specifications, as well as validate and test them to ensure they are working correctly. There are other important aspects of designing a good API, for sure. As a frontend developer, you may be familiar with querying different types of APIs: RESTful, GraphQL. The API is the heart of this type of system, and we need to really focus on choosing our pattern wisely.
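As a sketch of walking backwards from the frontend, here is what a minimal OpenAPI 3.0 contract for a blog capability might look like, expressed as a plain JavaScript object. The `/posts` path and `Post` schema are hypothetical, invented for illustration, not taken from the talk:

```javascript
// A minimal OpenAPI 3.0 document sketched as a plain object, derived from a
// frontend need ("the blog page lists posts"). Paths and schemas are illustrative.
const openApiDoc = {
  openapi: "3.0.3",
  info: { title: "Blog API", version: "1.0.0" },
  paths: {
    "/posts": {
      get: {
        summary: "List blog posts for the blog page",
        responses: {
          200: {
            description: "A list of posts",
            content: {
              "application/json": {
                schema: {
                  type: "array",
                  items: { $ref: "#/components/schemas/Post" },
                },
              },
            },
          },
        },
      },
    },
  },
  components: {
    schemas: {
      Post: {
        type: "object",
        required: ["id", "title"],
        properties: {
          id: { type: "string" },
          title: { type: "string" },
          body: { type: "string" },
        },
      },
    },
  },
};

// Because the contract is machine readable, every client (web, mobile, edge) can
// check it; here we simply verify an operation exists before wiring a fetch to it.
function hasOperation(doc, path, method) {
  return Boolean(doc.paths?.[path]?.[method]);
}
```

In practice you would keep the document in a YAML or JSON file and feed it to OpenAPI tooling to generate client code, documentation, and validation artifacts.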

Architecture - For Whom?

Building great APIs that allow the system to connect the user interface with data storage also requires some knowledge of who we are building the user interface for. This is absolutely not a trivial thing. For whom may be the most important question to answer. It's not actually only for whom; it's rather for whom, who is where, browsing how, and for what reason. When we design and make decisions for user interfaces, we deal with the unknown. We are not only designing for people with different abilities and needs, we are designing for people who use a diversity of devices, by preference or because that's what they can afford, with very different capacities, and even screen sizes and resolutions, or perhaps no screen at all, where it's assistive technology that is talking to the user. Not only that, when we design for enterprise, platforms tend to be really large and solve a variety of problems in a centralized way, like we saw before. The user that comes to our platform, let's imagine it's a bank, to read the blog that we were describing earlier, this blog capability, to, for example, learn about stock prices, or find a contact number, does not have the same needs, expectations, and sometimes urgency as the one who visits to do online banking. There are unknown variables, like I mentioned, but we also have a lot of data. It's no longer a surprise that most internet users browse from their phones and not from their desktops, and that global sales for smartphones, although in decline for the last couple of years, have surprisingly grown by a flipping 1000x from 2007 to 2021.

What may come as a surprise to many frontend developers is that only 28% of the global population has an iPhone, and that doesn't even mean the latest iPhone. Most developers I know, and even myself, working from EMEA and the UK or the U.S., develop and test on a brand-new iPhone. The computers we use for development in these regions tend to be high-end, and the connection speed, the best one can have. It's important to remember that the synthetic testing results we attain when testing our development on these super-fast machines, connected to 5G or fiber, are not what the average end users globally will get. The average low-end phone that most users will be browsing on at a given time is very likely to be a $200 Android device. Most of those are not even 5G compatible. Since we are speaking about 5G and internet speed, we have to remember also that this user base is probably scattered around the globe, with different internet access rates and constraints. Sometimes we ourselves, and even our customers, are convinced that their customers or their user base is only in Europe or the States, or in places where 5G and fast connection speeds are available. Cloud providers are moving workloads and execution to globally distributed content networks with points of presence around the globe. That makes any application accessible everywhere. We need to be prepared to consider that there are remote users, and those remote users can be an opportunity for expansion and new market possibilities for our customers and ourselves.

Development - With What?

Now that we have the UX and UI settled, that we have discussed the specification, and that we know the for whom, we can better design user experiences. If, on top of that user experience and user interface design, we want to deliver a great application experience, we can already start making technical stack decisions. Those technical stack decisions help us decide with what we're going to be building each decoupled application for our users. What are the most effective patterns and implementations that will help us deliver with performance in mind? Remember that everything goes back to delivering a good, or a great, user experience. That's the domain of the frameworks. It's also the domain of the web platform. This is where we need to know what latest advancements our ecosystem is working on to help us build better applications, while we respect the constraints our users may have at a device or connectivity level. Why? Because, again, performance matters, and those numbers matter. They matter because a lot of the most reactive frontend experiences of today are probably experiencing a bounce rate increase of between 32% and 90%. That's a lot of money lost for our enterprise customers. Interestingly enough, Google tells us that the industry slowest to load pages is the financial industry. If you're working in the financial sector, you know that to roll out worldwide you need to know your user base, and all their constraints, as we proposed before.

A lot of the time, the largest negative impact on runtime performance is the amount and size of assets we request during page load. When we have a user-centric approach in mind, and we use it to provide a solution per use case, we can avoid shipping code and assets over the wire that are not meaningful, or not relevant, to that specific use case. Like we said before, maybe the banking experience needs a lot fewer static resources or assets than we ship with a blog or landing page. Knowing our user and industry performance benchmarks will also help us make the best technical stack decisions. For that we need to know what's in store. We need to know what the modern frameworks are, because those decisions need to be linked to a strategy that satisfies a performance budget, like we mentioned before, and caters to all users. The frontend ecosystem is very dynamic in nature and is constantly working to improve. Sometimes it's working to solve problems we have introduced ourselves. That's another reality. Do you remember this slide where we explained the horizontal split and how it could be multi or single framework? At the same time as we decide how we're going to design a particular frontend solution for a specific use case with a user-centric composable approach, we will have to decide on a render strategy, and potentially a reactivity pattern. We will also have to deal with making decisions that are not very easy to make. I think that the most challenging aspect of composing decoupled applications is dealing with state management and routing, although, obviously, data fetching mechanisms are very close behind.

What we want to do is use HTML-first and zero-JavaScript frameworks, when possible. We want to leverage the platform APIs and reduce third-party code and dependencies. We want to defer or async load all render-blocking JavaScript, and particularly keep the critical rendering path lean. We want to define and respect performance budgets, and obviously follow JavaScript best practices, like named imports, so we can optimize code at build time, or compilation time. What are those new generations of frameworks that I was referring to earlier? These frameworks come equipped with mechanisms to leverage modern rendering patterns and architectures like the island architecture, promoted by many of them, based on the concept of partial hydration. Hydrating, so we have an overview, is a mechanism to bootstrap JavaScript into a completely static HTML render. What we do is render the HTML, load it, and then bootstrap the JavaScript. With that, we also inject the state and the dynamic functionality. With partial or progressive hydration, we only bootstrap JavaScript to some areas that become highly dynamic and are hydrated on the client side at runtime. Frameworks like 11ty, Astro, or Fresh, working on top of island architectures, propose for every one of those islands, every one of those tiny regions, to be hydrated independently, instead of depending on a shell that controls that mechanism. Additionally, most of them serialize state before sending it to the browser, so everything becomes leaner, and there is a lot less execution on the client side.
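The island idea can be sketched as a selection problem: everything ships as static HTML, and only regions explicitly marked as interactive get JavaScript hydrated into them. The region list below is invented for illustration; real frameworks drive this from markers in the rendered output rather than a hand-written array:

```javascript
// Conceptual sketch of island architecture: the server ships static HTML for
// every region, and only the regions marked as interactive "islands" get
// JavaScript bootstrapped into them on the client.
const renderedRegions = [
  { id: "header", island: false },  // pure static HTML, zero JS shipped
  { id: "article", island: false },
  { id: "comments", island: true }, // dynamic: needs client-side state
  { id: "search-box", island: true },
];

// Decide which regions to hydrate. In frameworks like Astro or Fresh each of
// these islands hydrates independently, with no app shell controlling them.
function regionsToHydrate(regions) {
  return regions.filter((r) => r.island).map((r) => r.id);
}
```

Here only the comments and search box would ever execute JavaScript in the browser; the header and article stay inert HTML, which is exactly where the payload savings come from.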

Another very interesting framework that proposes an even more advanced concept is Qwik. Just like the other options discussed, this framework tries to remove as much JavaScript execution from the client side as possible. While a hydration pattern renders everything server side and then bootstraps JavaScript to the dynamic regions to inject the state, which may introduce a visual glitch, resumability picks up where the server left off. Meaning, you execute as much JavaScript as possible on the server side, then serialize everything and ship it to the frontend, and resume the execution where it was dropped by the server because it needed information that is only available on the client side. The absolute next stage is pure HTML-first frameworks that need no compilation step. Developers, in this case, when using an HTML-first framework approach, will be writing and shipping the same HTML to the user. One such framework, for example, uses a functional web approach, where most JavaScript computing and execution happens in a cloud function and not in the browser. You have probably noticed here how the connection between the modern frameworks and the cloud is established. Server-side rendering, so things happening on the server. Execution of JavaScript happening in a function. We are moving our intensive operations away from the client and to the backend, in most cases, with the cloud in mind.
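The state-serialization idea behind resumability can be sketched as a round trip: the server runs the work, embeds the resulting state in the HTML it ships, and the client resumes from that embedded state instead of re-executing everything. The markup and state shape here are illustrative, not Qwik's actual serialization format:

```javascript
// Server side: render HTML and embed the computed state alongside it, so the
// client never has to recompute or re-fetch it.
function renderWithState(html, state) {
  const serialized = JSON.stringify(state);
  return `${html}<script type="application/json" id="app-state">${serialized}</script>`;
}

// Client side: "resume" by reading the embedded state out of the shipped
// document instead of bootstrapping and re-running the application logic.
function resumeState(document) {
  const match = document.match(
    /<script type="application\/json" id="app-state">(.*?)<\/script>/
  );
  return match ? JSON.parse(match[1]) : null;
}
```

The contrast with hydration is that nothing is re-executed on arrival; the client only continues from the serialized snapshot when a user interaction actually demands it.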

Development - Where In, and to What?

Before we also move to the cloud, let's see what other questions we can immediately answer. The final questions in the pink slot are, where in, and to what? If we have already made a technical stack choice, we can probably confidently define the required setup, our code structure, and dependency management, since they will be strictly linked to our tech stack decision. When we decide on our framework, we can also decide how we're going to lay out our code, how we'll be working with it, what our IDE will be, and what our developer toolset will be. We can also start responding to fundamental questions about our integrations, answering the to what. To what are we connecting? What services do we need to talk to? Where are we fetching our data from, with those amazing APIs we designed? Probably, how will authors create the content? How will we analyze and observe our system to guarantee it is always healthy and performing according to all those benchmarks we established as best practices? This may also be a good time to expand our testing strategy from unit testing to integration and end-to-end testing.

This may also be a great time to think about orchestration and optimization, with dependency management, tree-shaking, dead code elimination, bundling, compressing, everything to ship better and faster code, and orchestrate better. When it comes to code optimization for bundles that we are going to be loading in composition, most bundlers can only perform static analysis at build time. That makes it impossible to optimize the code of bundles that are independently and remotely deployed at runtime. There is a Webpack plugin called Module Federation, which proposes a mechanism based on the concept of having a host runtime and a remote container, roles that can be interchangeable depending on which runtime you load first, and a shareScope that will allow those different runtimes to share dependencies and perform that static analysis basically at runtime. Aspirationally, this mechanism is very interesting. It may not be possible to fully leverage it if we don't have very strong governance in the end, because if we cannot align on what dependencies we're going to be using, for example, our framework question, we may end up with version skew. Failing in isolation may also not be possible in the end if we are composing horizontally, in the form of a horizontal split, because when we have multiple applications being loaded in the same view, if one of those fails, it may completely impact the whole experience. If we are architecting for the user, these types of mechanisms need very strong governance and some definitions to be successful.
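Module Federation wiring lives in the webpack configuration. The fragment below is a hedged sketch of the host side: the application name, remote URL, and shared dependency are invented for illustration, though `ModuleFederationPlugin` itself and the `remotes`/`shared` options are real webpack 5 features:

```javascript
// webpack.config.js (fragment) — a host runtime consuming a remotely deployed
// bundle at runtime. Names and URLs here are illustrative.
const { ModuleFederationPlugin } = require("webpack").container;

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: "shell", // this runtime acts as the host
      remotes: {
        // the blog micro-frontend is built and deployed independently,
        // and fetched at runtime from its own deployment
        blog: "blog@https://blog.example.com/remoteEntry.js",
      },
      shared: {
        // the shareScope: without governance on versions agreed here,
        // you risk exactly the version skew described above
        react: { singleton: true, requiredVersion: "^18.0.0" },
      },
    }),
  ],
};
```

The `shared` block is where the governance question becomes concrete: every participating runtime has to agree on what goes in it, and on which versions are acceptable.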

Operations - Where To?

We're now in the final stages of questions for our methodology for building frontends in composition. The blue slots map to operations. We made a lot of decisions that helped us design, specify, select the technical stack, and define integrations. Where are we going to be deploying our composable application to? That still needs to be answered. This is where cloud knowledge comes in handy. By learning about cloud infrastructure, services, and models, we as frontend developers gain a deeper understanding of how our code fits into the larger picture, into the larger system, and how it interacts with other components. This knowledge also enables us to design and architect applications for the user experience that are more scalable, like we mentioned earlier, flexible, and cost efficient, and that we can publish and continuously integrate to with relative ease. To determine the appropriate service to deploy our decoupled frontend, we must consider our containerization and container orchestration requirements. However, although containers are a topic that is not very familiar to frontend developers, this does not imply that decoupled JavaScript micro-frontend applications can only be deployed to containers, or that you have to know about things like Docker, or that you now need to know how to orchestrate with Kubernetes or anything like that. In certain instances, they can even be deployed to object or blob storage. However, this is not feasible for frameworks that feature server-side rendering and rehydration, like we were explaining before, because those need or require a backend runtime. It's fine for static applications, but when we are leveraging a server-side rendered component or strategy, we require a backend runtime.
If a team building a decoupled frontend does not want to develop and maintain a containerized backend runtime, such as, for example, a Fastify or Express server for Node.js, they can also use cloud native options as the execution context.
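One such cloud native option is an HTTP-triggered serverless function. The sketch below is in the style of the Azure Functions Node.js programming model, where the platform passes a context object and the request to an exported async function; the function name and response payload are invented for illustration:

```javascript
// A minimal HTTP-triggered serverless function sketch: the managed platform
// invokes it on demand, so the team ships no server of its own. The payload
// shape here is hypothetical.
async function getBlogPosts(context, req) {
  const tag = (req.query && req.query.tag) || "all";
  // In a real system this would query the data storage behind the API gateway.
  context.res = {
    status: 200,
    headers: { "content-type": "application/json" },
    body: { capability: "blog", tag },
  };
}

module.exports = getBlogPosts;
```

The frontend simply fetches the function's URL; provisioning, scaling, and scale-to-zero are the platform's concern, which is the point of the serverless model discussed next.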

However, I would like to emphasize the crucial point that serverless infrastructure is entirely managed, and that it has the ability to scale down to zero. Serverless infrastructure eliminates the need to provision and maintain servers, which is especially appreciated by frontend developers with no infra skills, and by those who want to concentrate on writing code, while also scaling down to zero when there is no need for computing. At a very minimum, in order to better design applications using meta-frameworks with server-side render, or hybrid server-side render plus static site generation capabilities, and implement an effective hydration strategy, we as frontend developers should understand the benefits of serverless functions executed in origin, and those that are executed at the edge of the network. What their pros and cons are, and how they differ, are important aspects to understand, especially in order to compose at the edge of the network. Alternate runtimes, such as Wasm, or WebAssembly, and the WASI, or WebAssembly System Interface, shim also provide the ability to execute code and integrate more closely with the user. The web platform APIs and cloud event, messaging, and streaming services enable data streaming, which in turn facilitates the creation of highly dynamic compositions, from browser to cloud and cloud to browser. It's worth remembering, though, that fast compute makes sense when there are transfer protocols and infrastructure that are just as fast. As we already discussed, most of the phones our users are browsing with are not built for speed, or connected to fast enough networks. If we think about 5G global deployment, it has only 25% coverage worldwide, and it will only reach the same coverage as 4G by 2027. Design with the user in mind, and with those elements that they have access to, to facilitate the delivery of these very fast applications.

Because we know that data matters, data is everything for our applications, especially as we continue to move in the direction of huge amounts of data collection, estimated at 200 zettabytes by 2025. We need to learn to work with data in ways that don't completely deteriorate runtime performance. That can only be done by choosing the right database model to match each use case we're building independently and composing, because every database serves one use case or several. A single database may fit all of your use cases across a system and be used by different teams, or you can consider having multiple databases and connect them in the system using an event-driven pattern. You can deploy different databases that are connected to a composable frontend, then dump all the data to a sink and consume it in an event-driven fashion. Event-driven architectures benefit from data streaming and event models in the cloud, and also in the browser, so we can use event grids or hubs to produce and consume messages across a whole system. We can use publication and subscription buses natively in the cloud and in the browser, for example, the postMessage API in the browser. We can also consolidate endpoints in the cloud with an API gateway, and use those gateways as a proxy to validate tokens on browser-to-cloud requests, for example. Again, knowing the cloud native infrastructure and what it has to offer is essential to building better composable frontend applications.
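The publish/subscribe pattern described here is the same whether the bus is a cloud messaging service or the browser's postMessage API. Here is an illustrative in-memory sketch (not any specific library; the topic name and payload are made up) showing how two decoupled frontend fragments can communicate without importing each other, since they only share topic names.

```javascript
// Minimal in-memory publish/subscribe bus.
class MessageBus {
  constructor() {
    this.handlers = new Map();
  }
  subscribe(topic, handler) {
    if (!this.handlers.has(topic)) this.handlers.set(topic, []);
    this.handlers.get(topic).push(handler);
  }
  publish(topic, payload) {
    for (const handler of this.handlers.get(topic) ?? []) {
      handler(payload);
    }
  }
}

// Fragment B reacts to an event that fragment A publishes,
// with no direct dependency between them.
const bus = new MessageBus();
const received = [];
bus.subscribe('cart:item-added', (item) => received.push(item));
bus.publish('cart:item-added', { sku: 'abc-123', qty: 1 });
```

Swapping this bus for window.postMessage, a BroadcastChannel, or a cloud event hub changes the transport, but not the decoupling the pattern buys you.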

Operations - How?

At this point, we have almost all the definitions we need to be successful with decoupled frontends in the cloud. We have made decisions from developer setup to cloud services. The only missing piece is how we are going to go from code to cloud. We have put most of it together, and in this case, because of the low-level nature of provisioning and deployment mechanisms, we will need to decide how we want to make those provisioning and deployment decisions repeatable. When dealing with larger and more intricate system distribution and composition, it is advantageous to learn a single declarative and repeatable approach to configuring, provisioning, and deploying the services and artifacts required to operate, secure, monitor, cache, and distribute applications. For small to mid-size applications, the provider may handle all of that for us, but as the application grows, it is essential to have a comprehensive understanding of the configuration, provisioning, and even deployment process. We may never do this ourselves as frontend developers, but there are tools that can significantly help us get started if we want to.


After all this work, when all the colored slots are filled in, we should have a completed matrix of tools and technologies. That matrix maps to all stages of our development cycle. As we explained before, the focus is on design, development, and deployment or delivery, which in turn makes possible the architecture, development, and operations (or DevOps) strategy for agile teams, and helps us go from idea to application and to publication in the cloud, especially for highly agile teams working with composability, or composable architectures, also in the frontend. If you want to know more about decoupled frontends and the "architect for the user experience" approach in cloud native, visit my site,




Recorded at: Feb 21, 2024