
The Web's Next Transition



Kent C. Dodds discusses how the transition to the next version of the web will impact user experience, development productivity, and business goals.


Kent C. Dodds is a world-renowned speaker, teacher, and trainer, and he's actively involved in the open source community as a maintainer and contributor of hundreds of popular npm packages. He is the creator of EpicReact.Dev, and he's an instructor on Frontend Masters.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.


Dodds: My name is Kent C. Dodds. I am so excited to talk with you about the web's next transition: where we came from, where we are, and where we are going. This talk is a journey through time as a web dev. A bit forward looking, and there's a bit of a game that we're going to play, I'll tell you about that. This is not comprehensive. If you've been doing web dev for over a decade, you're going to see some stuff and hear some stuff in here that you're like, that's not quite how I remember it working out. I do feel like I've been pretty fair to the history. The core concepts of history are actually really important to the forward looking that we're going to be doing in this talk. Let's also reference the blog post; it's called "The Web's Next Transition," and it is the blog post form of this talk. If you see a chart in here, and you're like, "I want to look into that a little deeper," you can go take a look at it there.

Web Architectures

HTML 1.0 was never standardized. The first standardized version of HTML was 2.0 in September of '95. JavaScript came three months after that, and then the HTTP specification came even after that, which blows my mind. Then CSS was a year after JavaScript. It's possible that you could say you've been a JavaScript developer since before CSS was invented. In fact, I have a friend who says that: Ryan Florence has been doing web development since before CSS was invented. This just blows my mind that this is how that rolled out. That's how standards work. It just bubbles up and happens like this. This was over 25 years ago. What's even funnier, though, is that XMLHttpRequest wasn't standardized until 2016, a full decade or more after people were actually using this approach to build their applications. It blows my mind, honestly, what this says about our industry, using things that haven't been standardized. I think that it's really cool that we've been able to build web applications on top of what has evolved over the last 25 years.

My point of all of this is that you've been able to build web applications on the web for over 25 years, all the way back with anchor tags to link to different pages, and forms for users to submit data to your application. That's all you need to build a full web application. Over the last more than 25 years, we've had different architectures. We're going to talk about these. First, a multi-page app. From the very beginning, everything could only be a multi-page app. Then we got some more capabilities, and so we could progressively enhance these multi-page apps to make them even better. Then single page apps became a big thing for very important reasons that we'll talk about. I think that we're gearing up for a new transition. We're seeing it more every day. We'll talk more about that. The game we're going to play is follow the code. In each one of these architectures, I want you to pay attention to where the code is being run, who's writing the code, maybe think about that as well. The code that we're going to be following is persistence. The code related to saving and retrieving data. Routing, so taking a URL and sending it to the right code for it. Data fetching, so communicating with the persistence layer to get the right data for a particular page. Mutations, so communicating with persistence, like it's the bridge between the user's form and the persistence layer. Getting things in the right place there. Rendering logic, generating the HTML that the browser is going to render. Then UI feedback. When the user does some interaction, giving that user some feedback. A spinner, for example.

Multi-Page Apps (MPA)

Let's follow the code, and we're going to start with multi-page apps. On the client side, just UI feedback is here, and it's in gray, because we actually don't write this code. This is going to be stuff that is built into the browser. The favicon is spinning, maybe a little indicator on the bottom of the page will say what resource is being waited on, whatever. When we write it, the UI feedback, or the box will be green for us. On the server is where all of the code lives, persistence, routing, data fetching, mutation, and rendering, everything lives there. Let's look at a couple of requests that the browsers are going to make, so that we can analyze a little bit what the characteristics of this particular architecture are. First, there's the document request, user enters a URL, what happens? The server is going to get that request. They'll look at the URL to know what code needs to be called to fetch what data. It'll communicate with the persistence layer. Then that data gets fed into the rendering engine. You'd have a template of some kind to generate the HTML that will be served to the browser, all the while the browser is rendering UI feedback so the favicon is spinning. Then finally, the browser renders the HTML, request all the images that were in the HTML and all that stuff. Pretty simple, straightforward.
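The whole MPA document request can be sketched as a single server-side function. Everything here, the data, the route, and the template, is a hypothetical stand-in, not code from the talk:

```javascript
// Hypothetical stand-in for the persistence layer.
const db = {
  getProjects: async () => [{ id: 1, name: "Website redesign" }],
};

// Rendering: turn data into a complete HTML document, on the server.
function renderProjectsPage(projects) {
  const items = projects.map((p) => `<li>${p.name}</li>`).join("");
  return `<!doctype html><html><body><ul>${items}</ul></body></html>`;
}

// Routing + data fetching: map a URL to the code that loads its data,
// then feed that data into the rendering step.
async function handleDocumentRequest(url) {
  if (url === "/projects") {
    const projects = await db.getProjects(); // data fetching -> persistence
    return { status: 200, body: renderProjectsPage(projects) };
  }
  return { status: 404, body: "<!doctype html><html><body>Not found</body></html>" };
}
```

Every concern from the follow-the-code game lives in this one place on the server, which is exactly where the simple mental model comes from.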

Then you also have the redirect mutation request, where the user submits a form that goes to the routing layer on the server. Mutation is handled, persistence is talked with, and then the mutation sends a redirect response. This part is actually really important. If you've ever seen a website where it says, don't hit the back button, or where hitting the back button pops up a little dialog that says confirm resubmission, this happens when, instead, the data mutation sends back new HTML. By doing that, that adds the post request into your history stack. When the user hits the back button and gets back to that post request, the browser says, I got this HTML by submitting a post request, so I have to submit that post request again to get the HTML again. That's not good. The industry created this pattern of post redirect get. After your post request, the backend would send a redirect response, like a 302, for example, with a location header that said, this is the page you should go to. It could even be the same page. The fact is, by doing that, that post request doesn't end up in the history stack, and so you don't have the problems with hitting the back button. If you're building an MPA, please just implement the post redirect get pattern, and then you don't have to have that little text that says, don't hit the back button. Data mutation sends a redirect, the browser says, ok, let me go and get this new page. Everything from here on below is actually just a document request like we had before. Pretty simple again. What's cool about this is that the data fetching happens here again, and so we don't have to worry about making sure that the user has the latest data that they just mutated or anything, because we're always going to get the latest data as part of that post redirect get process. We're always going to render that latest data.
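The post redirect get pattern can be sketched like this (the `/comments` route and the handler shape are hypothetical):

```javascript
// Hypothetical in-memory persistence.
const comments = [];

function handleRequest(method, url, body) {
  if (method === "POST" && url === "/comments") {
    comments.push(body); // mutation -> persistence
    // Respond with a redirect (a 302 or 303), NOT with HTML, so the POST
    // never ends up in the browser's history stack.
    return { status: 303, headers: { Location: "/comments" } };
  }
  if (method === "GET" && url === "/comments") {
    // The follow-up GET always re-fetches the latest data, so the user
    // sees the comment they just posted without any staleness bookkeeping.
    const items = comments.map((c) => `<li>${c}</li>`).join("");
    return { status: 200, body: `<ul>${items}</ul>` };
  }
  return { status: 404 };
}
```

Hitting the back button now lands on the GET, which the browser can safely repeat.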

That's it. One of the main pros of the MPA architecture is that it has a really simple mental model. With that simple mental model come a lot of nice benefits. A lot of you who've been doing this for over a decade probably lament how easy it used to be; you're just like, it can't be that easy these days. I would convince myself that it's because our users are asking a lot more of us, like they expect a lot more. While this is true, I think there's something to this mental model that would be really nice to have again, so: simple mental model. It was also the only way to do it; that's why everybody did it. A couple of the cons. You get a full-page refresh. Can you imagine if we were building Twitter, and it was an MPA architecture? Then we said, I want to be able to make it so that people can favorite tweets. They can put it in their favorites or something. We put a little star or a heart, and then they click it, and now it's red, and it's beautiful. You implement it, and every time somebody favorites a tweet, they get a full-page refresh. Nobody's favoriting tweets, because it's a terrible experience. Yes, full-page refresh, not good. Certainly not for the app-like experiences that we eventually wanted people to have.

Progressively Enhanced Multi-Page Apps (PEMPA)

Lack of UI feedback control was also a significant problem with multi-page apps. While we've had JavaScript since just a few months after the creation of HTML, or the standardization anyway, we just never really used it for UI feedback control until we got into the PEMPA architecture. This is the Progressively Enhanced Multi-Page App, the PEMPA. What this means is you keep all of the existing code you had for your MPA, so that's your baseline of a functional app. Then, on top of that, you build additional code that makes the experience better. This is really the key idea of progressive enhancement: you have a baseline of something that functions, and then you use the features of the browser, for example the ability to execute JavaScript, to make that experience better. There are a lot of cool things that come out of this, not just user experience. The client side has all brand-new code, the server has all the same code it had before, but also some new code to account for this new client.

The document request for a PEMPA is almost exactly the same as an MPA. The progressively enhanced multi-page app, it still has to have that baseline and functional app, and so the document request is going to be the same. The only difference is that we have JavaScript that's going to load when the browser renders that HTML. That's it. Now we have some new capabilities that we didn't have before. Remember, we had document requests and redirect mutations, we now also have client-side navigations. The user clicks on a link, we prevent default, and have a client-side router that says, I know where they're going, let me go fetch the data that they need. That data fetching logic is going to call the server and it's going to call a new endpoint that wasn't there with our MPA. This is a special endpoint just for our PEMPA, that is going to serve up some JSON. It's actually data fetching, goes through persistence, comes back, goes through our data fetching logic on the client. Then that gets rendered, all the while now we're in control of UI feedback, we can show a loading spinner or whatever we want, the skeleton UI, whatever is necessary for us. That's a new capability that we didn't have with the MPA. That little yellow rendering box, keep an eye on that. That's going to cause us problems later.
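That client-side navigation enhancement might be sketched like this (the `data-enhance` attribute, the `/api/….json` endpoint convention, and the `showSpinner`/`render` helpers are all hypothetical):

```javascript
// Map a document URL to the JSON endpoint that the PEMPA added for it.
function apiUrlFor(path) {
  return `/api${path === "/" ? "/home" : path}.json`;
}

// The enhancement only runs where a DOM exists; without JavaScript, the
// links keep working as plain document requests (the MPA baseline).
if (typeof document !== "undefined") {
  document.addEventListener("click", async (event) => {
    const link = event.target.closest("a[data-enhance]");
    if (!link) return;
    event.preventDefault(); // take over navigation from the browser...
    showSpinner(); // ...which makes UI feedback our job now
    const data = await fetch(apiUrlFor(link.pathname)).then((r) => r.json());
    render(data); // client-side rendering -- duplicated on the server!
    history.pushState({}, "", link.pathname);
  });
}
```

Note the `render(data)` call: that is the little yellow rendering box the talk warns about.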

Then, inline mutation requests, another new feature that we couldn't do before: this is favoriting the Twitter tweets, or the Mastodon Toots, or whatever. The user submits a form, we can show a little inline spinner or something, or in the case of Twitter, they instantly turn the heart red, optimistically hoping that the favorite is going to work, and 99% of the time it does. That works out. Then, we call into our data mutation code on the client, which is going to call some API endpoint that again, the server will handle with its routing code, call the data mutation code on the server, talk to persistence, come back, and then render the updated UI. Now the tweet is favorited, or whatever. A new capability, really awesome. Again, that little rendering yellow box is going to cause us some problems. The redirect mutation request, so we do still have this capability, and this is very useful for stuff like you're creating a new GitHub Issue, you hit Create, and it's going to take you to a new page. The way that this works is we can show some UI feedback: you can either instantly navigate over to that page and show a skeleton UI, and sometimes that's useful, but in the case that we're talking about with the GitHub Issue, that's not how we do it. We click on it, and you could show an inline spinner or a little "Creating…" indicator or something. We call our data mutation on the client, which is going to call an API route on the server. That'll get routed to data mutation code on the server, go through our persistence layer, and then come back to our client-side mutation that will say, ok, the mutation is done. Now we're going to route them to the new page, to their new issue they just created. The router on the client will take care of that. It will call the data fetching code, which will call an API route on the server, and then that goes through data fetching and persistence on the server and comes back. Then, finally, we can render.
This actually gives us a much better user experience than we had before. We no longer have refreshes. We get UI feedback control. This is awesome, really nice benefits.
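The optimistic favorite described above might look something like this (the `sendFavorite` network call is a hypothetical stand-in for the client's data mutation code):

```javascript
// Optimistic UI for an inline mutation: flip the heart immediately,
// then roll back if the server says no.
async function favoriteTweet(tweet, sendFavorite) {
  const previous = tweet.favorited;
  tweet.favorited = true; // optimistic: turn the heart red right away
  try {
    await sendFavorite(tweet.id); // data mutation -> server -> persistence
  } catch {
    tweet.favorited = previous; // the ~1% failure case: undo the update
  }
  return tweet;
}
```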

We had a couple of problems. First, prevent default. What we're preventing is a lot of stuff that has gone into the browsers through all the standards processes. We are just turning off all that stuff and saying, I know better. Of course, we have to do that to be able to get these pros. The problem is that we don't actually normally think about all the things that the browser is doing. What do you do if the user submits the form twice in a row really quick, or they click on this link, and then they click on that link, or there's a race condition on the requests that are going on, which one is going to win? We don't often think about these things. The browser has already thought through all those things and has some consistent behavior that it will do in those situations. If you don't know what that behavior is, then I encourage you to pull up the devtools on a page that doesn't have any JavaScript on it and click on a bunch of links in order and submit a bunch of forms, and just see what the browser does. Race conditions are difficult to simulate. The browser does a lot of really neat things that we don't often take into account when we're preventing default. If only we had some browser emulator or something in JavaScript, something that would behave just like the browser does, so we don't have to think about this. That's foreshadowing.
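One concrete example of behavior the browser handles for you: when two navigations race, the latest one wins. A sketch of re-implementing just that one rule by hand once you've called preventDefault (the `fetchData` and `render` callbacks are hypothetical):

```javascript
// "Latest navigation wins": drop responses from navigations that were
// superseded before their data arrived.
function createNavigator(fetchData, render) {
  let latest = 0;
  return async function navigate(path) {
    const id = ++latest; // tag this navigation
    const data = await fetchData(path);
    if (id !== latest) return; // a newer navigation started; drop this one
    render(data);
  };
}
```

This is one tiny slice of what the browser does by default, and with preventDefault, every such rule becomes your code to write.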

Another thing is, we have more custom code. Again, we have all the same server-side code, but then we have more server-side code to handle this new client-side code that we have. I'm no data scientist, and I know that correlation doesn't imply causation, but I'm pretty sure that the more code you have, the more bugs you're going to have. There's a correlation there, I believe. It's not necessarily a bad thing. I understand you want more features, you're going to need more code most of the time, but that also was a problem. Here's the big one, the really big problem that progressively enhanced multi-page apps have, and that is code duplication. Let's think about the GitHub UI for adding a new comment to an issue. You're on a page, it's got like three comments on it already. By landing on this page, the server had to have a template for each one of those comments, and not only the comments, but also the contents of the comments. You've got code blocks and all sorts of things in there. There's definitely a pretty complicated template that's managing that in the Ruby code. Then, you go to enter in your own comment, and you hit submit, and that's going to do a request. Again, we don't want a full-page refresh. When that request is finished, and we need to render that new comment that you just made, we have to have that template on the client as well. There is code on the client for rendering comments, and there's code on the server for rendering the comments on those issues. This duplication is everywhere in PEMPAs. It's not just the UI aspects, but also validation. There's a lot of code duplication with the PEMPA architecture. That's just the way it is. This caused a lot of problems. Code organization, also a bit of a problem. jQuery spaghetti was a really common ailment back in those days. Then there was a lot of server-client interaction. We had to work through a lot of stuff to make that less of a problem, but it still is.
If you've got a client that's talking to a server, there's no real contract, no enforceable contract that you can have that says, this server has to respond to this client in this way. There's been a lot of tooling created to try and make up for the fact that there's no real way to know for sure that the server you're talking to is the one you're expecting, and responds in the way that you expect it to.

Single Page Applications (SPA)

For these reasons, mostly the code duplication, we moved on to Single Page Applications, SPAs. What this did is, remember, you had your server, and then you progressively enhance with a little bit more server code, and then all this client-side code. What they did when we moved to SPAs, is we took that core server that was there to make sure that things work as a baseline, and just deleted it. Now all we have is the client-side app. Then we have the little piece of the server that was built for the client-side app. It's just REST APIs now. This solved the big problem, which was we got rid of the rendering layer on the server side. We said, there's a lot of code duplication, let's just delete all of that. Now we don't have code duplication, and we get our single page apps.

Let's look at the implications of this architecture. First off, the document request: this changed dramatically, so different. The user enters a URL, and we go to a server, probably a CDN, to get a static file. That static file might be pre-generated or pre-rendered; lots of static site generators would generate a bunch of these static files for every route. In most apps, though, there's probably some dynamic data that needs to be loaded in by the JavaScript. The browser will render that HTML. Then when the JavaScript loads, the JavaScript can render out all of the skeletons and whatever it needs to, while the JavaScript goes into the routing. Then, once we get to the routing, for most modern apps we're code splitting, so we're taking our really big app and we're separating it into a bunch of different chunks. This just makes things load a little bit faster. Because if the user lands on the Twitter homepage, they don't need the settings page code until maybe the user goes to the settings page. We split it up into smaller chunks. What that means, though, is that when we get to our routing code, and we start rendering stuff, we have to go and fetch more code. We actually end up in a little bit of a cycle right here, including the data fetching piece, until we finally have everything we need. What this results in, and you see this everywhere, is things popping into place, little spinners all over the place, banners popping in. We call this content layout shift. It's a terrible user experience. Through all this, we're calling in some data fetching code that goes to call the server API endpoints to talk to persistence and go through data fetching. Then, finally, when it's all done, we render. Again, this can actually all be a cycle of things that happen several times on this document request. Your banking website probably does this. There are a lot of sites that we use every day that implement this architecture. This is the experience, and it's so bad. We've got to stop doing this.

What's really interesting is the client-side navigation: this did not change. This is exactly like a PEMPA. Inline mutation request, did not change, exactly like a PEMPA. Redirect mutation request, did not change. Exactly like a PEMPA. The only thing we changed by transitioning from a PEMPA architecture to a SPA architecture is we made the document requests a lot worse. Why did we do this? Who came up with this bad idea? It seems like we just made things worse. We didn't just make things worse, we actually made things way better for developers: no code duplication. Developer experience is a really nice input into user experience. I'm not saying that it's all bad, because we can ship faster, since we don't have to worry about this code duplication, which was a major problem. For a lot of apps, that user experience for that first load is a really big problem. I understand, for some apps, this is behind a login screen or something, they already paid $100,000 to be able to use our app, this enterprise customer. They don't care if it's going to take another second. They don't care if it's janky and stuff. First of all, you're wrong. They do care. Second of all, what if we could just have both? What if we could have a really nice user experience and a really nice developer experience? That's foreshadowing.

Let's talk about the cons. The problem is, other than the code duplication, we still have the rest of the PEMPA cons. We also have a bigger bundle size. Waterfall, you're going to have a waterfall. You can't avoid network waterfall. That's just the thing. Do you want a cascading waterfall, or do you want Niagara Falls? You want Niagara Falls, but because we have to load code to know what data we need, and that leads to more code, and then more data, and only then can we finally render our images, we get the cascading kind, and that is not a good thing. That's what causes the content layout shift. Our runtime performance is also bad. I know that the phone that I've got in my hand right now is amazing and can do some really awesome things, but it still struggles to render basic websites sometimes. There's actual measurable harm done by this runtime performance. Then we had to start thinking about state management. We had to think about that a little bit with PEMPAs in some cases. Here with SPAs, you absolutely have to think about it. State management can be an enormous pain. In fact, just look at npm, and you see hundreds of npm modules trying to solve this problem.

Progressively Enhanced Single Page Apps (PESPA)

Clearly, it's a big problem. This is why I think we should be ready for the web's next transition. I am so ready for this. I'm screaming in your face excited ready. That's how ready I am. The web's next transition is progressively enhanced single page apps. We went MPA to PEMPA to SPA to PESPA. There you go. Let's look at this. This architecture says that the routing logic, data fetching, data mutation, and rendering is shared across the network boundary. It's the same code. We basically said, let's back up, we'll pretend SPAs didn't exist, and we're still at the PEMPA stage, how can we solve the code duplication problem without deleting our baseline of a functional app? The way that you solve the code duplication problem is by using the same code on both sides. That's it. That's the entire idea behind the architecture. A document request, looks exactly like a PEMPA. Exactly. It's the same thing. PESPA client-side navigation is very similar to a PEMPA, except now our routing layers are a lot more interested in our data fetching. The routing on the client is going to be calling into our data fetching. It has a little data fetching code in the client that will communicate to its corresponding data fetching code on the server, which will interact with persistence and come back, send it to the router, and the router says, "I've got my data, let me send that off to the rendering."

Then inline mutation requests, actually very similar. We have the router, talks to the data mutation code, that talks to its corresponding data mutation code on the server, that talks to persistence, comes back, and the router says, "I know that mutation has happened. Let me go revalidate the data that's on the page, with the data fetching logic." It goes to persistence, comes back, and then we can render. What's really cool, that's the inline mutation request, here's the redirect mutation request. It's exactly the same. This is actually really great, because it means we get a simple mental model, whether you're doing an inline mutation with a Twitter favorite, or you're doing a redirect mutation with creating a new GitHub Issue, you get the same mental model. The mental model you get feels like an MPA. That simple mental model we had over a decade ago, when everybody was just building these really simple apps, we get that mental model. We also get the power of a SPA. We don't have full page refreshes. We get our UI feedback control. We also get the browser emulation. Like I said, the prevent default was a problem with PEMPAs, but with the PESPA, your framework that you're using that implements this architecture is going to pretend that it is a browser. That's what gives you this simple mental model, is it emulates the browser, behaves the same way as if the browser were handling the link clicks and the form submissions. This is great.
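The mutate-then-revalidate loop the framework runs for you could be sketched like this (the `action` and `loader` callbacks are hypothetical stand-ins for what a PESPA framework wires up):

```javascript
// Browser-emulating form handling: run the mutation, then either follow
// the redirect or revalidate the page's data, exactly as a full-page
// POST/redirect/GET would.
async function handleFormSubmission(action, loader, formData) {
  const result = await action(formData); // mutation -> persistence
  if (result && result.redirect) {
    return { redirect: result.redirect }; // router navigates, then loads
  }
  // Inline mutation: revalidate so the page always shows fresh data.
  return { data: await loader() };
}
```

Both mutation styles go through the same code path, which is where the shared MPA-like mental model comes from.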

We don't have code duplication, because this code is the exact same: the rendering code, the data fetching code, the action code. In fact, with the PESPA architecture, most of the code that you write is all just going to be on the server, and then the framework is going to be responsible for calling that server-side code. You don't even have to worry about that part. That applies to validation as well. If you do have validation that you want to run on the client, because you just want it to run faster or something, that's very easy to share as well, because it's going to be the same language. The PESPA architecture does not necessitate TypeScript or JavaScript, you can use other languages. Other frameworks are trying to implement the same PESPA architecture outside of the JavaScript ecosystem, outside of Node, and the V8 isolates and all that. Personally, I'm a big fan of TypeScript, so I'm going to use that. I think that you're in a much more powerful position when you're just embracing the web platform language.

We also have reduced client-side JavaScript, because again, we're pushing a lot of our code, our loaders and actions and all of that stuff into the server side. We don't have to worry about waterfalls. Of course, there is a waterfall, but it's this, it's like Niagara Falls waterfall. Because the way that it is structured means that we're server rendering, so we know all of the assets and everything that we need from the very beginning. The browser can just go get all of that. Really awesome. Then, no application state management because we have the same mental model, we're progressively enhancing. By nature of progressively enhancing, we cannot have an application state management tool. That just doesn't work in the world of progressive enhancement, because you need to be able to work either way with or without the client-side JavaScript. No, you don't have application state management. You do have some client state that you need to worry about sometimes, but application state management, not a thing.

Cons is an interesting one, because when you start dating somebody new, for example, you probably notice a lot of their really bright points, their positive things. That's what attracted you to them in the first place. You might have noticed a couple things that you're iffy about, but you look beyond those, because you're curious enough to get to know them a little bit better. Then, over time, you see that they do in fact have weak points. You're like, that is something I don't like so much. You look past those, because you also have weak points, and they're doing the same for you. We do have cons, and we'll probably discover more. Let's talk about some of those cons. First, it requires servers. You're going to need to worry about servers again. When I started the transition into SPAs, I loved that I could just make a static build, and put those files in an S3 bucket and call it a day. The reason that I loved that is because I didn't like managing servers. I have good news for you. We have really nice managed server services these days. Even if you want a long-running server, you don't have to embrace the serverless thing. If you want to, you can, totally: PESPA works perfectly well in a serverless architecture. Even long-running servers can be fully managed for us these days. It's amazing. The one I personally use is amazing. You do have to worry about servers. There's a little bit of that. Server cost. This one I actually don't get, because if you're at the point where you've outgrown the free tier of a service, then you're probably making money on that, or hopefully you are. You should be able to afford it. Cloudflare Workers, you get like a million requests for 15 cents. There is a cost for sure, absolutely. Universal code, so you have to think about making sure that your code runs on the server, because you're going to server render. You're going to figure out a way to make this code that runs on the client also run on the server.
It is possible, of course, to say, this is not going to render on the server, I'll just make sure that it only renders on the client. That's actually not very difficult, either. It is another consideration and concern that I've heard. Absolutely, we will discover more problems.

PESPA Implementation: Remix

I want to talk about a specific PESPA implementation. What's really cool is that, over the course of the last year and a half, a lot of frameworks have actually started adopting a lot of these architectural characteristics. The one that has really been leading the charge, in my mind, is Remix. I was a co-founder of Remix, and I left the company to pursue Epic Web Dev. Remix has been pushing this really hard, and it's phenomenal. Let's look at a specific example in Remix. Here, we start with our loader. This is all a single file. You've got your server code and your UI code in the same file. The UI code runs both on the server and the client. The server code only runs on the server. Loaders and actions are only on the server. Your loader takes a request. This is a web fetch request. The more you learn about Remix, the better you get at the web, because it just uses so many of the web primitives, which I think is awesome. More frameworks are adopting this characteristic as well, which I think is great. Inside of here, you can do any async thing that you need to in order to load data. Here, we're just getting some projects; that could be making a fetch request to a downstream service that serves that data. Maybe you're using Remix as a backend-for-your-frontend type of thing. We get our projects. Then, in the UI code, we call this useLoaderData hook. This is React code. Remix in the future very definitely will support other UI frameworks. If you're not super into React, then, yes, you don't have to use React to get a PESPA architecture. I should say also that if you don't like React, the things you don't like about it are probably not problems when you're using Remix. Give it a try, believe me. Anyway, we get the project loader data. Notice there's no "if loading, show spinner" or any of that nonsense, because you wouldn't do that in here.
If you're navigating between pages, you would show a little spinner in place for whatever link you clicked on, or something like that. If you do want to show some skeleton UI, there are ways to do this with the really awesome streaming API support that Remix has with the defer API. You don't have to. In any case, what's cool about this is that this is a happy path component. You're just thinking about the happy path here. We have our projects. We map over those. We link to them, that's awesome.
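Because loaders speak standard web Request and Response objects, the loader half of such a file can be sketched without any framework at all. This is not Remix's actual API, just a framework-free approximation of the idea, and `getProjects` is a hypothetical stand-in for the downstream data source:

```javascript
// Hypothetical stand-in for whatever service actually serves the data.
async function getProjects() {
  return [{ id: "p1", name: "Remix rewrite" }];
}

// This function only ever runs on the server, like a Remix loader does.
// It receives a standard web Request and returns a standard web Response
// (both are globals in Node 18+).
async function loader({ request }) {
  const projects = await getProjects();
  return new Response(JSON.stringify(projects), {
    headers: { "Content-Type": "application/json" },
  });
}
```

The UI component then consumes that data and renders only the happy path.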

Then we have the opposite side of this: mutations. We have our form. This actually will just render to a regular form. If the JavaScript doesn't load in time before the user submits this form, then the regular form behavior of the browser will kick in, and Remix's backend actually understands this and will behave exactly the same. Because again, Remix is just a browser emulator. It doesn't matter whether the browser is doing its thing, or Remix is doing its thing, it's going to behave the same as far as you are concerned. That just makes it so much easier to work with. We've got this form. What's cool about this is we have this useNavigation hook that will tell us when this form is being submitted. We can show a little bit of progressive enhancement in here. This is just a tiny bit of code that is added so that we can make the experience better. That is progressive enhancement to a T. That's what progressive enhancement is. While that is being displayed, the action is going to be called with the request, and we can get the request's form data just by calling request.formData(). This is stuff you can learn about on the Mozilla Developer Network, the web docs of the world. The request object is the standard request object, so if you want to learn how to get the form data from a request object, here it is. This is the way the web standard works. I just think that's phenomenal. Then we can validate this project, just to make sure that things are correct, and send an error back. Then we can use that error to display that error UI. Otherwise, we just send that redirect.
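The action half can be sketched the same framework-free way, using only web-standard Request/Response (the field name, the validation rule, and the redirect target are hypothetical):

```javascript
// A server-only action: read the submitted form via the standard
// request.formData() API, validate, and either return an error for the
// UI to display or send a Post/Redirect/Get-style redirect.
async function action({ request }) {
  const formData = await request.formData();
  const name = formData.get("name");
  if (!name || String(name).trim() === "") {
    // Validation failed: return the error for inline display.
    return new Response(JSON.stringify({ error: "Name is required" }), {
      status: 400,
      headers: { "Content-Type": "application/json" },
    });
  }
  // Success: redirect, whether the browser or the emulator is driving.
  return new Response(null, {
    status: 302,
    headers: { Location: "/projects" },
  });
}
```

Because the same code handles both the no-JavaScript form post and the enhanced submission, there is nothing to keep in sync.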

I love this. I've been working on a project for two-and-a-half years now. I'm 1800 commits in, and it's phenomenal. It's an amazing experience. What's cool about this is you can work on a project, and then decide one day, I wonder, should I just measure this to see how its performance is. That's what Alex did. This is on his authenticated, most expensive page with no caching at all, except for static files. This is the score he got. I think that's fantastic. I'm going to get a better developer experience and a better user experience at the same time? Sign me up, please.


The last link here is a link to a GitHub repo where I've implemented the same app in all of these architectures, so you can follow the code and see how things differ as you navigate around. It's very cool. All of the stuff that I've talked about is in the blog post. You can take a look and dive a little deeper into some of the charts and things.




Recorded at:

Jan 31, 2024