Pick Your Region: Earth; Cloudflare Workers

Summary

Ashley Williams discusses what affects the performance of web applications and how Cloudflare helps with that.

Bio

Ashley Williams works on the Rust Programming Language and WebAssembly for Mozilla. Previously, she wrote and maintained Rust and Node.js back-end services at NPM, Inc. She is a Rust core team member and leads the Rust community team. She founded the NodeTogether educational initiative and is a member of the Node.js Board of Directors.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Williams: I have named my talk "Choose Your Region: Earth." Just to start out, how many people here have deployed some sort of application or service and had to pick a region before? Shout out your regions. Who's got a region they like?

Participant 1: U.S. East.

Williams: I'm hearing a lot of U.S. there. If you take anything away from this talk, what I hope you take away is that I think that choosing a region across Earth should be a thing of the past. I'm going to explain how you can do that today with Cloudflare Workers.

My name is Ashley Williams. I am known as ag_dubs on the internet. If you follow me on Twitter, I'm sorry. If you have a question, you can tweet it at me, and I will totally answer it. I do all sorts of things. I'm talking today about serverless edge, but I'm also a giant programming language nerd who has a fascination with package management. I do all sorts of work for these organizations here. Today, I'm here to talk to you about the work I do at Cloudflare.

We're going to have a couple of themes for the talk today. The first one is performance. The next thing is accessibility. Accessibility is a very interesting topic. I'm really glad that a lot more people are focusing on it right now, particularly in web dev, with things like ARIA labels and screen readers, but I want to talk about accessibility from an infrastructure point of view. When all of you were like, "My favorite region is U.S.," how many people here have done an NPM install in Brazil? It sucks so bad. That is an accessibility problem. This is why I think accessibility matters for infrastructure as well.

Finally, infrastructure. How many people here read the description for this track on edge? If you didn't read it, the summary is like WTF, edge. We keep saying that word, but do we know what it means? I'm going to hop in a little bit today and describe what I mean when I say edge, but I do think one of the toughest things for edge adoption today is the fact that it can mean a fair number of things. In my case, I'm going to be talking about the edge of a set of cloud servers. We can call that the infrastructure edge. As we've seen throughout this track today, there's also the device edge and all sorts of other things.

Additionally, you may note that the serverless edge offering that I'll be talking about is called Cloudflare Workers, and workers is also quite an overloaded term in the web today with web workers, service workers, worklets. Cloudflare Workers is its own thing, and we'll get into that. Naming is hard.

How the Internet Works

What I want to start with is going right back to the basics of how the internet works. When we think about how the internet works today, I also want to think about how the internet could work. Something that we talk about at Cloudflare is that the internet is super awesome, but when it was originally built and designed, it was absolutely not built or designed for the things that we are doing with it today. A really interesting question to ask yourself is, if you could change how the internet works, how would you go about doing that? I think that we take the internet for granted. We don't really ask ourselves this question a lot, and so I'm going to ask it today because I think it's a very interesting one.

One of the things to note about the infrastructure offering I'm talking about is that we're very focused on JavaScript. I think that it's very interesting to track where JavaScript has gone. In a previous talk I gave at QCon, I talked a lot more about how JavaScript is developed. If we take a look at how things went with the internet and JavaScript, we start with the first website in 1991, and then we get JavaScript in 1995.

I don't know if you are programming language nerds, and I don't know what was in the water, but 1995 was like the programming language year. They all happened. JavaScript also happened, and that was super cool. Here, we're just looking at browser technologies. This is JavaScript back in the day. It was really, "Can we add a little bit of interaction to a website?", not at all a whole application platform.

If we expand this out a little further, what we can see is we start with JavaScript in 1995, and then we get Node.js in 2009. In between, we see single-page applications appearing in 2003. Then very quickly, we see Google Maps, probably one of the most important applications for driving JavaScript client-side applications. Google Maps was one of the first things where it was, "we've got to do this." What you can tell from that is Google Maps comes out in 2004, and the V8 engine comes out in 2008. It turns out when they launched Google Maps for the first time, it was super slow. What they needed to do was really supercharge these JavaScript engines, and that really motivated the development of the V8 engine, which we will be talking about quite a bit today.

One of the nice things to note though is that speed of computation in the browser has been going way up. As much as we love to hate JavaScript and can often think of client-side applications as being bloated or bulky, the things that JavaScript engines and browsers are doing today are absolutely awesome. They're super geared towards performance.

To take a step back, as much as I love JavaScript engines and think that they are fantastic, they have fantastic optimizations, when we think about the growth of the web, and what we've seen with JavaScript applications, we need to ask ourselves, "How much is this revolution of the web costing us?" I'm going to focus on this idea of accessibility.

Accessibility

Addy Osmani, who works for Google and gives a lot of talks on this, has said, "The web is bloated by user 'experience.'" What he's talking about is the JavaScript layer that's adding all of this interactivity on top of a web application. When he talks about this, he's talking about these types of graphs. How many people here know about the HTTP Archive? For those who didn't, today you learned. There are fantastic amounts of data analyzing what's happening on the web, how we're coding the web, and what that web experience looks like.

Let's take a look at this graph. On the y-axis, we have JavaScript bytes in kilobytes, and across the x-axis, we have time. What you might note is that that number is going up. The amount of JavaScript that we are asking folks to download when they're visiting websites continues to trend upward, which is a little bit concerning, because you may also note that the quality of devices is actually going down. We're seeing a proliferation of lower-end devices with lower processing ability and lower memory, yet the amount of JavaScript that we are shipping to each one of those devices is going way up. I think this trend is really concerning.

In general, on average, a mobile site will take around nine seconds to load. That's unacceptable. It's just completely unacceptable. How many people here would wait nine seconds for a website?

Participant 2: Depends on the website.

Williams: Depends on the website, fair. If that's the only experience that you have, you probably will wait. It's a human right to be able to access the internet. The fact that you would have to do it at this kind of speed, I personally find unacceptable. I'm very motivated by this to try and build experiences on the web that let folks have much better experiences than waiting nine seconds to, say, check their bank account balance. That's pretty rough.

Infrastructure

As I said before, I am a systems engineer, and also a product manager and engineering manager, at what I like to call the big orange cloud company, which is actually a little ambiguous. I don't work at SoundCloud, which is also an orange cloud; I work at this company called Cloudflare. In general, a lot of people know Cloudflare as a CDN company. If I say that, I get in big trouble, so we'll scratch that from the video. I like to consider Cloudflare an infrastructure company. Does anybody here use Cloudflare?

We have a lot of free offerings. I think probably one of the most well-known is our DDoS protection. If you want your website to stay on the web, and you have made a set of people who are good at writing bots very angry with you, we're here to help. We're also doing a bunch of work thinking about how we can rebuild the internet for how it's being used today.

I'm not good at DNS. It's tough, but Cloudflare does a lot of it. Part of the reason I joined Cloudflare was because I wanted to learn a little bit more about exactly how the internet works. What I'd say is there are all of these layers. People talk about layer three, layer four, layer seven. That's just not the engineer I am. Cloudflare does this stuff and does a very good job with it; I will talk about infrastructure and, in particular, hardware.

If I'm being really spicy, I would like to argue that Cloudflare is actually a hardware company. That is because our number one competitive asset is the fact that we have data centers all over the world. This is an old map of where we have all of our data centers, and the number is now probably around 190. We continue to add data centers all over the place. Part of the cool thing about these data centers is that each and every one of these data centers runs all of the same code. We'll be talking about pushing applications to this "edge" and talking about cache on this edge.

When I talk about the edge, I mean these purple dots. As we've seen, what an edge means is a little bit fuzzy, but these data centers are what I mean when I say the edge. Edge is something that a lot of people don't entirely understand. Hopefully, by the end of this presentation, you'll have a slightly better idea.

When I start thinking about the internet, and I've taught lots of beginner classes for programmers, I like to start with this client-server relationship. Are we all on the same page about what we mean by a client and a server? If we're not, what I'm going to do is a demo about pizza. Who here likes pizza? I am originally from New York City, and I recently moved to Texas. While Texas has a lovely food scene, what it does lack is good pizza. We're going to be talking about long-distance pizza delivery. To a certain extent, it's about pizza accessibility.

To give everybody a little bit of a key, I'm going to be using some emoji to help understand this. We have this chef, and the chef is going to be our JavaScript program. We have this little bang emoji, and that's going to be the chef cooking; that's going to be your program executing. Then your program's generated output, maybe some generated HTML, is going to be our pizza. Then the end-user that wants this pizza is going to be our superhero. Finally, we're going to have what we call a PoP, or point of presence, which is a very jargony CDN-folk word. You can just understand it as a computer, or a spot for a computer to be. You can also sometimes call this a cache node.

Let's look first at client-side rendering through this lens of pizza delivery. Let's say we have our JavaScript application, and we've chosen U.S. East in New York to host it in. Meanwhile, we have somebody in Australia who would really like some pizza. For client-side rendering, what has to happen is the equivalent of taking that chef who is currently hosted in New York, flying them to Australia to that person's house, where they then set up a pizza kitchen and create pizza. How many people here have a React application or an Angular application? That is the equivalent of sending an entire pizza kitchen to somebody's device and having them cook pizza there.

That's pretty intense. That's relatively invasive. As somebody who came from New York, I don't necessarily think that my apartment could really accommodate somebody cooking a whole pizza inside of it.

Most people who have these sorts of applications are going to take advantage of some sort of cache. Here, if we do client-side plus cache, we can send our chef over to this PoP, maybe closer, somewhere around Singapore. From there, we still do end up having to send that chef to that person's device. However, the chef's journey is not as far. We're still sending a whole chef to cook that pizza.

Server-side rendering also happens. Ten years ago at JSConf EU, Ryan Dahl announced Node.js. The whole point of Node.js was this idea of being able to unify web development to a single language across both the client and the server. This meant that Node.js made the server more accessible to JavaScript developers. Now, in JavaScript, we could do something like this. With server-side rendering, we don't have to send the entire chef to someone's device. Instead, we can cook the pizza on the server and then just send that pizza right on over. Definitely significantly less invasive, but the pizza is now traveling across several oceans.
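
The slide code isn't captured in the transcript, but a minimal Node.js server-side rendering sketch in that spirit might look like this (illustrative, not the actual slide):

    // A minimal Node.js server: the "chef" cooks the HTML on the
    // server and ships the finished "pizza" to the client.
    const http = require('http');

    http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end('<h1>Fresh pizza, cooked on the server</h1>');
    }).listen(8080);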

Similar to a client-side application, you're probably using some sort of cache if you're doing a server-side application. Here, the chef can still stay in New York and make their pizza, and they'll be able to send it to somewhere like Singapore. Then whenever it's requested, that person will just be fetching it from Singapore. What we can note here is that there's a relatively good chance that that pizza is going to be cold. It was pre-cooked before we sent it, so getting really up-to-date stuff is going to be tough.

That's client, that's server; some of you are pretty familiar with this. What I want to introduce now is this idea of the edge. Cloudflare has all of these points of presence where they cache all sorts of different things for you. You get a cache, and you can cache all your static assets. However, we had more space in these data centers than we knew what to do with. Cloudflare asked, "We've got all these baskets. What should we do with all these baskets?" What they decided was, "We have all of these points of presence all over the world, and we've got a little extra space. Now, instead of just caching static assets, what if we gave people space to write and run programs there?"

That is what the edge is. If we take a look at edge-side rendering, here we can see that the chef actually gets to cook in that basket. Before, the chef was always cooking in some other location: either you were sending the chef to the basket, and then from the basket to the person's house, or the chef was cooking in their own spot and then sending the pizza over.

Here, what we're saying is, "The chef should not be cooking pizza in New York, and they also should not be cooking pizza in your apartment, but maybe they should cook pizza really close to your apartment." We can see they make the pizza there, and then it's just a really short trip to send it right to the user in Australia. It's super cool that this chef is able to cook that pizza right there in Singapore.

What's really happening with edge-side rendering is that with the Cloudflare edge, you're now actually able to set up 194 pizza shops all over the world so that people who want pizza can get it cooked super close to where they live and sent to them right away. This is a visualization of how the Cloudflare Workers edge compute platform works.

How Fast?

I've shown you this. Maybe you're thinking, "Ok, this is a lot of moving emojis, and the pizza metaphor is maybe a little dense." Let's figure out why we would want to do this. What I told you is that what I care about is speed. We want to figure out how fast this is. There is something called the serverless benchmark. You can check this out today. This is a screenshot I took a couple of weeks ago. What we can see here are the average cold start times, compared across AWS Lambda, Google Cloud, IBM, Azure, and Cloudflare Workers. The average cold start time is incredibly small.

As a result, not only are these things happening very close to your user, you can execute any type of computation incredibly close to your user. They do not have to wait very long for that computation to start and be sent right back to them. Taking a look at these numbers, one of the things you may ask yourself, as any programmer will, is, "You're going so fast, you must be somehow doing less. The way you actually improve speed is often to do less; you can't super-optimize a ton of things." What I'm going to do now is dive into the constraints that we were dealing with when we were thinking about how we could put compute on our edge, and talk about the very unique architecture that we've come up with to be able to achieve cold start times like this.

Before I dive into that, just a couple of benchmarks. I've taken a simple GitHub Pages site, which we'll actually work with a little later, and put it into a worker instead. These are just a couple of locations around the world. In Cape Town, GitHub Pages would be around 600 milliseconds and Workers 143. The story of these numbers is: big numbers are big and small numbers are small. Workers are incredibly fast, particularly in these international locations. Now, let's get into, how does it even...?

How Does It Even?

We talked about all of these baskets, and so we had this extra space. It wasn't like we were defining this architecture from a blank slate. It was more, "We've got these scraps, can you do something cool with them?" We were, "Ok, awesome." With that neat gift came a ton of constraints. One of the first things we had to consider was scalability. For us, scalability could really mean two things. One could mean traffic. Scaling requests on Cloudflare's network is incredibly easy, so scaling requests is not hard at all. Our network is incredibly huge, it continues to grow, and we can route traffic all over the internet very simply. We're not too worried about numbers of requests.

However, and you may remember this from my demonstration, those baskets are all the same. Scaling, for us, is actually much harder when you think about scaling tenants or applications, how many applications we're storing on our edge, because every single one of the edge nodes in our network has to contain everybody's apps; we keep each one of those nodes homogeneous. Every single one has all of the same software on it. We don't differentiate. It is true that some locations are larger than others, particularly when we're reaching out to new locations. Trying to figure out how we can get everybody to, one, deploy on our platform, but, two, fit on that edge, is actually really tricky.

We're looking at a 100x efficiency situation, which is really tough. There are a couple of angles that we had to think about. The first one was code footprint. When you think about traditional infrastructure, how many people here use a VM or a container? It's a very classic offering. When thinking about trying to host some type of arbitrary code, you might originally reach for something like a VM or a container. However, given our limited space, we needed something that was less than one megabyte. We needed something way smaller: the overhead for a VM could be like 10 gigs, and a container is also very large. We needed something smaller. What the heck could that be? In addition to needing it to be smaller, we also needed something that was going to have very low memory usage.

If we hadn't already said that a VM or a container wouldn't work for our situation, this would completely knock them out of the running. We needed something with incredibly low memory usage; just out of the box, table stakes, VMs and containers were simply not going to fit our needs. This is because the edge is not a large place. Even if we think about the infrastructure edge, or even potentially something like the device edge, they share this very similar situation of being a deeply constrained environment.

The other thing that we realized was that we were going to need a lot of context switching. We wanted to do a serverless offering, which means that instead of renting out a whole spot on the edge, you rent out just the time that you need to run your application. We needed to be able to spin things up very quickly, run them, and then shut them down. Context switching in a VM or a container can take a very long time, so we needed something that could be really quick.

Each server only has to run the requests local to it. Your application is deployed all over the world, but you only need to run it in New York City when someone from New York City asks for it, and similarly in South Africa, you only need to run it there. We're constantly spinning up and spinning down these bits of computation. That is really tough to do efficiently in a VM or a container.

Then finally, because we need to do all of this context switching, we desperately need super short startup times. If you look at a traditional offering, you're looking at around 10 seconds for a VM or 500 milliseconds for a container. We needed less than five milliseconds. What the heck was going to solve this problem? Now, obviously, I'm up here talking on stage, so I'm not going to say, "And there was no solution." That'd be an awful talk. We did figure something out. Part of the reason we needed that speed, again, was to kick people out and start them back up quickly.

This use case, it turned out, wasn't that weird. It does sound like we have some pretty hard constraints, but other situations also needed something like this. One would be an API platform: if you're accepting any type of third-party code, you're going to end up running client code directly on a server, so you have similar constraints. Additionally, Big Data processing also shares these constraints.

Sometimes when your data is "really big," you want to be able to run code where that data lives, and you're going to run into a similar environment. Then finally, and most interestingly, web browsers are very similar to this use case, where you need to run code from visited sites. You need to be able to do it quickly because there's a user sitting right there. Who knows what type of machine you're on, but you can't be taking up too much memory. It was a very similar problem.

What we decided, at the end of the day, and how I started this talk, is that web browsers are super awesome. Server-side technologies like VMs and containers were just really too slow for our use case. What we realized is that we had been overlooking the power of the other "containers," the power of client-side tech, AKA browser engines.

If you think about a browser, browsers are optimized for all sorts of things that would work perfectly for us. They need to be very small downloads. They need to start up quick. People are going to have all sorts of tabs; you have to remember iframes. If you're like me, you may have around 100 tabs open, and each one of those tabs has at least three Facebook Like iframes stuck in there. You're really running so many processes at any given time. Then finally, what you really need in a browser is secure isolation, because web browsers have been the most hostile security environment for quite some time.

This is a quote from Kenton Varda, who's an engineer on my team and the original author of the architecture I'm about to explain to you. It's about a fantastic technology called V8, and I like to troll all my V8 friends by pointing out that this is not the real logo; it is this, but V8 juice is also delicious. Within the V8 JavaScript engine, there is a class, and I mean class in the programming construct sense, called a v8::Isolate. This is the concept that fundamentally drives the entire architecture of the Cloudflare Workers platform.

What I like to say is that the way we got this to work, with all of the constraints we had, was to create an architecture that was just a little bit more communist. If you don't share those politics, you should still like the platform. At the end of the day, what we figured out was that we needed to share a lot more things than a lot of traditional architectures allow you to. If we take a look for a second at this diagram, we're comparing VMs, containers, and isolates.

Here we have the set of things that you end up having on a server. You have some type of hardware, potentially virtualized hardware, an operating system, a language runtime, some libraries, and an application. Most of the time, when you are deploying an application, how many people need to have that operating system access, where you're going to be doing something important with it and it really matters to you? How many people are doing super Ubuntu-specific stuff on their deploys? None. Having that available to you, having that control, and having it be just yours and not shared with something else is completely unnecessary. It's a type of overhead that you simply don't need.

Similarly, when you take a look at containers, containers allow you to share that operating system level, but now you're still getting your very own language runtime. How many people depend on having their own independent language runtime to run their application? There are some, it's true, but not a lot. Fundamentally, what isolates do is say, "You are going to bring your application, and maybe some uncommon libraries, something weird, some NPM package that you wrote yourself, but then we're just going to give you web platform APIs, a JavaScript runtime, an operating system, and hardware for free. The trick is you can't edit those. Those are shared amongst all the applications on the server this is running on."

This is a diagram comparing a virtual machine and our isolate model. What you'll see here is that, at the end of the day, given the constraints that we had, even spinning up a new process for each bit of user code was just too much overhead. What we've been able to do with V8 isolates is run bunches of different pieces of user code in a single process, instead of having a one-to-one mapping of an application to a process. How many people are starting to get nervous about this architecture? Yes, that's fair. We'll talk about that.

Before I dive into, "Cool, you did this, but should you have?", let me explain a little bit more about how this works: we provide a JS runtime with web APIs. It's built on top of the ServiceWorker API and the Fetch API, so we tried very hard to be standards-based. These are all based on the standardized open web platform APIs.

To give a sense of how this looks, this is the online editor we have. You don't have to use the online editor; you can use your own thing. You can see here we have this classic handleRequest function, which takes a request, and you can set up an event listener to respond with what you want. At the end of the day, this is the classic "Hello World!", and it's very close to an application that you can run. You can add all different types of event listeners and do all sorts of different activities, as you'll see in some of the demos today, but this is generally what the application can look like.
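
The editor contents aren't reproduced in the transcript, but the classic Workers "Hello World!" she's describing looks roughly like this:

    // Register a fetch handler (the ServiceWorker-style API) and
    // respond to every request with a generated response.
    addEventListener('fetch', event => {
      event.respondWith(handleRequest(event.request));
    });

    async function handleRequest(request) {
      return new Response('Hello worker!', {
        headers: { 'content-type': 'text/plain' },
      });
    }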

One of the other cool things, because it is a pet interest of mine, I'll just share. What we did is basically take the Chrome browser's engine and put it on the server, so it's very JavaScript-based. If JavaScript is not your type of fun, we get WebAssembly out of the box because it's supported by the V8 engine. If you want to write Rust and have it be a Cloudflare Worker, you can, because Rust compiles well to WebAssembly. I'll do a demo of that in a second.

You might be sort of smelling right now that this is actually a lot like an operating system. If you're curious about diving into all of the interesting ways that we've turned V8 into an operating system, there's an excellent talk called "Fine-Grained Sandboxing with V8 isolates," by Kenton Varda, given at QCon London, so shout out to QCon.

But Is This a Good Idea?

I showed you the architecture, and everyone was, "You're running multiple apps within a single process; I have security concerns." The first thing I can share is that, as mentioned, this is based on browser technology, and browsers are basically the harshest security environment that exists on earth today. If you feel safe browsing the web and opening up different tabs without them accessing your computer, we get all of that sandboxing work for free. You can note that the class name in the architecture we're using is v8::Isolate, as in isolation. There are all sorts of other concerns that can come up in an architecture like this, particularly when you have bunches of user code all sharing a single process.

There's a Spectre haunting this architecture. Sorry, it's a bad pun. How many people here are familiar with the Spectre and Meltdown attacks? A lot of people are very concerned about this. I think that it is a fascinating attack vector, and it's something that, as users of the V8 engine, we have to worry about, and the V8 team has to worry about. We've made a couple of mitigations to make sure that this can't happen. It would be a whole talk to dive into what Spectre means.

One of the things you can think about with Spectre is that you want to basically eliminate any type of timing, because timing is what drives these attacks. We have not given you the full V8 engine when you're using a worker. We've removed all local timers, so you cannot use those. If you call Date.now(), you'll just keep getting the same time. We've also removed local concurrency, because local concurrency is basically a timer in disguise.
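
A small sketch of the behavior she's describing (not code from the talk): inside a Worker, the clock only advances on I/O, so CPU-bound work cannot be timed.

    // Date.now() is frozen during CPU-bound work in a Worker,
    // which removes the high-resolution timer Spectre relies on.
    const before = Date.now();
    for (let i = 0; i < 1e7; i++) {
      // busy loop: pure computation, no I/O
    }
    const after = Date.now();
    // before === after here; time only moves forward across I/O,
    // such as an awaited fetch().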

By removing these, we've really reduced the attack surface for Spectre. The best thing about our architecture, we think, is that we have a ton of observation and monitoring. When we see things acting strange, we have a ton of freedom to reschedule those things: we are able to take weirdly behaving user workers and isolate them to their own process so that we can watch them across our ecosystem. We've done a lot of work to mitigate the types of effects that you might be afraid of seeing.

Developer Experience

Let's talk about how you can use it. I am an engineer at Cloudflare. One of the first things I did when I joined was I created a team called developer experience. If what I care about is accessibility, and the reason I want you to use this is so that you can build web applications that are more accessible, it would be really silly of me to then just build something that was absolutely impossible to use. We've invested a lot of work in making this something that is pleasant to use, so I care a ton about developer experience.

We now have a CLI tool called Wrangler. How many people recognize that crab? For folks who don't, the unofficial mascot of Rust is Ferris the crab, from "ferrous," meaning of or pertaining to iron. It is a crab because the collective noun for Rust programmers is Rustaceans, based on crustacean. The memes will continue until morale improves. This is the way you can start working with Cloudflare Workers today.

I named it Wrangler because I had just moved to Texas and everybody was talking about workers, and we kept saying, "We've got to wrangle the workers," and I was, "That's real Texas flavor. I'm going to call it that." This picture was also tweeted onto my timeline around that same time; instead of saying yee-haw, it said yee-claw, for the crab. If you want to download this tool, it's free. It's on NPM. You can also download it using Cargo. We use the Rust mascot because it is written in Rust, because I love Rust. The NPM install is a really fast download, because it just fetches a precompiled binary from GitHub releases; if you install it with Cargo, you have to wait for it to compile. It's really nice.

Just to give you a small tour before I hop into demos, this is what it would take to do a "Hello World!" What Cloudflare historically did is let you register what we call a "zone," which means a domain name (I'd never heard the word zone before I joined Cloudflare), and that would be your origin. Here, we released something called the workers.dev platform, for when you don't have a domain name and you don't want some sort of origin server.

If you just want to write cool apps that only exist on the edge, without a domain name, you can just do that. You can register a subdomain with wrangler subdomain. Here's an example where we're running wrangler generate, a command which pulls down templates for you (I'll demo that in a second), reserving the subdomain "world," and then running wrangler publish, which publishes a "Hello World!" to that URL. We'll hop into that in a second. I show you this just so that you can understand how simple we've tried to make it.
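
Reconstructed from her description, the slide's flow is roughly this (the app and subdomain names are illustrative):

    $ wrangler generate my-app     # pull down a starter template
    $ cd my-app
    $ wrangler subdomain world     # reserve world.workers.dev
    $ wrangler publish             # deploy to my-app.world.workers.dev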

Demo

I have a couple of homespun demos here, but let's start as if I'm just starting. I have installed Wrangler, I'll type it like that. We have completely revamped our docs and we have this thing called a Template Gallery. I used to work at NPM, and I'm obsessed with package management. I just turned an example section of docs into a programmable searchable package manager. Here we have the Template Gallery, and what I'm going to do is I'm just going to grab this URL here. It has the command in it, and we're just going to generate an app. I run that.

You can create your own template for this; it's just any GitHub repo. You can pass a GitHub URL to it, and it will generate, doing some substitution for you. What we can see here is we've got something called my-app. I'll go into my-app. This is generally what it looks like. Every project has a manifest, and nobody likes any of the languages manifests are written in. We picked TOML because we're Rust developers. How many people would have preferred YAML? JSON? They're all bad.
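
The manifest itself isn't captured in the transcript, but a wrangler.toml from that era looks roughly like this (values are placeholders):

    name = "my-app"
    type = "javascript"
    account_id = "YOUR_ACCOUNT_ID"  # filled in from the dashboard
    workers_dev = true              # publish to workers.dev, no zone needed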

If we take a look at the wrangler.toml here, this is just generally what it looks like. I'm going to have to throw my account ID in there, but then we should be good to go. Let me do that; I'm showing everybody my live chat open, so you can see that I was testing this all out. Let's go to the dash. I'll grab my account ID and paste it in there, and I should be good to go. We have our worker in here, and this is just going to respond with the words "hello worker." I'll just run wrangler preview and pop that up. Here we just have something that says, "Hello worker!"

I could have run wrangler preview --watch, and it would do live reload, but what I'll also show is wrangler publish. Now my app is available in all 194 of those data centers, in that short a time. This is obviously just a "Hello World!", but you can do this with all sorts of complex applications. What I'll show you now is doing this with a static site. How many people here have some sort of static-site element in the product you work on? That's fewer than I expected. I've often found that everyone's got one, even if it's some weird side project, probably a GitHub Pages site. We'll close out of that. What I'll do is run wrangler generate --site. This is just going to make a site for me right now.

If I go into that directory, we'll see that it's scaffolded a bunch of things for me. I'm going to pop open my wrangler.toml and throw my account ID in there, and I should be ready to go. I can run wrangler preview. Here's my static site, and then I can also just run wrangler publish. You can check this site out on your phone right now. Now, this is a pretty boring site; why don't we try something that's a little bit more complicated? I have pre-baked the Workers marketing site, so you can see that deployed here. We dogfood the heck out of everything we build, so both our documentation and our marketing sites actually run as static sites served from a worker. We'll check that out right now.
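
For a Workers Sites project like this one, the scaffolded wrangler.toml adds a [site] section along these lines (directory names are the template defaults, from memory):

    [site]
    bucket = "./public"           # static assets to upload to the edge
    entry-point = "workers-site"  # the worker code that serves them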

If I go into workers.cloudflare.com, you can see in here we've got some node modules, a whole bunch of CSS, some vendored stuff, and a wrangler.toml. I've already put my account ID in there, but let's try this out. I can just run wrangler publish, and this is going to push it up. We can see we have the full site right here. This is not workers.cloudflare.com; it's at my weird domain. The real question, since I've been talking about performance, is how fast it is, and testing that live is actually a very tenuous and dangerous thing to do. How many people here use the tool called Lighthouse? If you're not familiar with it, you should be. It is a free tool from Google that is absolutely excellent at evaluating the performance of your website.

What I'm going to do is run Lighthouse. It's a Chrome extension, so you can see I have it right here. I will generate the report, and it's going to do all sorts of stuff. As of yesterday, when I was at Chrome Dev Summit, they said the average Lighthouse score is 48. This is out of the box, having done nothing, no special lazy loading. You can see here that we're not preconnecting to any sort of origin, we're not doing any sort of fancy JavaScript, and just out of the box, it's a 98. That's pretty freaking awesome.

Anybody here want to say what their current Lighthouse score is? This is how you know 98 is very good. The scores are often very low, this can often plague people. By hosting your static content just right on the edge, you're able to make this incredibly fast. The awesome thing about this is there is no origin site here at all. There's no server that it is hitting. This is all just existing on those 194 data centers, which is pretty freaking awesome.

Last but not least, how many people here are aware of Jamstack? The way you can understand Jamstack, and perhaps that's another talk I should give some other time, is JavaScript, APIs, Markup. It's a rebranding of static site generators as apps. I have one here for our workers-docs. If I hop into workers-docs, we've got this in here too; we've got our wrangler.toml. You can see this is actually a Hugo site, built using Hugo, which is written in Go. I can run wrangler publish here, and this will get published, again, all over the world, and we can just run Generate report. If you've used Hugo, it doesn't really give you any type of performance optimization out of the box. Here we go, a 93. That's super awesome.

These are actually very high-performance static sites that give you a ton of freedom to build whatever you want. If you have a directory of static assets that you want to serve somewhere, you can now just deploy it to the edge and have it be incredibly close to your end-users. Now, finally, a lot of you may be saying, "If it's just a static site and you throw a CDN in front of it, isn't this just a more complicated way of doing that?" The answer is yes. One of the cool things about doing it in this complicated way is that if you want to add a bit of dynamism to your app, you now have the ability to do so right there.

Inside the Cloudflare Workers JS API, we make available something called HTMLRewriter. You can fundamentally understand it as jQuery, except now it runs on the edge instead of in the browser. You can be streaming HTML off of the edge, rewriting it, and sending it to your end-users, doing all of that computation on the edge. We've been blurring the lines between client and server; we can now also start blurring the lines between static and dynamic websites, and adding that dynamism does not have to happen on some sort of origin or server.
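
A minimal sketch of what that can look like (the selector and handler here are illustrative; HTMLRewriter itself is provided by the Workers runtime):

    // Fetch the upstream page and rewrite it as it streams through
    // the edge, without buffering the whole document in memory.
    addEventListener('fetch', event => {
      event.respondWith(handleRequest(event.request));
    });

    class TitleRewriter {
      element(element) {
        element.setInnerContent('Rewritten on the edge');
      }
    }

    async function handleRequest(request) {
      const response = await fetch(request);
      return new HTMLRewriter()
        .on('h1', new TitleRewriter())
        .transform(response);
    }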

Wrap Up

Just to quickly wrap up: as I said, all the sites that we run also run on our platform. If you're curious, we have made it dead easy to deploy your site if you use GitHub. You can set this up with a GitHub Action, and it will just automatically deploy to Workers on a merge to master, or whatever you prefer. There is a free tier of this. If you want to try this out today: all of the things I was just benchmarking, where you'll note I just put my account ID in and used workers.dev, that is all on the free tier. Performance actually gets slightly better, for reasons that are complicated to explain, if you put it on an origin server. The free tier has excellent performance, and you get, I think, a million requests a week. It's really awesome. I would encourage you to try it. The GitHub Action makes things incredibly easy.
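
A deploy workflow using Cloudflare's wrangler GitHub Action might look roughly like this (the action version and secret name are assumptions, not from the talk):

    # Deploy the worker on every push to master.
    name: Deploy
    on:
      push:
        branches: [master]
    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - uses: cloudflare/wrangler-action@1.3.0
            with:
              apiToken: ${{ secrets.CF_API_TOKEN }}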

As I said, you can use this with any sort of static site generator you like. These are just some of the benchmarks against some of the competitors in the static site world for our performance. Again, small numbers are very small, both in the U.S. and across the globe.

Please try out Workers Sites. You can get more information about this. I've been told that you should use the QR code, but you can also just go to this URL. I want to say, go forth and build awesome things. Last but not least, what I do want to share is that everybody in this room builds some sort of product. This was a web-programming-and-infrastructure talk, but I think we often confuse ourselves about what we are actually building, or what our product is. One of the things I want to share is this visualization of an idea from Kathy Sierra, where you have a person who's a customer, here Mario, and then this fire flower, which is what a lot of people think their product is. At the end of the day, to believe that your product is the fire flower is actually a mistake.

What your product actually is, is an awesome person who can do rad things. As you think about picking technologies, one of the reasons I really love serverless, and edge, and developer tools that get these things out of the way, is that you should be focusing on making your users awesome fire Marios. You shouldn't have to be worrying about how you're going to deploy this application. Focus on the things that matter to you and go forth and make awesome fire Marios.

 


 

Recorded at:

Mar 23, 2020
