
Build Node.js APIs Using Serverless


Summary

Simona Cotin talks about how to migrate an API of an existing app to Azure Functions, and how to use Visual Studio Code and the Azure Functions extension to speed up work.

Bio

Simona Cotin is a Cloud Developer Advocate at Microsoft. She spends most of her time tinkering with JavaScript in the cloud and sharing her experience with other developers at events. She engages with the web community to help create a great developer experience with Azure, loves shipping code to production, and has built network data analytics platforms using Angular, TypeScript, React, and Node.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Cotin: I'm Simona Cotin and I work as a developer advocate for Microsoft. If any of you have questions about Azure, JavaScript, GraphQL, or Serverless, do make sure to reach out. You can find me on Twitter @simona_cotin or by email, simona.cotin@microsoft.com.

As you can see, I have quite a treat for you, because today we're not only going to talk about building web applications and APIs using Serverless, but we're also going to add GraphQL to the mix so that it's more interesting. As I was preparing this talk, I was like, "I love GraphQL, I love Serverless. I want to find a way to put them together. I have to figure out what they have in common. What is it that they share?" One of the very first things that came to mind is that they're both on the hype train. Both Serverless and GraphQL are super popular at the moment. Probably at least one of you is writing a community or open-source library, either for GraphQL or for Serverless.

In fact, if we think of GraphQL, when Facebook first released it, they only released the specification and a reference implementation in JavaScript, and that was back in 2015. They imagined that GraphQL probably wasn't going to become super popular: we'll just keep using this JavaScript library for a while, and maybe someone will start implementing the specification. It turns out that within a year they had probably 10 implementations, and I'm just making up numbers right now because I don't have the metrics. But they basically conquered the world of web APIs, and APIs in general. Now, GraphQL has implementations for JavaScript, for Java, for .NET, even Scala. I think the Scala one is actually my favorite because it's called Sangria. How many of you like sangria? And there are tons of other implementations. So both GraphQL and Serverless share the hype train. We absolutely love both of them.

This is a tweet from Nader Dabit where he says that his predictions for 2019 are that Serverless is going to continue to be super popular, and so is GraphQL; it's going to continue to see more and more adoption. These are two technologies that probably all of us should keep learning about. Third, we're probably going to start using more and more third-party services, because instead of writing our own stuff and reinventing the wheel, we should leverage other people's work.

Another thing that they have in common is that they're both loved by JavaScript developers. If we think about Serverless, with Serverless we build backend applications. What does that mean? We can build them with JavaScript. That means that as JavaScript developers, as frontend developers, we are actually empowered to go to the other side and implement our own logic and our own applications. The beauty of Serverless is that we don't have to take care of any of that plumbing: we don't have to write configuration, we don't have to know a lot about infrastructure, we don't have to know how to configure web servers; we just have to write code and that's the end of it.

We can reuse the skills we already have as JavaScript developers, as frontend developers, in order to power our backends. Serverless empowers us as frontend developers. That's exactly what GraphQL does as well, because GraphQL gives us the flexibility, as frontend developers, to ask only for the data that we need, and it saves us from all the work of parsing out the data that we don't need. That's just one tiny example. But basically, both GraphQL and Serverless empower us as JavaScript developers, and that's why we love them.

They also share a timeline. As I mentioned earlier, GraphQL was released as a specification in 2015, and Lambda was published at the end of 2014. Even though the concept of Serverless had probably been used previously, and GraphQL had been used inside Facebook before that, both of them became public around the same time. I think that's important because it gives us a sense of a trend, of what was happening at a certain time, and there's some magic as well. That's what we're going to look into: the magic ingredient, the magic common thing, that GraphQL and Serverless share.

GraphQL

Let's start with GraphQL. The Facebook team came up with GraphQL around 2012. They had bet on the web, and then they realized that the web was not going to help them at that time: they needed to rewrite their application as a mobile application for their users. They looked at what they had to do, and they realized that they needed to build something, a data-fetching API powerful enough to describe all of Facebook. I'll let that sink in. That's a lot, describing all of Facebook.

Their challenge was that previously they had built this entire ecosystem with really complex APIs. They had only one client using those APIs, but as soon as they moved to creating mobile applications, they realized that those APIs weren't perfect. They didn't give them the data they needed to display in a mobile application, because when we build applications for the web and for mobile, we might think of them differently. Sometimes, for a web application, we might be able to send all of the data that we have in our database, which is something that Coursera used to do when they first started building their web application. They were sending their entire course catalog to the frontend, until they realized that they were sending two megabytes over the wire, and that wasn't great.

They realized that when building web applications, sending the right data to the client is very important in order to build performant applications, and in order not to waste people's mobile data. They had to think in a different way. Let's go back to what we would require if we were to build this webpage. For those of you who don't have Facebook: does anyone here not have Facebook? All right, that's great. Awesome. I'm happy to see so many hands. I'm going to explain what Facebook looks like.

At the center of the page we have the posts, my timeline; that's what my friends are creating. On the left-hand side, you have things that are related to my friends, things like their birthdays and friend suggestions. Then on the right-hand side, you'll see things that are related to me, like events that I've been invited to, a link to my Messenger, and a couple of other things. If we were to think of this from a REST perspective, how would we actually send the data to the client so that it can populate that type of page? What kind of requests do we have to make in order to populate the data on that page?

We would probably need to send data about my user. Maybe I would make a request to Facebook, something like /user, and then send the ID that I need. I might also request data about the events. Remember, we had two different sections of the page where I displayed information about events, or friend suggestions, or friends' birthdays. These are just a couple of endpoints that I've invented for the purpose of this particular example. If we dive into the events section, which I mentioned earlier: normally, with a REST API, given a URL, given a resource, we retrieve all of the information for that particular resource. In this case, let's say we were retrieving information about QCon; the organizer is Richard, and one of the attendees is Matteo, whose talk I hope some of you attended. He created Fastify and a couple of other Node.js libraries.
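A response from an events endpoint like that might look roughly like the following hypothetical payload, even though the page only needs the number of events:

```json
{
  "id": 7,
  "name": "QCon London",
  "location": "London, UK",
  "organizer": "Richard",
  "attendees": ["Matteo", "Simona"]
}
```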

Look at this, we are retrieving a lot of information: the name of the event, the location, the organizer, the attendees. But if we look at our page, we only need the number of events. How many of you have been in this situation where you need to display one single field for a certain type of object, but then you get all of the information when you make that request? Yes. That's awful, right? The way I like to think of it, and it's very visual, is that especially for people who are on their phone opening our web application, we're basically stealing money from them, because we're sending all this data over the wire and it accumulates over time. If they're accessing our web page on a daily basis, and we are sending all of this information that they don't need, and they're on a plan where they only get one gig of data per month, we are wasting their money and their data because we are lazy as developers. I think we can do better. We definitely can, and we should, do better. We should be respectful of people's data, money, and experience.

Underfetch or New endpoint

This pattern is known as overfetching. Whenever we retrieve more data than we actually need in order to display a certain webpage, we are overfetching data, and that's one option that we have. When we're building RESTful APIs, most of the time we end up overfetching data, and then, in our client applications, we end up parsing out all of the data that we don't need in different scenarios. There are other options: we could add filters at the end of the URL, which happens very often, or we could sync up with different teams and create a new endpoint or update an existing one.

But in most organizations, creating or updating an endpoint means your frontend team needs to communicate with your backend team, or they need to create a JIRA task, talk with the product teams, and prioritize: "I need to deliver this frontend feature. Could you please help me?" Then what was only one field turns into two weeks, or a month, or it might never make it into the final project. I think that happens to almost everyone in this room as well. So overfetching, or creating or updating an endpoint: these are our options with a RESTful API.

Let's have a look at a different example. Right here, I have a friend suggestion. Facebook is telling me that I should become friends with Golnaz, who's a workmate of mine. Not sure about work and friends. Love them. If we were to think from a REST perspective, we have this new URL where we can get a list of friend suggestions. This is an array with a list of objects: my friend's name, their profile picture, a list of mutual friends. And here we come to mutual friends, and we realize that we need to retrieve a new list of items. This is what's known as the N+1 problem. You might have heard about it in the context of database systems as well, where you retrieve one item and then it turns out that you actually have to make N more requests in order to get all the data that you need for your application.
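To make that concrete, a response from such an endpoint might have roughly this hypothetical shape (this is not Facebook's actual API):

```json
{
  "suggestions": [
    {
      "name": "Golnaz",
      "profilePicture": "https://example.com/golnaz.jpg",
      "mutualFriends": ["/friends/17", "/friends/42", "/friends/108"]
    }
  ]
}
```

Each entry in mutualFriends is just a reference to another resource, so to display anything about those mutual friends beyond a raw count we would have to issue one extra request per friend: the N extra requests on top of the first one.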

Remember, here we needed the name of the person, but also the number of mutual friends. Our options here are either underfetching data (this is known as underfetching, where you don't retrieve enough information to display a logical unit of your UI), or, just like in the other case, updating an existing endpoint or creating a new one, and constantly communicating with the other teams involved.

"Why is this important?" you might ask. If you care about web performance, if you care about performance in general: this is a table from a book called "High Performance Browser Networking", from the HTTP chapter, and I highly recommend it. It talks about how users experience our web application based on how long they have to wait for things to load on a page. You can see that if they have to wait anywhere between 300 milliseconds and one second, their perception is that the machine is working. This is fine; I'm going to stick with this page, I love it, I need to get my tasks done, and I'm going to stay here.

Whenever they have to wait more than one second for something to happen on a webpage, they're going to get bored. They're probably going to context switch, because we live in a world of distractions, and they might not have the patience to wait. Finally, if they have to wait 10 seconds, they're not going to be on our page anymore. Both overfetching and underfetching cost us HTTP requests. When we talk about underfetching, when we have to make all those N+1 requests, we have to make extra round trips, and HTTP requests are where we spend most of the time when communicating from our client to our backend. Performance is important if we care about our users.

This is another quote from the same book. "The fastest network request is a request not made." It's almost like the bug-free code is the one that doesn't exist, zero lines of code. This is from the author, Ilya Grigorik. I highly recommend the book.

Then, what if we could use a system where we could define our query? This is a query where I want to retrieve information about the user with this particular ID, and I want their name; the events, with only the count, because that was the only thing we had to display on that webpage; and friend suggestions, with only their name and the count of mutual friends. What if we could make a query like this to our API, and the result would look like what you're seeing on the right-hand side?

That's a JSON object where I get exactly the information that I requested. It's almost like a mirror of the request that I made. This is what GraphQL helps us with. It enables us to write queries where we request exactly the data that we need to display in our webpage, and that's what we get in return.
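Roughly, such a query might look like the sketch below (the field names are illustrative, not Facebook's real schema); the JSON response mirrors its shape exactly:

```graphql
{
  user(id: "1") {
    name
    events {
      count
    }
    friendSuggestions {
      name
      mutualFriends {
        count
      }
    }
  }
}
```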

When we build GraphQL APIs, we start by getting all the teams together and defining a contract, defining what is called the schema. I would say it's schema-driven development, because we define the types and operations that we want to have available in a certain API. That's the first thing that we write down. This schema is strongly typed. If we look at how we would start, here I would define a People type, using the keyword type, and these are some of the properties of the object. You can see that we have the ID type; that's a scalar type, like a primitive type. The name is of type String, which is also a scalar type in GraphQL. URL is not defined by default in GraphQL, but we can also define our own custom scalar types. This looks a lot like JSON, but there are no commas at the end of the lines. So it's a custom DSL.

We can also have nested types. Here, we can see that our Team type has a people field, which contains an array of People. We define arrays by using square brackets. One thing that you might have asked yourself is: what's that exclamation mark? It means the field is required; we always need to return that field as part of our object. Then there are two other things that are important when working with GraphQL, and those are queries and mutations. Queries are basically the equivalent of the GET operation in REST; that's how we read data from our API. Mutations are how we write, update, or delete data. Here, I'm just defining those two operations. I have a list of queries; in this case there's just one, called teams, and it returns an array of teams. Then the mutation is incrementPoints.

It takes an ID and returns an object of type Team. But this is just a contract, just the interface. The next step in this journey is actually defining where we retrieve that data from, because the definition alone won't go and retrieve that data for us. In this case, I am retrieving the data from two existing endpoints. That's the beauty of GraphQL: you can either use it in conjunction with your existing REST APIs, adding GraphQL as a thin API layer, or you can go ahead and query your database directly. GraphQL in itself doesn't hold any data, so it has to go and retrieve that data from different data sources. Those can be existing APIs or databases.
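Putting those pieces together, the contract described here might look roughly like this in GraphQL's schema definition language; the exact field names are approximations of what was on the slide:

```graphql
scalar URL

type People {
  id: ID!
  name: String!
  url: URL
}

type Team {
  id: ID!
  name: String!
  points: Int
  people: [People!]
}

type Query {
  teams: [Team]
}

type Mutation {
  incrementPoints(id: ID!): Team
}
```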

It also comes with really cool tooling. In this case, you're looking at a tool called GraphiQL. That's basically a UI that enables us to query data. If you think of GraphQL, GraphQL is a graph query language; that's what it stands for. This is probably not the best analogy, but if you think of your SQL database, you always have a UI where you can test your queries, and then you take those queries and implement them in your code. This is exactly what you can do with GraphiQL. You can run your queries, query your API, and then take that query and add it to your frontend.

But what you see here is not just the query editor. We also have access to real-time documentation. GraphiQL uses the schema that we've defined on the backend; it uses introspection to generate that documentation, so you have access to the operations that are available to you and the types of the objects being returned. You have all the information right here. Your API is documented; you don't need OpenAPI or Swagger anymore. It's updated in real time: if you modify these operations on the backend, the documentation will be updated without you having to do anything.
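Under the hood, GraphiQL gets this information by sending standard introspection queries to the endpoint. For example, a minimal introspection query that lists every type exposed by the schema looks like this:

```graphql
{
  __schema {
    types {
      name
      kind
    }
  }
}
```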

Here, you can see how I can execute queries. Not only can I execute those queries, but I also have auto-completion: I started typing "points" and it immediately showed me which options are available. I execute, and I get the data on the right-hand side. We also have error checking. In this case, I typed incrementPoint, and GraphiQL said that this is actually not correct: did you mean incrementPoints?

So why do we want to use GraphQL? Because of performance: it allows us to build applications that are much more performant. It also gives us flexibility, so we can request exactly the data that we need for our application. And it has great tooling. GraphiQL is just one example of the tooling available for GraphQL; there are other libraries that help you define your schemas, work with your database model, and much more.

Serverless

The next chapter is Serverless, which is why all of you are here in this room. This is a quote from Steve Jobs that I really love: "The line of code that's the fastest to write, that never breaks, that doesn't need maintenance, is the line you never had to write." True? Yes, definitely. This is what Serverless enables us to do. It enables us not to have to write, and this is a double negation, but basically, when we write Serverless applications, we write code without having to worry about configuration; we never have to worry about how to configure the web server that hosts our code. And most often, Serverless applications also depend on third-party APIs.

It's almost like a different mindset where, instead of reimplementing a lot of the services that are already out there, think about authentication. Who here loves implementing authentication? There are only two people in this entire room who don't hate OAuth and dealing with user management. What about security threats? Many times, when we're implementing authentication in our systems, that's the very first thing we need to consider. Is my system secure enough? Am I protected, or am I going to end up in all those newsletters that are just waiting for a business to leak its users' data? That happens very often, and it's because we implement our own systems that are not protected.

One of the things that's great about Serverless is that we only focus on the code that we write. That means that the web server is provided and provisioned by the cloud provider. The server itself where our code runs is also provided and provisioned by our cloud, which means they're also in charge of all the security patches. We don't have to worry about any of that. Some of you might remember the huge security breach that we had, I think, last year or two years ago at Equifax. It only happened because someone didn't update a dependency. That was terrible.

Well, in this case, the cloud provider won't update our npm packages, but they will keep patching the operating system where our code is running. Think about Spectre and Meltdown. There was a wonderful talk at ServerlessDays London last year from David Smith of DigitalOcean, at the time a vice president there, and he went through the timeline of how they had to upgrade their servers in order to patch for that security threat. His whole talk is about how, if all their customers had been using Serverless, they would have had much better uptime, and this whole security problem wouldn't have been as big. I recommend you watch it.

Demo

This is the Hello World of Serverless. I do have a couple of videos here, but what I want to do is show you how you can get started by doing some live demos. Do you like that, live demos? Do you think the demo gods will be with us? Awesome. First of all, I want you all to see everything I'm doing here, so I'm going to switch to a better resolution. This is much better. Wonderful. When building Serverless applications, in this case I'm going to focus on Azure Functions, because I work for Microsoft and I've worked with Azure Functions a lot, so that's where my expertise is, but I'm sure you can do a lot of these things with other cloud providers as well.

One of the best things that has happened to Serverless is VS Code and the Azure Functions extension. In my case, I have the Azure Functions CLI installed locally. Just to note, Azure Functions is open source, which means that the runtime running on my machine is actually the same runtime we're using in the cloud. That empowers us to debug applications locally: we don't have to work against a container with obfuscated code or anything like that. The runtime we have locally is the real thing.

Then I mentioned the Azure Functions extension. What I have here in VS Code is just an empty folder open. We're going to go ahead and create a new function right now. In order to do that, I'm going to go to this Azure icon that basically gives me access to a lot of the Azure extensions that have been built to integrate VS Code with Azure. Then I have access here to the Azure Functions extension. We can see here that there's a couple of buttons that I have there. The first one enables me to create a new project. It's going to ask me, "Where do you want to create that project?" "In the current folder." "What language do you want to use for this particular project?"

I can create JavaScript functions, or TypeScript, C#, Python, and Java, some of them there in preview, but I love JavaScript, so we're going to use JavaScript. The project is almost like a folder where we can group multiple functions that share files: if we share code between functions, this is where we would add it. The functions also scale together; they're deployed in the same container and, depending on usage, there will be many instances of it deployed in the cloud, if that makes sense.

What we have here is a VS Code configuration; this is what we use to debug our functions locally. Then I also have local.settings.json, which is where I can store the secrets that I'm using locally.

Imagine you're connecting to a database from your function and you want to define a connection string: this is where you would save it. Another thing that I have here is proxies.json. This is an extension of Azure Functions where you can define very thin API endpoints and group multiple projects under the same one, but for now this is just boilerplate.

The next thing that I want to do is actually create the function itself, the code, and functions can be of different types. Serverless is code that reacts to events. When we're writing Serverless code, we're writing code that reacts to events, and there are different types of events that we can listen to. We can listen to HTTP requests, or to files being uploaded to a storage account, or maybe a row being updated in a database. Different cloud providers have different options. In my case, you're going to see that I create an HTTP trigger, because I want to listen to HTTP requests; as I mentioned earlier, you can also listen to database changes. So, an HTTP trigger, and then I'm going to call this GraphQL API.

This is the name of the function. I'm going to create an anonymous function, which means that my URL will be publicly available for anyone who has it, and it can be called. I created this new function, and a new folder has been created for me with the same name as the function, in this case GraphQL API. Then I have this function.json file, which is a description of my function. We can see here some of the options that I selected earlier, like that I want my URL to be publicly available.

This is a function of type HTTP trigger; it's going to listen to HTTP requests. This is the configuration that the Azure runtime, the load balancer, reads in order to understand which functions we're deploying and what kinds of events they're listening to. It also lists the HTTP methods we're listening for, and this is where I get access to the incoming request object and my outgoing response object, so incoming and outgoing.
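For a JavaScript HTTP trigger, the generated function.json looks roughly like this; the exact file can differ a little between runtime versions:

```json
{
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get", "post"]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```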

Then this is a Hello World function: a JavaScript function that receives two parameters. The first one is the context object; this is what we use to communicate with the Azure Functions runtime. You can see that I'm using the context to log information and to set the response. There are a couple of other operations available, but we can go ahead and run this. I can do that from VS Code by clicking "Play." You can see that I've already added a breakpoint here. Then, in the terminal, we're going to see how the Azure Functions runtime is bootstrapped and starts running. This is where I get the URL that I was talking about earlier.
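For reference, the generated Hello World handler is roughly the standard template below; this is a sketch, not necessarily the exact code on screen:

```javascript
// index.js - the default JavaScript HTTP trigger template (approximately)
module.exports = async function (context, req) {
    // Log to the Functions runtime
    context.log('JavaScript HTTP trigger function processed a request.');

    const name = req.query.name || (req.body && req.body.name);

    if (name) {
        // Setting context.res defines the outgoing HTTP response
        context.res = {
            body: "Hello " + name
        };
    } else {
        context.res = {
            status: 400,
            body: "Please pass a name on the query string or in the request body"
        };
    }
};
```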

Then I clicked on that, and I can see QCon, and I'm already debugging my function. I've triggered an HTTP request, and now I can inspect all this information. I was talking earlier about the context object: I have access here to all of the binding data, the request and the response. Bindings are what we use to read data from different data sources; I mentioned databases and storage accounts earlier, and there's a Twilio binding as well, for example, or SendGrid. Then I can step through this. We can easily debug our applications. That's awesome; that's something you couldn't have done in the past. I think nowadays it's much easier to debug Serverless code with many of the other Serverless providers too, but what I love about this one is that the configuration is generated for me, so I don't have to do absolutely anything. I just create my function using the extension and that's the end of it.

We just looked at GraphQL, so how about we build a GraphQL endpoint now? How does that sound? Awesome. I won't live code it because I'm a very slow typist, but I've created a code snippet here, and you can see a preview of it. This is everything we need to create a simple GraphQL endpoint using Serverless.

The first thing we do here is import graphql. This is the JavaScript package where the GraphQL specification has been implemented, so we need to have that module somewhere. Immediately, you're probably going to say, "Well, actually, you need to install it, and you need a package.json file somewhere to describe these dependencies." Currently, we don't have any package.json, but I'm in my root folder, so I can run npm init there. Does everyone here know what "-y" does? It initializes a package.json with the default values.

Then I run npm install graphql, and I already know I depend on axios as well, so I'm going to install both of them. npm downloads those modules for me and records them in the package.json file. So I'm importing graphql, I'm also importing axios, and then I define the schema that you've seen earlier. I have a Team object that has an ID, a name, and points, and then the operations I mentioned earlier, teams and incrementPoints, and this is where I make a request to an existing API, which is another Serverless function.

Then, in the main function for my Serverless code, I'm basically initializing GraphQL with the type definitions from earlier. I'm sending in the query that comes from our request body, and I'm also sending in what are called the resolvers: the functions that actually retrieve the data. If we have any variables or an operation name, I send those as well. Finally, I set the response body to whatever GraphQL has returned to us. We have installed the dependencies, we have the code in place, so what if I run this right now? Does anyone see any reason why it shouldn't run? No.
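Put together, the whole function is roughly the sketch below. It assumes the graphql and axios packages are installed, and the teams API URLs are placeholders, not the real endpoints from the demo:

```javascript
// GraphQLAPI/index.js - a minimal sketch of a GraphQL endpoint as an Azure Function
const { graphql, buildSchema } = require('graphql');
const axios = require('axios');

// The contract: a Team type plus the teams query and incrementPoints mutation
const schema = buildSchema(`
  type Team {
    id: ID!
    name: String!
    points: Int
  }
  type Query {
    teams: [Team]
  }
  type Mutation {
    incrementPoints(id: ID!): Team
  }
`);

// Resolvers: GraphQL holds no data itself, so we fetch it from an existing API
// (the URLs below are placeholders for the other Serverless functions)
const rootValue = {
  teams: async () => {
    const { data } = await axios.get('https://example.com/api/teams');
    return data;
  },
  incrementPoints: async ({ id }) => {
    const { data } = await axios.post(`https://example.com/api/teams/${id}/increment`);
    return data;
  }
};

module.exports = async function (context, req) {
  const { query, variables, operationName } = req.body || {};
  // graphql-js (v14/v15) positional signature:
  // graphql(schema, source, rootValue, contextValue, variableValues, operationName)
  const result = await graphql(schema, query, rootValue, null, variables, operationName);
  context.res = {
    headers: { 'Content-Type': 'application/json' },
    body: result
  };
};
```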

I think Vlad here can confirm everything looks great. We have a URL here; I'm going to copy and paste it, and I'm going to use Postman to test this query. This is the URL. With GraphQL, we're always making POST requests, and my body is raw text: a JSON object containing the query I want to execute. I already have the GraphiQL interface open here. If I say that I want to execute the teams query and return the ID, I should get some data here.

What you're seeing here is what's known as a cold start. How many of you have heard of cold starts? Half of the room, probably. For the rest of you, a cold start is basically the amount of time a Serverless platform needs to take the code that's stored somewhere in a storage account, create a new instance where that code will run, install the dependencies, and then execute the code. With Serverless, if nobody is calling our API, our code is not deployed anywhere; it's just stored in a storage account. As soon as requests come in, the platform takes that code and deploys as many instances as needed. That's the cold start. Or, if you're an optimist, what would you call it? Glass half full? That would be the warm-up time.

We can see that this is a query that works perfectly fine. So what if I just copy this and paste it here? Obviously, the formatting is not great, so hopefully this works. It cannot find the query. This should work as well. Actually, I have the query already. We are calling our local Serverless endpoint that runs GraphQL, and we just executed this query, retrieving only the ID from our object. Then, if I also want to add points, because everything's a competition, I just add it and it's immediately retrieved for us. Obviously, points don't make sense if we don't have a name, so we're going to request that as well. What you've just seen is that I've implemented a Serverless endpoint that runs GraphQL.
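For reference, the body of that Postman request is just a JSON object with a query field, roughly like this:

```json
{
  "query": "{ teams { id name points } }"
}
```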

The next step is deploying this to the cloud. I've already done this, but know that with the Azure Functions extension, you can deploy either from your IDE, which is probably not great because you don't want to right-click "Deploy." That's what everyone in DevOps says: friends don't let friends right-click "Deploy." We also have a GitHub option, where you can connect your GitHub repository to deploy every single change to the cloud. You can also use Azure Pipelines, and probably very soon GitHub Actions as well, to deploy your code and implement that CI/CD pipeline that everyone loves.

That's everything I wanted to show for the Serverless bit, including the GitHub deploy option. One thing that I wanted to make sure I highlight here is that with Serverless, you've seen me just communicating from GraphQL with existing APIs, but most of the time with Serverless applications, you also read data from databases. Different cloud providers' Serverless platforms have very good integration with those data sources. If you think of Amazon, that's going to be Lambda with DynamoDB; if you think of Azure, that's going to be Azure Functions with Azure Cosmos DB.

Here is how you would create a function that listens to changes in Azure Cosmos DB; this is the UI from the Azure portal. You basically add a new input here, where you select the Azure Cosmos DB option, which is the second row there. Then this ends up in the function.json file, defined just like this, where you specify the type of the trigger, which collection you're reading from, and what the connection string is. Using it takes just a single line of code. The Serverless platform does all the legwork for you to create the connection to the database, based on what you've described in the function.json file.
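As a sketch, a function.json for a Cosmos DB change trigger looks roughly like this; the database, collection, and connection setting names below are placeholders:

```json
{
  "bindings": [
    {
      "type": "cosmosDBTrigger",
      "direction": "in",
      "name": "documents",
      "connectionStringSetting": "MyCosmosDBConnection",
      "databaseName": "MyDatabase",
      "collectionName": "MyCollection",
      "leaseCollectionName": "leases",
      "createLeaseCollectionIfNotExists": true
    }
  ]
}
```

In the handler itself, the changed documents then arrive as an ordinary parameter, something like module.exports = async function (context, documents) { ... }, which is essentially the single line of code you need in order to use the data.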

With Serverless, we can create reusable APIs. Those of you who were in the JAMstack presentation will know that reusable APIs are important for the JAMstack; that's the A in JAM. With Serverless, we also get support in the VS Code development environment, so you can use the same tools that you're used to. We get easy integration with data sources. And you have no servers to manage, which is awesome.

What about Serverless and GraphQL together? When we started writing applications, we started with monoliths, then we probably moved to microservices, so we can imagine a microservice there. Now, when we start learning about Serverless, we start writing individual functions. Initially, it's going to be very shy adoption, where your functions are very backend-oriented: maybe you run a cron job to send emails at midnight every month or so, which is a very good use case for Serverless, or you parse a CSV file that has been uploaded to a storage account and then save that data in a database.

But as you become more and more enthusiastic about Serverless, you end up with something like this: a lot of Serverless endpoints, and this can be madness. If you have that many endpoints, it's hard to keep track of all of them. I think that's where GraphQL can help us, sitting as a thin API layer where you have all of those endpoints documented, and you can access them in real time and see exactly what kinds of operations are supported.

We've already seen the code for this; this is how we initialize GraphQL in Serverless. In our example, we basically created a GraphQL endpoint that was reading data from two different Serverless endpoints, which in turn were reading data from a database. But actually, you could create a backend that reads data from a function, from an existing API that doesn't have to be Serverless, directly from a database, or even from a cache. All of these scenarios are allowed; they're highly encouraged.

Finally, you might see even more benefit when using GraphQL with different clients: reading data and displaying it in a browser or in a mobile application, clients that have different requirements for your API. So, one of the great things about Serverless is easy integration with data sources. With GraphQL, on the other hand, we get an easy abstraction over data sources: remember, we could read data from existing APIs, databases, caches, everything.

With Serverless, we get auto-scaling, whereas with GraphQL, we have a single endpoint; I'm not sure if I mentioned that, but all of your operations are exposed under the same URL. With Serverless, we write less code, while with GraphQL, we make a smaller number of requests. This is my cue to what they have in common, which is amazing developer productivity. You can write performant applications with a smaller amount of code, without wasting your users' time and money. We can achieve more by doing less, and that is one of the mascots of ServerlessDays, a conference that is happening all over the world, including London, probably this July; that's a shameless plug because I'm part of the organizing team.

Thank you so much. Go build something with GraphQL and Serverless. This is a course that I've written. It's on GitHub. It's very short but enjoy.

 


 

Recorded at:

Jul 05, 2019
