
Serverless & GraphQL
by Jared Short on Feb 22, 2018

Jared Short dives into why, how, and when to pair Serverless & GraphQL, with takeaways for implementing the first greenfield Serverless GraphQL API or migrating existing APIs.

Jared Short is Director of Innovation at Trek10. His current focus is serverless and event-driven applications. With multiple production-scale loads in the serverless paradigm, he works daily to establish best practices and push the technology to its limits.

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.



Thanks, Sid, and thank you everyone for coming out. Hopefully you are having a good QCon, really learning a lot and enjoying yourselves.

What I want to do today is something kind of fun: we will talk about two newer technologies in this space, serverless and GraphQL. We will talk about them separately, one at a time, try to understand each, and then talk about putting them together. As we all know, nothing ever goes wrong when you take two interesting technologies and put them together: microtransactions and AAA console games, for instance, nothing ever goes wrong there.

So who am I? I'm Jared Short, the director of innovation at Trek10, an AWS advanced-tier consulting partner. We were a start-up a few years ago when AWS Lambda came out, and I sat down with a small group of people, about six at that point in time, and said: guys, I think we are going to see more in this space. It is a compelling way of building architectures and software where I don't have to worry about the underlying infrastructure.

My job at Trek10 as director of innovation means I get to play with all of the cutting-edge stuff and, more importantly, figure out how to take all of this new shininess, operationalize it, and bring it to our clients so they can build value with those new tools, utilities, approaches, infrastructure, etc. I have been working with serverless in production, at production loads, across start-ups and enterprises for about two years at this point. Hundreds of millions of executions, and we have seen some really interesting and compelling outcomes out of that.


So first, I would like to talk about GraphQL. GraphQL is a technology that comes out of Facebook; they had been using it internally since 2012, and they released it publicly, both a spec and reference implementations, in 2015. I don't know how many of you know about some of the Facebook technologies and the patent and licensing issues that were around them. They had an interesting clause: if you sue Facebook, you lose the right to use any of their stuff in any of your products, and some people were really scared about it. But recently, Facebook MIT-licensed GraphQL, React, and other technologies that are core to the infrastructure they are building, and they are really listening to their community. So GraphQL is a way of modeling your data and your business as a graph, and then giving you a standardized query and mutation language and system so you can go and work with that graph.

What it really gives you is essentially one end point for your entire API. Unlike a REST-based system, it is not path-based; you have a single end point where you can issue queries and mutations in a standardized pattern, and it gives you back all of the data that you would expect. So this gives you an easy-to-understand API layer, a single end point, and all of the aggregated resources can exist behind it. Your developers get exactly the data that they are asking for, in the shape that they are asking for it.

We are going to look at that a little bit later and understand what that really means. You get some streamlined API extensibility: there are really easy ways to keep improving on your API without impacting existing clients or developers. It empowers your clients to build more efficient requests across the network, and you can abstract away a lot of the complexities that you normally run into in dealing with APIs. More importantly, the GraphQL community and tooling is insane. There is SO much power behind this that it is hard to ignore.

Technical Details of GraphQL

And so, some technical details of GraphQL. Obviously, there's a schema and a spec, and importantly, it is a typed API, meaning that everything has to be well-defined. Similar to a typed programming language, you have to define all of these things ahead of time. That introduces some interesting capabilities further down the stack.

So queries: when I say query, I mean a read, something that is reading data from a system. When I say mutation, that is something that is writing to the system and causing a data change of some kind. GraphQL is a specification for issuing queries in this language, this typed API, and it is also an execution engine that knows how to understand them; there is a reference implementation for Node.js and there are implementations in many other languages. It is this execution engine that can take these queries and mutations, figure out how to resolve them for you, and provide a pattern for resolving across your data sources and things like that.

One really interesting thing is that it elegantly solves the N+1 problem. Typically, if you were to list out, say, all of the speakers at QCon, you would also want to know about their talks, right? In a REST implementation, I say give me all of the speakers, and I get back all of the speakers, each with a list of talk IDs; then I have to iterate through each of the speakers and say, give me information about this talk, and this talk. At QCon, there are 179 speakers, so I have to do 179 more requests for talks. With GraphQL, you can elegantly solve that problem. We're going to look at that, actually.

Understanding the Schema

So I do have some examples of this schema here, so you can kind of understand it. You have to explicitly define all of the types that are going to exist in your API. We have a type of speaker; a speaker has a first name and last name and a list of talks, and you can define lists of types, denoted by the brackets there. You can have nullables and non-nullables, meaning the API either guarantees it will return some data back or doesn't. Scalars, inputs, there are all sorts of things you can define in the schema that make it easier to understand and use later.
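As a sketch, in the GraphQL schema language, that might look like the following (the field names are illustrative, not the talk's actual slide):

```graphql
type Speaker {
  id: ID!            # "!" marks non-nullable: the API guarantees this comes back
  firstName: String!
  lastName: String   # nullable: may be absent
  talks: [Talk]      # brackets denote a list of a type
}

type Talk {
  id: ID!
  title: String!
  speakers: [Speaker]
}
```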

Understanding Root Resolvers

And so, if we want to understand this concept of root resolvers: a root resolver is essentially the entry point at the very beginning of your graph that defines how you read or change data in your API. The standard approach is that you have a query root, and in this case our query root has two possible things you can ask it for: a list of all speakers, or a list of all talks. You can pass in arguments, which you define in the schema. For speakers, we might define a limit, like 10 or something, as an argument saying how many speakers you actually want when you do these queries. Or you can do sorts: I will say that I want the talks in ascending or descending order based on some attribute or field. These are then returned back as lists of the speaker or talk type.
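A hedged sketch of such a query root, with assumed argument names:

```graphql
type Query {
  # "limit" defaults to 10 if the client does not pass it
  speakers(limit: Int = 10): [Speaker]
  # sort the talks by some field, ascending or descending
  talks(sortBy: String, descending: Boolean = false): [Talk]
}
```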

Mutations are similar. In the instance here, you have a mutation root, and you can take an input type, which can define things like the title of the talk, or a list of speaker IDs that the back end can eventually resolve to the speaker type. And then we pass in that input type. So everything is well-defined, which means we can do really interesting things and provide really interesting tooling for the developers and clients using these GraphQL systems.
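A sketch of that mutation root and input type (names assumed for illustration):

```graphql
input TalkInput {
  title: String!
  speakerIds: [ID!]!   # the back end later resolves these to Speaker types
}

type Mutation {
  createTalk(input: TalkInput!): Talk
}
```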

This is actually an example, like a reference implementation; we have systems in production that look pretty much exactly like this. The top is the query root in the GraphQL schema language, and the bottom is a ripped-out chunk of JavaScript that you can look at. Essentially, this demonstrates how light I like to keep the GraphQL engines themselves in the actual implementation.

So, for instance, we ask at the query root for a list of speakers. We have a root, which is essentially the node that potentially called this resolver; we have arguments; and we have context. Most of the reference implementations of GraphQL will have a context like this: it can pass through authorization, or the user in the system, and you can propagate those down through the graph tree. So this is a really simple implementation: we have a speaker service, some kind of library in the code that knows how to go fetch these speakers, or these talks, from our data sources.
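The slide itself is not reproduced in the transcript, so here is a minimal sketch of what such a thin query-root resolver can look like in JavaScript; `speakerService` and its data are hypothetical stand-ins for a real data-source client:

```javascript
// Hypothetical stand-in for a real data-source client (database, REST call, etc.).
const speakerService = {
  getSpeakers: async ({ limit }) =>
    [
      { id: '1', firstName: 'Jared', talkIds: ['t1'] },
      { id: '2', firstName: 'Sid', talkIds: ['t2'] },
    ].slice(0, limit),
};

// The resolver stays thin: it only maps (root, args, context) onto the service.
const queryResolvers = {
  speakers: (root, args, context) =>
    speakerService.getSpeakers({ limit: args.limit || 10 }),
};
```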

Understanding Field Resolvers

Now, what is more interesting: back here, we were at the query root. If you want to navigate further down, we have the speaker type, right? A speaker has a list of talks, so how do we know how to resolve the talks for a particular speaker? You can do a very similar thing as we did at the query root, but importantly, the root at this point essentially becomes the actual speaker node that came back from our data sets, or our data resolvers, and it has a list of talk IDs. So the speaker type says: for talks, this is how you resolve the talks field, and we go fetch those. You model the whole GraphQL engine in this pattern.
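A sketch of that field resolver, where `root` is the already-fetched speaker node and `talkService` is another hypothetical stand-in:

```javascript
// Hypothetical stand-in that resolves talk IDs to talk records.
const talkService = {
  getTalksByIds: async (ids) => ids.map((id) => ({ id, title: `Talk ${id}` })),
};

const speakerResolvers = {
  // root is the speaker node returned by the parent (query-root) resolver.
  talks: (root, args, context) => talkService.getTalksByIds(root.talkIds),
};
```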

A Thin Wrapper on Business Logic

But the really important part is that the code I showed is very similar to what we actually do for our GraphQL wrappers in our production systems. The important part is that GraphQL is not your business logic layer. We are going to look at some interesting approaches later on, where you can route to various data sources and things like that, but your business logic should not live in the GraphQL engine or layer; it should be as thin a wrapper as possible on top of the other services, the fetchers of your data, and your other business logic.

Developer Client Experience

Another really exciting part that you get just by using GraphQL and that schema definition is that the developer and client experience is really quite elegant. Your developers, for instance, if they are asking for the speakers, can simply say: I just want a list of speakers, and only the first name; I don't want anything else about them. And that's the only thing they get back. It is an elegant way of providing an efficient interface for making those network requests and for traversing that tree down deeper. And there are limited API versioning concerns: as a provider of a GraphQL end point, I can add additional types and nodes, and additional fields for people to query against, as my product and demand grow and people have different expectations of the system. I can keep adding on and extending without impacting existing clients in any way; their requests will not change. You do have to be careful: you cannot remove a field, and you cannot change how a field works. But as long as you are careful about that, you can easily add things without impacting anyone else or sending them a bunch of data they are not expecting.
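For example, a client that only wants first names sends something like:

```graphql
query {
  speakers {
    firstName   # only this field is resolved and returned
  }
}
```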

And API introspection: this is a completely typed API, so it is really easy to introspect on the API and learn about it just by playing with it, issuing introspection queries against the engine itself. There are tools out there for this; one is called GraphiQL, and it works against any GraphQL API that exposes the introspection capabilities. You get a documentation explorer built in that you can navigate and search, because the tool understands the queries that are possible, the mutations that are possible, and all of the types; you get autocomplete on the fields for free. I can point this at practically any GraphQL API and this stuff will just work. It is actually really incredible and compelling to have people be able to point this kind of utility at your API and learn it by playing with it.


And so, that's GraphQL. Now I want to talk about serverless. First of all, I want to apologize for the name serverless; I'm not personally responsible for it. There have been a lot of complaints. If you ever look on Hacker News, it is hilarious: half of it is complaining, like, serverless still has servers? It is a massively overloaded term that is getting pretty useless; it has come to mean anything that uses functions as a unit of deployment. I really enjoyed this, it happened two days ago: somebody released a new serverless platform, and the first thing they tell you is, go spin up a server and put the thing on it. You missed the whole point, guys.

More importantly, what serverless means for myself, for our company, and for our clients is that, obviously, you are still running on servers, everything is running on servers, but you just don't care. You don't have to worry about operating system patching; you don't have to worry about server maintenance, or whether the auto scaling group is going to self-heal if a server dies. I have just given my code to somebody and told them what events or requests it accepts, and they will handle the mapping and scaling and making sure my code responds to those requests.

So my responsibility, as a developer, or even as an infrastructure ops person moves higher up the stack. Can I spend more time looking at our code and building better tracing in our code, can I figure out what our data sources are doing better, I can do that instead of worrying about, you know, is it up to date, for instance.

The second core tenet of serverless is billing in tiny increments: you never pay for idle. For the game companies out there, this is micropayments and transactions done right. I want to pay at the hundred-millisecond scale, for instance, not the minute or the hour scale. And this pairs really well with event-driven models as well. We will not talk a lot about that today, but if you ever get a chance, really try to dig in and understand how to stream events into something like serverless, and why it is so powerful. It is this capability of having massively scalable systems without having to worry about much of the scaling you normally would; many of the complexities of scaling and availability are absolutely gone.

Another kind of hidden benefit that you start to realize as you build more with serverless infrastructure and really start diving in is that you get these massive total cost of ownership drops when you are not worrying about the underlying servers or infrastructure. The systems scale simply by, as I like to say, turning up a dial: I put in more dollars and get out more scale without having to do anything else. It works very elegantly. To give you an idea of scale: we have a client that does millions of requests a month, and their compute bill is, like, $50 a month, while we are storing hundreds of millions of images. So the storage cost is thousands a month, and the compute cost is $50 a month. People do not understand it. If we were trying to run servers to do that, it would be WAY more expensive.

So with all of these TCO gains and economics, you can really focus on the higher-value targets in your organization. Your ops teams can ask: how can we take what this service is doing and operationalize it in such a way that we are providing, for instance, a better event stream or data stream to other parts of our business, to other clients in our business?

You can focus on the higher-value targets. And the interesting thing is, you can fail fast and learn really fast, for almost free. A practice I like to do with some of our clients is to say: hey, what is something really challenging that someone has asked you to solve in the next week? Can you take the next four hours, build an MVP of it, and have it launched and working? Sometimes it works, sometimes it doesn't. But the key is, we found out inside of one business day whether we have a viable approach or it is totally going to fail. We fail fast for practically free, and we learn really fast.

Developer Hints

Um, let's see here. And so, some developer hints as you start working on these systems and looking at serverless: your functions-as-a-service provider, whether it is AWS Lambda, Azure Functions, or Google Cloud Functions, is really irrelevant. Something I have seen recently is people complaining: do not use Azure, or Lambda, it is vendor lock-in, you will be stuck in their model. That is not true; you can abstract away whatever they are doing in terms of how they get you the events fairly easily. There are utilities and people out there that provide the wrappers, and you can scale across these different clouds.

Now, what is true is that AWS, Microsoft, and Google are providing other really compelling services that tie really closely to their functions as a service. And once you start leveraging those services, you have vendor lock-in; that's the plan, that is how they are going to get you. That said, for us and our clients, we are totally for it, because Amazon obviously builds really fantastic utilities, and Azure is coming out with cool stuff, Durable Functions and things like that. You can do cool things, and you have to evaluate and ask: is this platform going to provide more value than I can build myself? If so, tie yourself to it. Functions as a service itself, Lambda and the like, is not really a vendor lock-in risk.

Why Serverless GraphQL

All right, so we have talked about serverless, and we have talked about GraphQL. What happens when we put these two things together? What does it actually look like? Right off the bat, we can obviously assume that we're going to get ALL the normal serverless wins: scalability, availability, not having to worry about the operating system or patching, all of those great benefits.

And you also get all of the benefits of GraphQL: clients and GraphiQL and things like that, all of that typed-API goodness, you get all of that as well. What is really cool is that this is very much a 1 plus 1 equals 3 scenario; we get higher value than the two components alone. I can mitigate things like resource exhaustion attacks, and I can do complex resolvers and interesting things, based on serverless giving me isolated resources and execution environments.

Resource Exhaustion Attacks

Resource exhaustion attacks are something fun that I didn't really think about, or understand, until one day I was like: hey, why is our system not working? Why are these requests hammering our Dynamo table so hard? I thought this was supposed to be serverless, and Dynamo is this magical thing; why is our system coming to a crawl? Well, you can think of a resource exhaustion attack as a denial of service attack that is, A, really easy to do against yourself in some circumstances, which is not good, and B, really, really easy for an attacker to hurt you badly with, with minimal input from their side. The idea is a deeply nested request; you can use recursion, or pagination abuse. We will look at that in a minute, and we will talk about useful optimization and prevention techniques.

GitHub, I don't know if you have seen their GraphQL API, approached this by enforcing pagination and limits on any node request. For instance, on most types you have to say, I want up to 100 of this particular node, and they will not give you any more than that; beyond that you have to go through pagination. You can also do maximum depth checking: if you think about traversing the graph, you can say, for instance, that any query nested deeper than five levels gets thrown out and never run. And we are going to look at another huge performance win, which is request batching and caching.
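A sketch of that depth check, assuming the query has already been parsed into a nested object of selected fields (a stand-in for the AST a real GraphQL parser would give you):

```javascript
// Compute how deeply a selection set nests. A query is represented here as a
// plain object: each key is a selected field, each value is its sub-selection
// (or null for a leaf field).
function selectionDepth(selection) {
  if (selection === null || typeof selection !== 'object') return 0;
  const childDepths = Object.values(selection).map(selectionDepth);
  return 1 + (childDepths.length ? Math.max(...childDepths) : 0);
}

// Reject over-deep queries before executing any resolver.
function enforceMaxDepth(selection, maxDepth = 5) {
  if (selectionDepth(selection) > maxDepth) {
    throw new Error('Query too deep: rejected before execution');
  }
}
```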

So resource exhaustion attacks: this is what one looks like. There's a Star Wars API out there, I don't know if any of you are aware of it, and there's a GraphQL wrapper for it. What we are essentially modeling here is the concept of films and characters in Star Wars. Say we want to list out all of the films, and then we want all of the characters in each film. That's a pretty simple GraphQL request. This is actually using the Relay specification, where you have to have edges and nodes. You can simplify some of this stuff, but following the Relay spec is not a terrible idea; it helps make your requests easier.

You can kind of see, looking a couple levels in, that I have all of the films and all of the characters. And say somebody at your job says: you know what I want? I want the films of these characters, and I don't care about the maps and reduces, so just map the films to every character. I can do that; it is not great, but it will resolve, three levels deep. And now some enterprising individual, or just mean person, decides: hey, you know what I actually want? I want all of the characters of every film of every character in each of the films of each of the characters. And we're out of memory. Memory can actually get blown up by these requests. Especially in a server model, someone making a request like this can eat an entire CPU core and a ton of memory, especially if you have not put a lot of thought into protecting against this attack.
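In rough Relay-style shape (field names modeled on the public Star Wars GraphQL wrapper; treat them as illustrative), the abusive query looks something like:

```graphql
query {
  allFilms { edges { node {
    characterConnection { edges { node {
      filmConnection { edges { node {
        characterConnection { edges { node { name } } }
      } } }
    } } }
  } } }
}
```

Each added level multiplies the number of nodes the engine has to resolve, so a few more lines of query text can mean orders of magnitude more work on the server.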

REA Mitigation (The Easy Way)

This is an interesting attack to model. When we apply the serverless model, I tend to be lazy and think: what is the 80 percent solution we can get away with? I don't know if this product is going to make it, so I don't want to implement these extra protections, enforce limits and pagination, or artificially limit my clients in any way before I need to.

So when I apply the serverless model to GraphQL, there are interesting things that I can do. So if you look at serverless, the idea is that each request is isolated in its own running container, lambda function, or what have you. And since each GraphQL execution goes to its own lambda function, its own resources, it has its own GB and a half of memory, or CPU, I don't have to worry about somebody attacking my end point and completely destroying my one server that blocks everybody else; they are destroying one Lambda function, which is theirs and assigned to their request, I don't have to worry about it.

Also, interestingly, most functions-as-a-service providers give you this idea of timeouts. So instead of having to look at the maximum depth of a graph (there might be use cases where I legitimately have a really deep graph), I can say: any request that should normally succeed does so in, let's say, five or ten seconds; all of my valid requests at the high end take 10 seconds. So for all requests coming into our system, if one runs for 15 or 20 seconds, whatever it is doing is probably bad, and we will throw it out. You want monitoring and alerting around that kind of thing if it becomes common, but you get an 80 percent solution for basically one little configuration change in your system.

And obviously, in the serverless model, which people also like to call service-full, you should leverage other products and services that are out there, like web application firewalls. If you can have somebody else handle a normal DoS attack, lots of requests coming in and hitting you from a single IP address, throttle those. Leverage the normal mitigations for that attack vector.

Request Batching & Caching

Batching and caching take some reasoning about. There's a query here that says: give me all of the speakers and their first names, then all of their talks and the title of each talk, and then all of the speakers of each of those talks. This is not an optimal query from the GraphQL standpoint, but it helps illustrate a couple of things when you think about caching and batching.

So imagine a strictly naïve implementation where, every time one of these resolvers runs (thinking back to the beginning, to resolvers in GraphQL resolving IDs against data stores), we issue a request: give me a list of all the speakers, then give me the information for each talk of each of those speakers, then give me each of the speakers for each talk I'm resolving. You end up with 359 network requests if you resolve those one at a time. So from the front end, the N+1 problem is gone, but on the back end, you are destroying your downstream systems.

If we implement a cache-only approach, which essentially says: I ask for all of the speakers, then I still have to resolve all of the talks, since I have not resolved those yet, and then I ask for all of the speakers again; if I stored the speakers from the first time I requested them and the system is smart enough, I only have 132 requests or something like that, because I asked for all of the speakers and all of the talks. Then when I ask for each of the speakers again, I have them cached, and I don't have to do more network requests.

Batching is where you say: instead of immediately firing every single request whenever the system decides it needs talk data or speaker data, you tap into the Node event loop (in other languages there are other constructs), batch them up, and ask for them all at once at each level rather than individually. That gets us to three requests: we ask for all of the speakers, batch up and ask for all of the talks, and then ask for all of the speakers again. Notice we are not caching; this is batch-only: we batch up all of the speaker requests we need, and we have three requests. When you implement caching plus batching, that's the optimal point: we ask for all speakers in the system, we batch up and ask for all of the talks, and the speakers we ask for again in a batch already exist in cache. We are down to two requests.
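A simplified, DataLoader-style sketch of that batch-plus-cache idea (this is an illustration, not the real library; it assumes a single Node.js process, where keys requested in the same tick can be coalesced):

```javascript
// Minimal batching + caching loader. batchFn takes an array of keys and must
// return one value per key, in the same order.
class TinyLoader {
  constructor(batchFn) {
    this.batchFn = batchFn;
    this.cache = new Map(); // key -> Promise of value (the caching win)
    this.queue = [];        // keys pending for the next batch (the batching win)
  }

  load(key) {
    if (this.cache.has(key)) return this.cache.get(key); // cache hit: no request
    const promise = new Promise((resolve, reject) => {
      this.queue.push({ key, resolve, reject });
      // Flush once, after every resolver in this tick has enqueued its keys.
      if (this.queue.length === 1) process.nextTick(() => this.flush());
    });
    this.cache.set(key, promise);
    return promise;
  }

  async flush() {
    const batch = this.queue.splice(0);
    try {
      const values = await this.batchFn(batch.map((item) => item.key));
      batch.forEach((item, i) => item.resolve(values[i]));
    } catch (err) {
      batch.forEach((item) => item.reject(err));
    }
  }
}
```

Three `load` calls for two distinct keys in the same tick turn into a single call to `batchFn`, and the repeated key is served from cache.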

The important part is that this is really hard to do by hand. If you ask your developers to do it, they are going to make a mistake and make it worse. Facebook realized this as they were building out GraphQL, and they released a project, DataLoader, a reference implementation for Node.js of this caching and batching technique. It still looks like a normal implementation where you ask your data source for things one-off, like, I want this ID, I want this ID, but on the back end it batches those up into a single request, with caching and things like that.

Complex Resolvers & Multi-Data Sources

All right. Once we start looking deeper into serverless GraphQL, we come to the idea: what if our resolvers are doing different things? You can unify different data sources, for instance. I have one GraphQL end point, and I can say: hey, you know what, I want to hit a legacy API system, a relational database, and a NoSQL system. You can do interesting things in your resolvers: the talk service and the speaker service can exist in two completely different places; one can be our user management system on a relational database, and our talks might actually be in a NoSQL database.

Your GraphQL end point itself, running in its serverless function, could also talk to other smaller microservices running in their own lambda functions and have those actually fetch your data. Where this gets really interesting is for systems with more compute-heavy operations: you can spin those off into their own lambda functions and not worry about blowing out the resources of your single function execution, while letting them operate quickly and efficiently with more resources than just the one.

This is an interesting approach, and it becomes more interesting with something else we will talk about in a little bit. It is certainly a valid approach. But one thing I have to bring up: there are still problems in the serverless world that we're all trying to solve, and one is traceability and debuggability. I don't know if any of you have worked in the serverless space, but it is actually ridiculously hard to get even simple information out of these systems, let alone do something like this, where one request might fan out across three, five, ten, or a dozen more executions, and then try to figure out where a given request actually went. This is actively being worked on by all of the big providers; it is a hard problem. Just know that with an approach like this, you are going to sacrifice some of what you are used to in a server environment in terms of debuggability and traceability. It takes a lot of work and instrumentation; it is essentially distributed tracing on a whole other scale.

Schema Stitching

Okay. So I kind of alluded to another approach with complex resolvers and multiple data sources. There is this new opportunity in GraphQL, a growing movement called schema stitching. The folks at Apollo, who are doing a lot for the GraphQL community, essentially developed this idea: I have all of these potential GraphQL APIs and end points; why can't I put them behind one particular end point?

In this case, for instance, we have two GraphQL APIs that are completely distinct from one another. There is a Universe API, where you can ask for details about a particular event going on at a particular location, and there is a weather API, where I can ask what the weather looks like at a particular location, or what the forecast is for a certain point in time. So there's a connection between these two: I would like to have the weather at the location of a particular event. We can schema-stitch these two APIs together, and we might get something that looks like this. In this naive version, I ask for an event by some ID that I somehow found (whether from some other listing or whatever is irrelevant), and then, because I happen to know that the location for this event is San Francisco, I also ask for the weather for that particular location. Right?
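The naive stitched query might look something like this, with the client hard-coding its knowledge of the event's location (the field names here are hypothetical):

```graphql
query {
  event(id: "abc123") {
    name
    location
  }
  # The client must already know the location to ask the weather schema.
  weather(location: "San Francisco") {
    forecast
  }
}
```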

So this is okay, great: we get one fewer call, and we kind of have a unified API. But it is not optimal; you still need knowledge ahead of time on the client side, and we can obviously do better. What people working on schema stitching are building is the idea that I can create these systems independently of one another, and then create and name my resolvers in such a way that I can resolve from a type in one GraphQL schema to a type in another GraphQL endpoint. When I link the two together, and the engine understands how to resolve across them using what are called remote executable schemas, I can do something much more compelling.
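One way to sketch such a link in GraphQL SDL (all names hypothetical): extend a type from one schema with a field whose resolver delegates to the other schema:

```graphql
# Link type definitions layered on top of the two remote executable schemas.
extend type Event {
  # Resolved by delegating weather(location: event.location)
  # to the weather API's schema.
  weather: Weather
}
```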

Schema Stitching with Links

So I say I want the information for an event; the particular ID is irrelevant. And I want the weather that is going to be happening at that event. Before, you had to have this preconceived information, knowing where the event was going to happen, before you could even make the request, which was hard in the first place. Now, the developer using your stitched schema does not even have to know there are two schemas, or two endpoints, behind it. So this gets really interesting.
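With the link in place, the client can traverse from the event straight into its weather in a single query, never knowing two endpoints are involved (again, illustrative names):

```graphql
query {
  event(id: "evt-123") {
    name
    weather {   # stitched in from the separate weather API
      summary
    }
  }
}
```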

Schema Stitching: So What?

What I'm really excited about is that I can now take the typical practices and learnings from microservices and apply them to GraphQL. People can experiment really quickly, build new services really quickly in GraphQL, and provide these small chunks of functionality. And in building these schemas, I create essentially standardized contracts for integrating these various services together through GraphQL.

What we are starting to work with folks on is how to enable business units to produce things in GraphQL in a way that is valuable to them internally, valuable for consuming their own services, and presented in such a way that other departments and business units can look at them and say: hey, you know what, your weather service is great, it would match really well with my boat rental service. These distinct operations can start stitching things together, forming new products and extending existing ones in ways that were traditionally not very easy, and then presenting them to end clients in these seamless patterns.

A Word of Warning

With all that said, it is really exciting, but it is still a super new space and we are still figuring things out. So there are some words of warning I would like to give you before you go running back. My first word of advice, and I found this out the hard way: don't be this guy. I have gone to companies and said, you know what, with GraphQL literally everything in your life will be amazing, and they reply: what do you mean, REST APIs? We are still using SOAP. If you try to push through that kind of situation, it ends up being a lot more difficult than you would expect. So, obviously, don't force it.

When Not to Use (Force) It

A lot of this is common knowledge. If you have legacy systems that are rarely used and cause you no problems, don't try to rewrite them in GraphQL. It is also not great for big data exports; as with a REST API, you don't want to send megabytes or gigabytes of data in a single response. Asynchronous returns are fine: I ask for a report, I get back a URL I can poll to fetch my data, that's fine. More importantly, as with any new technology, internal buy-in is key. If you have inertia against you, if people are building on REST and you have a solid microservice infrastructure on REST, sidecars, or things like that, you will have momentum working against you. So don't force it: start small with your internal teams, get small wins internally, and evangelize based on those successes.

Now, when it comes to those rarely-used, non-problematic legacy systems, or services owned by other folks that you depend on, you can do some fun stuff: you can essentially wrap their REST or SOAP API, or your own system, with a GraphQL layer and consume it internally through your own GraphQL wrapper.

And in that way, you can be a bit sneaky about it; they don't have to know you are doing it. The developers on your team get to consume the service in a much more usable pattern, and you can start sharing that with other units: hey, I made this service not terrible to work with.
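As a rough sketch of that wrapping pattern, here is a Python resolver that hides a clunky legacy payload behind a clean shape. The legacy client and field names are entirely hypothetical; in a real system the resolver would live inside a GraphQL server and make an actual HTTP call:

```python
import json

def legacy_rest_get(path: str) -> str:
    """Stand-in for an HTTP call to the legacy REST/SOAP service."""
    # Simulate an awkward legacy payload.
    return json.dumps({"WX_LOC": "San Francisco", "WX_TEMP_F": "61"})

def resolve_weather(location: str) -> dict:
    """GraphQL-style resolver: translate the legacy payload into a clean type."""
    raw = json.loads(legacy_rest_get(f"/legacy/weather?loc={location}"))
    return {"location": raw["WX_LOC"], "temperatureF": int(raw["WX_TEMP_F"])}

print(resolve_weather("San Francisco"))
```

The point is that consumers only ever see the tidy `location`/`temperatureF` shape; the legacy naming stays buried in one place.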

Protect Downstream Systems

If you do something like that, or you have other downstream dependencies, you have to be a good citizen of your business. Whether it is serverless, GraphQL, or serverless GraphQL, you are only as strong as your weakest resource. If you have a typical relational database behind your serverless GraphQL engine, your serverless layer can massively outscale it. We have had situations where we turned on a serverless service and started getting 10,000 requests per second, and we did not flinch, because the serverless side was fine. But with a relational database, or anything else that does not scale as well, behind it, we would have been in trouble, because we are only as strong as the weakest resource.

So play nice. You might have to put some thought into things like queues and caching, saying: I need to be nice to the folks downstream, and I don't want to overrun their systems. People will not appreciate a massive system that can take theirs down.
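As a toy illustration of the caching idea, this Python sketch puts a tiny TTL cache in front of an expensive backend call, so a burst of identical requests produces a single downstream query. A real deployment would use a shared cache (something like ElastiCache or DAX) rather than in-process state:

```python
import time

class TTLCache:
    """Minimal in-process cache with time-based expiry."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.store = {}

    def get_or_fetch(self, key, fetch):
        entry = self.store.get(key)
        if entry is not None and time.time() - entry[1] < self.ttl:
            return entry[0]                 # cache hit: downstream is spared
        value = fetch()                     # cache miss: one downstream call
        self.store[key] = (value, time.time())
        return value

calls = 0
def expensive_db_query():
    global calls
    calls += 1
    return "result"

cache = TTLCache(ttl_seconds=60)
for _ in range(10_000):                     # simulated request burst
    cache.get_or_fetch("popular-key", expensive_db_query)
print(calls)  # the database only saw one query
```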

Back to Basics

Some things I did not talk about much are the basics, and while I say basic, this stuff becomes interesting to solve. How do I pass authorization down through my various GraphQL endpoints? There is the context in GraphQL that I talked about, but the implementations differ between languages and reference implementations, and even between organizations: whether you use something like JWT or other token approaches really depends on how your organization has decided to handle it. Pagination is not a completely solved problem either; there is the Relay spec, Facebook's spec for how they handle pagination through GraphQL schemas and gateways. And then there is good documentation.

Documentation becomes really important once you start playing the game of having lots of microservice GraphQL endpoints. Yes, GraphQL can spit out nice, autocompleted, searchable documentation, but it turns out people want better examples: okay, what is the actual right way to make the request for these speakers and talks? You have to give people ways to browse best practices and example implementations.

Getting Started

If you want to get started on this stuff, there are a few resources and utilities out there. Dan Schafer has worked very hard on GraphQL; he has talked about pagination and auth and some interesting approaches. You can look those talks up on YouTube.

There's the Apollo stack; they're a great group doing interesting stuff with GraphQL: providing metrics with their engine, building client-side caching and schema stitching, and publishing a GraphQL schema cheat sheet if you want to understand the API quickly.

There's the Serverless Framework GraphQL boilerplate, which shows how to build this on AWS with Lambda, DynamoDB, and the like.

And there's Graphcool, a hybrid approach between functions-as-a-service and GraphQL. If you are interested, take a look; the idea is that resolvers and custom resolvers in GraphQL can directly map to a function execution. They take the complexity out of those multi-targeted resolvers and say: this resolver will simply have a function behind it. And they provide a DSL for defining what a serverless GraphQL implementation could look like.

And so, with that said, thank you. I hope you learned something; it was fun for me. Go play with serverless and GraphQL.

Live captioning by Lindsay @stoker_lindsay at White Coat Captioning @whitecoatcapx
