Transcript
Rendle: I'm Mark Rendle. I'm here at the start of the API track to talk a little bit about the history and how we got to where we are today, and where we might be going next, and show you a little bit of code along the way. Generally, get everyone set up for the day. Who's planning on spending most of the day in this track? There should be some interesting stuff. I'm expecting lots of different people to come along and say this is the right way to build an API. Then at the end of the day, you'll have eight right ways to build an API, and you can just go away and choose which one you liked the best.
Brief history: I will try and explain how we got to where we are, starting from the 1970s, which is when programming really got started. People were doing stuff with COBOL and Fortran. We were starting to get into high-level languages like C and everything else.
Application Programming Interfaces (APIs)
We had to create application programming interfaces. We had to create ways that if we made a piece of software, whether that was order management, stock control, or, usually in those days, finance (banks were early adopters), other software could talk to it. Those organizations had lots of different applications dealing with different things, and those applications needed to be able to talk to each other, so they had to create APIs. As programmers, we've been using APIs ever since; pretty much everything we use could be considered an API. Even the programming languages that we write code in, whether that's C, or C#, or Java, or Python: there's a piece of software underneath, which might be a compiler or a dynamic language runtime, and we are using the API for that compiler or dynamic language runtime to create our software. Then everything we interact with, whether that's a database, or a messaging system, or a queue, has an API that we talk to as well. Then of course, we've got the higher-level APIs now of talking to other services in service oriented architecture or microservices.
Back in the 1970s, things were largely running on mainframes. Very few people, very few organizations in those days would have had more than one mainframe. That wasn't the point of a mainframe: you bought a mainframe, and if you needed more processing power, you bought a bigger mainframe, or you made your mainframe bigger, because you could just buy another couple of fridges, tack them on to the end, and add another 2 MB of disk space.
Just a little bit about me. This is actually my 31st year as a professional software developer. I've been doing this since I was 16. I started on UNIX systems with Wyse terminals. My first day at work, they said, "Learn C." They gave me the Kernighan and Ritchie book with Hello World, and a Wyse terminal. I sat down. I said, "How do you edit files?" They said, "Use vi." Then I spent six months learning vi.
The mainframes that existed had lots of different things running on them. They had time-sharing systems. They had multiple processes running. Those processes needed a way to talk to each other. One of the first examples of an inter-process API was Message Oriented Middleware, or MOM, which was pioneered by IBM in the 1970s. Anything that was pioneered in the 1970s, you can pretty much assume it was IBM pioneering it. Message Oriented Middleware was what we would now think of as RabbitMQ, really, but with a little bit of intelligence, so that something could send a message, the middleware could transform it and then send it on somewhere else, and a response might come back. It was very asynchronous. It's the thing that we do today and think we're being very clever, using Kafka, or RabbitMQ, or AMQP. We're basically doing the same thing that people on mainframes were doing with COBOL and MOM in the 1970s.
The other API that was created then was ISAM, which is how databases used to work. The first database I worked on was Informix SQL. Any Informix people in the room? Other people were doing Oracle. Somehow I ended up at a place that was doing Informix. Informix was obviously superior. An Informix SQL database was just a bunch of C-ISAM files. We used a library called C-ISAM to talk to the ISAM files. Then Informix invented ESQL/C, where you would put dollar symbols into your C code and write SQL statements in it. Then it would go through a pre-compiler that just turned those into the C-ISAM API calls. Then it would compile and link. We had a table tennis table, because compiling and linking in those days used to take three hours. If you broke the build, you had to stay and fix it for the next morning. If nobody broke the build, then whoever lost the table tennis tournament had to stay and make sure everything was ready for the next morning.
Object Oriented Programming
Then the 1980s happened and object oriented programming came along. Object oriented programming made APIs much easier to build because you could create an object and you could send it off to somewhere else. Then you would get another object back telling you what the result was. That might all have been in a single process. Alan Kay, the creator of Smalltalk and of object oriented programming generally, wishes he'd called it message oriented programming, because he said people got completely the wrong end of the stick. They thought object oriented and they started talking about dog being a subclass of mammal, and mammal being a subclass of animal, and whatever else was going on. It was message oriented programming. Within the process, within the libraries and the frameworks that you were using, you have an API. I use .NET. When I want to talk to the file system, I use things in the System.IO namespace to talk to the file system, and that's an API.
We also got shared libraries in the 1980s. On Windows, we had DLLs. You could publish your software with a DLL and then somebody else could link to that DLL using a .h file. Then they could consume functions that you distributed, which made it easier to access your file formats on the file system. Did anyone use MAPI.dll back in the day to interact with their mail system? You have my sympathy. We also had OCX files, the early ActiveX components. On Linux we had shared objects. Microsoft of course invented Dynamic Data Exchange, which was a way of talking between two processes running on the same machine. I have very unhappy memories of working with Excel over Dynamic Data Exchange from VB6. We also got sockets. That was a 1980s thing. Network sockets but also UNIX sockets, domain sockets: nice ways for two processes to talk to each other on a system.
In the 1990s, client-server came along and everybody went rushing to it. Client-server basically meant you had a Novell NetWare box, or a UNIX box, or a Linux box, running your database, and then rich client applications on lots of Windows 3.1 or Windows 95 machines talking to the database over the network. That database protocol was an example of an API, and SQL is a good example of an API language.
Common Object Request Broker Architecture (CORBA)
We also got the Common Object Request Broker Architecture, CORBA. Are there any CORBA casualties in the room? How's the PTSD? Yes. It's still out there. People are still using it. People are still getting stuff done with it. Good luck to them. IBM basically had control of CORBA, as they did with a lot of the standards. Microsoft went, "We're not doing CORBA. We're going to do the Component Object Model. We're going to do COM and ActiveX. If we want to do messaging over the network, we'll do DCOM and COM+." It was Microsoft's answer to CORBA, which basically became widespread simply because everybody was on Windows at that point. COM: incredibly good fun writing COM code in C++. That is not something I would recommend doing.
Of course, in the '90s, in 1993, Tim Berners-Lee invented the World Wide Web. British, yes. Back when we were something. The World Wide Web, that's just a massive API. That's just a way of talking to other machines and saying, "Give me back some HTML, and some images," and everything else. This was the point at which the API really became the default. It was the default way of interacting with computers. You would create some software. That software would talk to a variety of APIs in order to do its job.
Service Oriented Architecture (SOA)
On top of that, we invented service oriented architecture. Bear in mind, we're still in the same decade where we thought client-server was a great idea. By the end of that decade, we're going, "Client-server is a terrible idea. We've got all our business logic and we're repeating it over again on all these desktop machines. We should be pulling that business logic out of there and putting it into services running over HTTP, and we should be talking to those from the clients." We started to move towards this idea of a thin smart client that could talk to these back-end services. Just to make sure that all these back-end services could talk to each other, and all the different clients could talk to those back-end services, we had to come up with a common language that they could all speak. We invented SOAP. Sorry if you're too young to remember SOAP, if you haven't been doing this long enough to remember SOAP.
In the year 2000, Roy Fielding at the University of California, Irvine, wrote his doctoral dissertation, "Architectural Styles and the Design of Network-based Software Architectures," which you may know better as REST, or Representational State Transfer. It's one of those occasions when somebody's doctoral thesis has become a rallying point for zealots to start bashing everybody else over the head and generally being unpleasant. The other one obviously being Karl Marx's Das Kapital and the Soviet Union. Yes, I did just compare REST zealots to Stalin, because I can.
Also, in the 2000s, we got XMLHttpRequest in browsers. XMLHttpRequest was actually invented by Microsoft. It was built into the Internet Explorer browser. They never meant for it to be this whole AJAX thing. They never meant for Web 2.0 and everything else to happen. They literally invented it to make Outlook Web Access work, so that every time it checked for mail, it didn't have to refresh the entire screen and redownload all those ActiveX controls.
JSON
JSON was created in 2001. Douglas Crockford created JSON as a subset of JavaScript. The idea was that if you serialized an object to a JSON string and then passed it across the network, you could literally just exec it, because you had a dynamic language on the client side. It would become an object, and then you would be able to work with that object in your JavaScript code. Which is a terrible idea, because all anyone has to do is put something nasty into the JSON and the JavaScript client will exec it. We ended up having to get JSON.parse anyway, which completely negates the point of JSON being a subset of JavaScript. Never mind, it still is.
In 2006, service oriented architecture was just about to run out of steam, really, and so Microsoft invented Windows Communication Foundation as a way of doing service oriented architectures. I'm sure most of the people who raised their hands for doing .NET will have done WCF in the past. We'll be looking at that in a bit more detail later on. REST came out of Roy Fielding's very vague thesis.
History
Also, in 2007, Apple introduced the iPhone. Microsoft had been trying to make smartphones for about three or four years, and Apple just kept their powder dry, waited until capacitive touch screens came along, and then went, "Here you go. That's how you do it." Who hasn't got a smartphone in their pocket? That's a good one because you don't have to raise your hand. Mobile devices obviously made a big difference. We suddenly got to this situation where you wanted the data on your mobile device to be current. We created systems that would talk to APIs pretty much exclusively. We couldn't run databases locally on that machine. We couldn't do the thing that we'd been doing with salespeople going out with laptops with a local copy of the database, then coming back at the end of the day so we could run a SQL Server merge replication synchronization and wonder why all our tables had disappeared.
The late 2000s was also when we started to get what we actually think of when we say APIs now, which is public APIs over the internet, to services and software that's hosted on the internet. eBay introduced their API in the year 2000, actually. I'm quite surprised that the eBay API is 20 years old, but it is. They invented the eBay API so that people could write bots to snipe auctions; I think that's the only reason they did it. Amazon created their first API in 2002 so that people could search the Amazon stock database and display information from Amazon's store on their websites.
Twitter came along with their API in 2006. I think Twitter's one of the best examples of a company launching with a very good API, giving everybody access to that API, and saying, build stuff based on this. There was a plethora of amazing Twitter clients that people built. Whatever you wanted to use Twitter for, whatever Twitter meant to you, you could find a client that consumed that public Twitter API and represented the data in a form that was meaningful to you. Then three years later, they shut it down and said you couldn't do that anymore. Facebook in 2007 introduced the Facebook Platform, so that you could automate giving your personal data to Facebook, and you didn't have to sit down and do it with a mouse, and a smartphone, and a camera.
The other thing that we started to see in the 2000s was cloud computing. It was a very busy decade, and the last decade was even busier. This next decade, I'm seriously considering just staying in bed and seeing where we end up in 2030. Jeff Bezos at Amazon had internally told all the teams building all the different bits of Amazon that everything should be an API. Every team should be able to consume everything from every other team, using an API that you have published documentation, specifications, and libraries for, regardless of what platform they're using. We've got some people who are doing JavaScript, some people who are doing Python, some people who are doing C++; all these things should be able to talk to each other. He extended that even to the provisioning of infrastructure, virtual machines, and storage, and everything else.
Then at some point, they had this amazing object storage platform with lots of spare capacity on it, and thought, why don't we just sell access to that? They created S3 in 2006. If you had something that was running on the internet and you needed to be able to store data somewhere, you could just say to Amazon over their API, "Can you store this for me?" It would keep it, and you would pay based on the amount you had stored. Another trend that was introduced at around this time with APIs was the pay-as-you-go model. Rather than buying a license to a piece of software, buying SQL Server and then having to pay per core that you're running it on, and provisioning a piece of hardware for it, you had an API. You could say, "Create a bucket in this data center here. For every gigabyte of data that I store in there, I will pay 3 cents per month." The API became a unit of sales as well as a programming tool.
Microsoft introduced Azure in 2008. It was actually called Windows Azure, back in the day, before Satya took over and decided to deprecate the term Windows. That had a similar set of things. Then since then, we've seen other people come along, obviously Google, IBM, but also smaller companies like DigitalOcean and Rackspace have managed to compete with big players by providing APIs to their services as well.
Then we got into the 2010s. I can't remember exactly when it was that everyone started talking about microservices. It feels like 30 years ago, but it's not. It's much less than that. Microservices is taking that idea of service oriented architecture to its extreme. I listened to a podcast a while back with a microservices person who was suggesting that every single object in a framework had the potential to be a microservice. They said, "What, int?" He said, "Yes, why not? I'm provisioning an integer in us-east-1".
The funny thing was, when people started talking about microservices, I was doing .NET development. I was creating ASP.NET Web APIs at the time, and ASP.NET MVC websites, and WPF applications that talked to those ASP.NET Web APIs. Everything had to be hosted on IIS on a Windows Server, and provisioning Windows servers with IIS was not fun. Considering they called it the integrated pipeline, deploying an ASP.NET website onto IIS is way more complicated than it should be. The idea that we were going to create even more smaller Web APIs and deploy them all over again, two or three times a day, was absolutely insane. I couldn't see that at all.
Microservices didn't really make sense until containers came along. Docker put a layer over cgroups and everything else. Docker created an approachable API over the namespaces, and cgroups, and boundaries, and process limits, and everything else that is inside Linux. They called it containers. They gave us a command line interface to interact with it. They gave us an HTTP API to interact with it. You could connect to docker.sock and you could send HTTP requests. Or you could connect to your Docker server in the cloud, which of course you've secured with an SSL certificate. You could spin up machines and scale and do everything else. Even that was a bit too complicated for most of us.
Kubernetes
Then we got Kubernetes. Kubernetes is the ultimate infrastructure API. It's evolving and evolving. We're getting to the point now where Kubernetes actually has APIs built into it where you can say to a Kubernetes cluster, provision another Kubernetes cluster: I want to create virtual mini-clusters inside my cluster and do interesting things with that. Once we got containers and Kubernetes, it became very easy to deploy services over and over again. It really enabled that continuous integration, continuous delivery pipeline. We leveraged APIs all the way through. We had webhooks: you would commit something into GitHub, that would trigger a webhook, which would fire off your continuous integration on Jenkins, or Bamboo, or Azure DevOps, or whatever you're using. Then that would build your software and run the automated tests. It would potentially create Docker images and push those to a registry. Then pushing that to a registry would trigger another webhook, which would call another API, which would then deploy that new image into production and slowly cycle down the old one. All done with APIs, and mostly over HTTP still, at the time.
We also got WebSockets in the browser. At this point, we'd started building everything into the browser. Everything was just a browser application. A WebSocket was effectively TCP/IP in the browser: you could open a persistent connection to a server and just send data backwards and forwards over it. We could build the same experiences inside the browser that we'd been able to build with WCF, or whatever it is you Java people use, I don't know. Spring or something, isn't it? Seriously, I have no idea.
Wire Formats
One thing that all these APIs have in common is they need a way to talk to each other. The client needs a way to express a request. The server needs to be able to understand that request and act on it, and then create a response. Then the client has to be able to understand that response. There are two real things involved in that process. One is, how do they send that backwards and forwards? Is it over TCP, or HTTP, or through a shared file system, or some shared memory? The other is, what format are we going to use for the message that we're passing between the client and the server, that they can both understand? This has gone through a very interesting evolution. I researched it for this talk and it's quite fun.
One of the earliest wire format standards was CORBA's, what was called GIOP, or General Inter-ORB Protocol, because CORBA was all about the ORBs, apparently. This was a binary format. It was very efficient. It had to be, because when we were using CORBA to build our distributed systems, we were still dealing with 10 Mb Ethernet networks. In some places, we were still dealing with Token Ring networks, where if somebody removed the terminator from the end of the network, the whole thing would just stop, because the messages would go flying off the end of the network and into space. That lack of network bandwidth, and the absolutely appalling amount of error correction that was needed, because the network wires were very shoddy and people like me were soldering them (I'm not good at soldering), meant we had to keep the messages small. CORBA used a binary format. It was very complicated and ideally you used a library to create it. You can see the general outline of a CORBA message, or a GIOP message. This was sent over IIOP, which was a TCP/IP based protocol for two ORBs to talk to each other in a distributed CORBA system.
Obviously, debugging this was quite difficult. If you used Wireshark to attach to the network and look at the messages that were being sent, you'd go, "I have no idea. I don't know what that is at all." We decided that it was very important that we should be able to read the messages being sent between these machines, as well as the machines being able to read the messages. Network bandwidth went up a bit. We got 100 Mb networks. We grew complacent, and said, "We can use text to talk between systems." We invented XML.
One thing I will say about XML. This is a piece of XML that I invented, from a system that doesn't really exist. It's still version 1.0. How many things after nearly 25 years are still 1.0? They did a really good job with XML when they spec'd it out. XML was the first common human-readable format, for a given value of human. We could use XML to communicate between systems and everything was fine and great. Interestingly, for me, the XML 1.0 specification came out on the 10th of February 1998, which was my 25th birthday. When my kids ask me how old I am, I say I'm 25 years older than XML. They say, "Dad, you're such a geek." Of course, XML, it's almost perfect. It's wonderful. As long as you've got the network bandwidth, it's easy to read for a human. It's easy for a computer to parse. It was generally quite nice and easy. Of course, we had to improve on it. That was how we got SOAP.
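The XML from the slide isn't reproduced in the transcript. As a rough reconstruction of the kind of invented book record being described, it might have looked something like this (all element names and values are hypothetical):

    <?xml version="1.0" encoding="UTF-8"?>
    <book>
      <title>A Brief History of the API</title>
      <author>Mark Rendle</author>
      <description>Where APIs came from &amp; where they are going</description>
      <price currency="GBP">9.99</price>
    </book>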
This is SOAP. SOAP essentially wraps the XML with a bunch of things that say SOAP. It also adds namespaces into the mix, because the XML people were very clever. They went, "What if two people have two different XML objects, and they both have a description field? We need namespaces so that they can say this object has this description field, and this is this namespace." That was the other thing that SOAP made compulsory as part of your XML: an absolute shedload of namespaces. I think part of the reason I dislike SOAP so much is because working with namespaces using the XML libraries in .NET is not fun. It just isn't. The other thing, of course, with XML is that there were a few special characters: the greater-than and less-than symbols, the apostrophe, the quote. So we had entities like ampersand-a-p-o-s-semicolon, &apos;, and what was a single-byte character became five or six bytes over the wire.
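Again, the SOAP slide isn't in the transcript; as a sketch, the same hypothetical book record wrapped in a SOAP envelope would look something like this (the element names and the books namespace are illustrative only):

    <?xml version="1.0" encoding="UTF-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
                   xmlns:b="http://example.org/books">
      <soap:Body>
        <b:GetBookResponse>
          <b:book>
            <b:title>A Brief History of the API</b:title>
            <b:author>Mark Rendle</b:author>
            <b:description>Where APIs came from &amp; where they are going</b:description>
            <b:price currency="GBP">9.99</b:price>
          </b:book>
        </b:GetBookResponse>
      </soap:Body>
    </soap:Envelope>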
Then JSON came along. JSON, like I said earlier, was Douglas Crockford; the idea was that it is just a JavaScript object. JSON is great for moving data across, and particularly for moving data between web servers and browsers. Browsers can understand it easily. They can process it easily. We decided, never mind XML, we're going to use JSON for everything. We're going to use JSON for our configuration files. JSON doesn't support comments. Never mind, why would we have comments in our configuration files? That's just insane. Of course, the other fun thing with JSON: dates. No standard way of representing a date in a JSON object. Douglas Crockford came up with the idea in 2001. The json.org website went online in 2002. It just took 11 short years for it to become an actual ECMA standard. From 2002 to 2013, if you downloaded a JSON package for your preferred development system, it was 50/50 whether it was going to work with the JSON coming from the other system that you were talking to.
Now we have something called Protobuf. Protobuf is Google's format for messages. Google created this in 2001. Google were effectively one of the first organizations to build a massive distributed system. The Google engine itself, the search engine, was a massive distributed system. Obviously, with the MapReduce algorithm, you had a bunch of servers doing something and then aggregating those results and sending them to a smaller set of servers, and then another smaller set of servers, and then eventually down to the browser, where it became a series of advertisements.
Even though we were up to gigabit networking at this point, and even though they had tens of thousands of machines in their data centers, the amount of traffic they were passing between the services meant that they had to make it as small as possible. This is not actually Protobuf; this is how you define a message in Protobuf. This is how you would define the book message from that XML. Once you've run the Protobuf compiler, it generates the stub classes for you, and those produce the actual wire format. Protobuf looks like this. That is a hexadecimal representation of exactly the same data that was in the XML and the SOAP message that I showed you earlier. This is 101 bytes. The equivalent SOAP message was roughly 600 bytes. It's an awful lot smaller. It means you can send six times as much traffic over the same network. The other nice thing about Protobuf: I said XML is easy to parse, but try writing an XML parser sometime, it's not fun. Protobuf was designed so that encoding and decoding the message is as efficient as possible, regardless of the language that you're doing it in. Whether that's Python, or C#, or Java, or C++, whatever it is, it's designed to use a minimal amount of CPU power to actually handle the encoding.
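The Protobuf definition from the slide isn't in the transcript either. A hypothetical proto3 definition of that same book message might look like this (field names and numbers are illustrative, not the slide's actual content):

    // Hypothetical proto3 schema for the book message described above
    syntax = "proto3";

    package books;

    message Book {
      string title = 1;        // only the field numbers travel on the wire
      string author = 2;
      string description = 3;
      string currency = 4;
      double price = 5;
    }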
There are some other wire formats which are in use at the moment. There's Thrift, which was open-sourced by Facebook in 2007. Thrift is a very similar alternative to Protobuf, but is more RPC oriented.
There's Avro, which came out of the Hadoop project, and which is somewhere between Protobuf and JSON. Avro messages actually include the schema for decoding that message inside them. Whereas with both Thrift and Protobuf, you have to have generated a class that understands the message; it's very difficult to reflect over it.
We have MessagePack, which I really like. MessagePack is very fast and very small. It's basically JSON, encoded in a different way. Instead of actually having the field name, you have a byte, and both the client and the server know what that byte means. It goes, "This is the description field. This is the title field. It's a string. It's going to be this long".
I have no idea what BERT is. I found it while I was doing my research. It's got the best name though. It's actually the Binary ERlang Term. It's how Erlang distributed systems speak to each other.
Of course, BSON, which is just Binary JSON. That's the internal format used by Mongo. It is a standard. You can use it to communicate between systems if you want to.
Protocols
Then, there are two components to this. One is the wire format: how do we encode and decode the data at the client and the server end? The other is protocols. When we first started talking between systems over networks, the protocol was TCP/IP. It was sockets. You would just open a socket at your end. You would say, "Can I have a socket on that machine?" That machine would say, "Yes, I am now listening on this socket." You would send data backwards and forwards over that socket connection, and everything else was down to you. You would get a packet at a time. If the packet didn't contain your entire message, you would have to buffer that packet, and you had to deal with framing yourself.
TCP/IP actually started development in 1973, like me. ARPANET migrated to it on January 1, 1983. We've had TCP/IP since then. It still runs the entire internet today, and will do for the next couple of years. Then we're all going to go to HTTP/3 over QUIC, which runs over UDP, which is unreliable messaging. That's going to be fun. Yes, TCP/IP is raw. It's fast. It's painful. If you can handle the pain, it's the most efficient way to pass data between two systems.
Then we got HTTP 1.1. This was Tim Berners-Lee's contribution to the world. HTTP is really easy to understand. You can actually look at an HTTP request and you can tell what's going on. You can understand it. It's quite efficient. It can handle binary data. It can do all sorts of things. It takes care of that framing problem for you. The HTTP headers, ideally, if you've written a good API and you have well-behaved clients, will have a Content-Length header. You can actually go, "From this double new line here, I need to read for this many bytes." Unless somebody has told you how many characters there are in the string, rather than how many bytes the UTF-8 encoding requires.
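To illustrate that framing point, here is a hedged sketch of a minimal HTTP/1.1 exchange (the URL and payload are made up): the Content-Length of 61 is the exact byte count of the JSON body, so the client knows exactly how far to read after the blank line.

    GET /books/42 HTTP/1.1
    Host: api.example.org
    Accept: application/json

    HTTP/1.1 200 OK
    Content-Type: application/json; charset=utf-8
    Content-Length: 61

    {"title":"A Brief History of the API","author":"Mark Rendle"}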
HTTP made it very easy for us to build APIs. We needed a standard way to actually represent those APIs, which was where everybody got completely nuts about REST and hypermedia. It was just, yes, I can send data over HTTP, but I don't know if you're going to be able to understand it. What would be really nice is if I could send you an HTTP request that I know about, and you could send me back the data along with some information about some more HTTP requests that I could make to do things with that data. This was API on steroids, if you did REST properly. I saw this referred to as HATEOAS, Hypermedia as the Engine of Application State, which was Level 5 REST, or something.
GitHub's API has always been very good at this. If you go to GitHub's API and say, "Give me a repository. Give me the data about a repository," it will send you back all the information about that repository, but embedded in there are URLs that say, "Here is how to get the code. Here is how to get a list of the files that are in this repository." You could write your client to understand that those URLs are in the text, then use those to do further requests, which meant that GitHub, at their leisure, could change those URLs any time they wanted to. As long as the object had the new URLs in it, your software would still work. Don't get me started on versioning. That's a nightmare, versioning REST. There's a debate on how to version your REST APIs on Google Groups that has been raging since the year 2002.
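A hedged sketch of that hypermedia idea: a repository response that embeds the URLs for follow-up requests, so the client never constructs them itself. The field names and URLs here are invented for illustration, not GitHub's actual payload.

    {
      "name": "example-repo",
      "owner": "dave",
      "description": "An example repository",
      "clone_url": "https://api.example.org/repos/dave/example-repo/clone",
      "contents_url": "https://api.example.org/repos/dave/example-repo/contents",
      "issues_url": "https://api.example.org/repos/dave/example-repo/issues"
    }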
NETTCP: the WCF people in the room may have used NETTCP in the past. NETTCP was Microsoft's proprietary networking protocol for two .NET systems to talk to each other over a TCP/IP connection. It was insanely fast. It's a binary format, similar to Protobuf or the CORBA object format. It would maintain a persistent connection. The server could say things to the client without being asked. Very powerful, and it only worked between .NET systems. Microsoft actually published the specification for NETTCP, how the protocol worked and how the encoding format worked, and went to the Java people, which at the time was Sun Microsystems, and said, "You can make your Java systems speak NETTCP." The Java people went, "Yes. No." It was binary. It was very fast. It only worked with Windows Communication Foundation. This was Steve Ballmer-era Microsoft. Of course, it was proprietary and closed-source.
Meanwhile at Google, for their internal systems, they were using Protobuf to communicate between those systems. They created an RPC framework called Stubby, which was used to actually handle the communication between those things. It was called Stubby because you fed in a Protobuf file and it would generate stubs. Those stubs were either a client, or the base classes to implement a server, in C++, or Java, or Python, which were the three languages in use at the time. It was completely internal. They built it for themselves. It never got open-sourced. It would not have been fun. It was binary. It was fast. It was internal to Google.
gRPC
Then HTTP/2 came along. A lot of the code that had gone into Stubby had actually informed some of the design of SPDY, which was Google's HTTP replacement. Everyone went, "Google Chrome, Google running the internet? That's not good." Then, "Actually, no, that is quite good. Ok, let's adopt that." SPDY became HTTP/2, which meant that the internet protocol that everyone was using now had an awful lot of the code that had come out of Stubby, to do with multiplexing, passing messages backwards and forwards, and maintaining persistent connections. HTTP/2 has all that stuff. Google rebuilt Stubby on top of HTTP/2, and they called it gRPC. The G does not stand for Google. No, it doesn't. It stands for a different thing with every different version of gRPC that comes out. It has stood for glorious, ginormous, grand, general, but definitely not ever Google. gRPC is binary. It's fast. It runs over HTTP/2. It was open-sourced by Google in 2015. Now we can all use that. I'm going to show you a bit of gRPC in a moment.
GraphQL
The other thing that we've got now is GraphQL. I think somebody is going to be talking about GraphQL today. GraphQL, it's an evolution of HTTP APIs, works particularly well for browser clients talking to servers. One of the issues with a standard RESTful API is you might go to GitHub, and say, "I want to know who owns that repo." GitHub will say, "Here is all the information about that repo." You just throw most of it away and go, "This is Dave's repo. Now I know who to go and hit." GraphQL attempts to address that. GraphQL was open-sourced by Facebook in 2015, and is now the standard way of interacting with the Facebook API. It works really well for Facebook. Because if you just want to know someone's relationship status, which let's face it, is what most people use Facebook for. You can just send a request to Facebook and say, "What's that person's relationship status?" It will just send that one word back. GraphQL is very good for public APIs, for allowing people to request exactly the shape of the data that they want, and to express complicated queries. You can send a request that says, "Give me all the people in the staff database whose job title is something to do with HR".
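As a rough illustration of that "exactly the shape of the data you want" idea, a GraphQL query for the staff example might look something like this (the schema, field names, and filter argument are entirely hypothetical):

    query StaffWithHrTitles {
      staff(jobTitleContains: "HR") {
        name
        jobTitle
        email
      }
    }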
I believe the idea with GraphQL is you could tell it to serialize as XML, but everybody does it with JSON. It's very efficient, because you get to specify which fields you want. It's very flexible for the people who are consuming it. If you are building public APIs, and you want to give people that flexibility, then GraphQL is probably the current state of the art.
RSocket
There's also something new called RSocket, which I've been watching with interest. RSocket came out of Netflix in 2015. Who does reactive programming, so observable streams, or Rx, or Angular? If you're using Angular, you're doing reactive programming whether you know it or not, because underneath Angular it's all RxJS and everything else. RSocket takes that idea of reactive programming and applies it over your network. It's a lower-level protocol than gRPC: it's OSI Layer 5 or 6, whereas gRPC, because it's over HTTP/2, is OSI Layer 7. With RSocket, you create a stream between two machines. You can send messages in one direction or both directions over that stream. When you're dealing with those messages at each end, it's up to you how you deal with them. You can use a reactive framework, an observable, or something else. RSocket uses reactive stream semantics. It's lower level, and there is an RSocket RPC framework on top of that, which turns it into something like gRPC.
WCF. For the last year, in my spare time, I have been building a product called Visual ReCode. Microsoft announced at Build last year that they had finished adding new features to the .NET Framework, and that .NET Core, which is the new open-source, cross-platform version of .NET, was the future; the next version of .NET Core would be called just .NET, .NET 5. Thousands of people in enterprises who had built massive distributed systems using WCF said, "You haven't ported WCF to .NET Core." Microsoft said, "No, we haven't. Use gRPC." The world went, "We can't use gRPC, everything's in WCF." WCF is very structured. You have interfaces, and contract declarations, and everything else.
I did an experiment to see whether I could use Roslyn to read the WCF code and generate the equivalent gRPC code. It turned out I could. Now that's a Visual Studio extension, and it's going on sale hopefully this week. I've just finished the trial thing. Writing trial mode in software is really annoying; "I'm going to cripple my software until you pay for it" feels very mean. The thing with WCF is it will allow interop with other systems written in Java, or Python, or C, or C++, but only if you used SOAP. The NETTCP encoding was proprietary and only worked between .NET systems. It meant that you had to write your server in C# or VB.NET. There was not a lot of flexibility in there. You could use WCF to generate a client to talk to another server that was written in Java, as long as it published a WSDL file. You could only run your WCF services and your NETTCP WCF clients on Windows, because this was 2000s Microsoft. It generated stub classes either from WSDL or, if you were using NETTCP, it would just go in and look at the code.
gRPC is the modern alternative and the one that Microsoft are recommending. Its wire format defaults to Protobuf, but you can customize it. If you want to pass MessagePack messages over gRPC instead of Protobuf, you can do that. Don't, because it's insane and Protobuf is fine. Why would you want to use anything else? But you can if you want to. gRPC supports C++, and Java, and Python. It's completely cross-platform. All of these systems can interop with each other. For all of them, you can generate a client, and you can generate the base classes for a server, using the Protobuf compiler, or for some languages, there are third party open-source compilers.
Down the left-hand side are the officially supported ones. If you go to the gRPC repository on GitHub, all those languages have official first party support. Then down the right-hand side, you've got all the third party people. There's a Rust implementation, a Haskell implementation. There's even a Perl implementation, if you really must. I believe it's just one massive regular expression. Don't ask whether it works in Perl 6.
WCF Example
This is a WCF contract. With WCF, the whole idea was that people who knew C# should be able to do everything in C# and never have to learn anything else, which described C# programmers really well. That's what we were used to; we've got a lot better. No really, we have. You had data contracts. These define your messages. You had data contracts there, like that. Then you had service contracts. Your service contract was literally a C# interface. Then you would create an implementation of that interface. We got service contract, and operation contract, and everything else. Everything outside of the WCF code was actually just your business logic. You would map it up to that.
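The contract slide isn't reproduced in the transcript. A minimal sketch of the kind of WCF contract being described might look like this; the type and member names are hypothetical, but the attributes are the standard ones from System.Runtime.Serialization and System.ServiceModel:

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Data contracts define the messages
    [DataContract]
    public class Book
    {
        [DataMember] public string Title { get; set; }
        [DataMember] public string Author { get; set; }
        [DataMember] public string Description { get; set; }
        [DataMember] public decimal Price { get; set; }
    }

    // The service contract is literally a C# interface
    [ServiceContract]
    public interface IBookService
    {
        [OperationContract]
        Book GetBook(int id);
    }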
Fundamentally, when you define an API, 90% of the code in that API is going to be your code that actually does stuff. Only 5% of it is going to be the bit that actually either communicates with another API, or exposes your service as an API. If you mix those two things together, then you're doing it wrong, which I think is the big takeaway. WCF made it very easy not to mix those two things together, which made it very easy to write a system that would take that WCF code and turn it into the equivalent Protobuf code. That is literally the exact Protobuf representation of the service contract that you saw previously. I have successfully managed to write a system that takes a WCF API and turns it into a gRPC API. Having done that, I'm moving on to taking a WCF API, and turning it into a JSON API with Swagger, and everything else. I'm doing a panel later on where I believe we'll be having a brief discussion on this problem of API standards changing over time and how you can migrate from one to the other with minimal disruption to your business.
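For a sense of what that translation produces, a gRPC equivalent of a contract like the one sketched above might come out roughly as the following proto file. This is a hypothetical sketch, not the tool's literal output:

    // Hypothetical gRPC translation of the book service contract sketched above
    syntax = "proto3";

    package books;

    service BookService {
      // Each WCF operation maps to an rpc with request and reply messages
      rpc GetBook (GetBookRequest) returns (BookReply);
    }

    message GetBookRequest {
      int32 id = 1;
    }

    message BookReply {
      string title = 1;
      string author = 2;
      string description = 3;
      double price = 4;
    }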
Future of the API
The future of the API. The things that are going to be affecting this in the very near future: we now have 5G networks, and those are starting to roll out across the world. Mobile devices: we're going to have more connectivity, more services being used, more APIs being built and consumed. Satellite internet: Elon Musk is gradually blotting out the night sky with his satellites, which are going to introduce latency issues and everything else. We've got a whole swathe of new devices coming along. I've got a smartwatch on here that's talking to APIs on my phone over Bluetooth. Mixed reality: we're all going to be walking around with augmented reality sunglasses on in 10 years' time, and those are going to be constantly communicating with various servers to overlay advertisements on the real world. We've got voice APIs: when you talk to Alexa or Google Home, you're effectively interacting with an API using your voice. We've got the Internet of Things, which is lots of tiny little devices with very low power, running on batteries the size of a pound coin, and we have those talking over APIs to the cloud. That's something we've got to deal with as well.
This is where we are. This is where we're going to be going. If you're not building APIs right this moment, you are probably going to be very soon because it is becoming the standard way. Someone will create a user interface, but the majority of us will be creating APIs for those user interfaces to consume. That is the end of my brief history of the future of the API.