Using OpenAPI to Build Smart APIs for Dumb Machines

Key Takeaways

  • API description specs like OpenAPI are a critical tool for shifting the burden of managing, documenting, and executing APIs off developers and onto machines as much as possible.
  • OpenAPI 3.0 now allows for additional expressivity that can let machines do even more useful work for us.
  • OpenAPI can drive powerful testing automation, it can be used to produce realistic mocks, and it can even generate native bindings that abstract away complexity from the developer.
  • There are underutilized benefits of OpenAPI, like links and callbacks, that you can leverage to get developers out of docs and discovering through code.

This is the API era. Even non-tech companies (if such a thing still exists) treat APIs as a key product. Increasingly, companies use APIs as an organizing thesis, the basic unit by which different teams share their work and communicate. Many seek to emulate the success of Amazon, whose relentless ascent has been fueled by APIs, internal and external. In the 2020 remake of The Graduate, the one word of advice to a digitally recreated young Dustin Hoffman contemplating his future would be “APIs”. So we’d expect OpenAPI to be a chocolate-peanut butter blend of two things we love (mmmm… open). Unfortunately, this is the most compelling one-line description of OpenAPI I could dig up: “a specification for machine-readable interface files”. Underlying that snoozer of a tagline is some extremely useful and pragmatic tech. Yes, OpenAPI lets you describe your API in a way that machines can consume, but what the machines can then do is incredibly useful both for the teams building APIs and for the software developers who use them.

Eager learners

When I was a kid, API references were bound in books, and I would devour them. The Be Developer’s Guide, Palm Programming, Java 3D API Specification—yes, yes, and yes. Tim O’Reilly, take my money. Volumes like those were how you learned, not just generally about the systems or platforms you wanted to manipulate, but also the nitty-gritty details of how to make it happen: the API reference. That material has predominantly moved online, and we’ve realized it takes a village to teach even eager learners. Educating the folks we want to use the APIs we build is just as important as building them.

The term “API” covers a lot of ground, so to home in a bit, I’m going to focus exclusively on HTTP-based APIs. I’d call it REST—and many do, incorrectly—but REST is actually narrower, a subset of web APIs. This isn’t to minimize the importance of other APIs (like the ones you might find in an SDK), but web APIs are proliferating at an unprecedented rate, and the APIs in our private networks look more and more like the APIs for cloud services on the open Internet. I’m also not talking about antiques like WSDL, or the new hotness like GraphQL (though I’ll touch on it later). Just the plain Jane APIs that almost every SaaS vendor publishes.

Rather than letting the interface to an API live de facto in the code, many developers have latched onto the need for a programmatic description that can feed into docs, registration, code gen, and so on. OpenAPI isn’t the only spec for describing APIs, but it is the one that seems to be gaining prominence. It started life as Swagger and was rebranded OpenAPI with its donation to the OpenAPI Initiative (thank you, SmartBear!). RAML and API Blueprint have their own adherents (though they seem to be in decline). Other folks like AWS, Google, and Palantir use their own API specs because they predate those standards, had different requirements, or found even opinionated specs like OpenAPI insufficiently opinionated. I’ll focus on OpenAPI here because its surging popularity has spawned tons of tooling (including our own SQL layer for APIs).

The act of describing an API in OpenAPI is the first step in the pedagogical process. Yes, documentation for humans to read is one obvious output, but OpenAPI also lets us educate machines about the use of our APIs, both to simplify things further for human consumers and to operate autonomously. As we put more and more information into OpenAPI, we can start to shift the burden from humans to the machines and tools they use. With so many APIs and so much for software developers to know, we’ve become aggressively lazy by necessity. APIs are a product; reducing friction for developers is a big deal.

OpenAPI 101

An OpenAPI file can be specified in JSON or YAML; here’s a snippet from the Strava OpenAPI doc:

  "paths": {
    "/athletes/{id}/stats": {
      "get": {
        "operationId": "getStats",
        "summary": "Get Athlete Stats",
        "description": "Returns the activity stats of an athlete.",
        "parameters": [
          {
            "name": "id",
            "in": "path",
            "description": "The identifier of the athlete. Must match the authenticated athlete.",
            "required": true,
            "type": "integer"
          },
          {
            "$ref": "#/parameters/page"
          },
          {
            "$ref": "#/parameters/perPage"
          }
        ],
        "tags": [
          "Athletes"
        ],

You can write documents with tools (or by hand) or generate them from the code in pretty much any language. Here’s an example in Java that includes OpenAPI annotations alongside JAX-RS annotations.

@GET
@Path("/{username}")
@Operation(summary = "Get user by user name",
        responses = {
                @ApiResponse(description = "The user",
                        content = @Content(mediaType = "application/json",
                                schema = @Schema(implementation = User.class))),
                @ApiResponse(responseCode = "404", description = "User not found")})
public Response getUserByName(
        @Parameter(description = "The name that needs to be fetched. Use user1 for testing.", required = true) @PathParam("username") String username)
        throws ApiException {
    User user = userData.findUserByName(username);
    if (null != user) {
        return Response.ok().entity(user).build();
    } else {
        throw new NotFoundException(404, "User not found");
    }
}

The obvious output from OpenAPI is documentation. An obvious benefit is that (with a reasonably smart workflow) the docs stay up to date. Out-of-date docs are equal parts embarrassing and enraging. But OpenAPI lets your docs get a lot fancier. Rather than just describing the ins and outs of the API, you can add useful components such as an interactive explorer, or automatically generate change logs.

Moving farther away from direct human use, OpenAPI can drive mock servers that publish APIs based on what you’ve described. Those mock APIs can respond according to the schema in the spec as well as with specific examples also encoded in the spec. This lets your internal teams start cranking before the API is fully built, and it lets external developers test programmatic use of your API without spamming your servers (or before they’ve gained authenticated access).
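For instance, the stats operation above might carry a canned response like this (a hypothetical OpenAPI 3.0-style snippet, not taken from the actual Strava spec). A mock server can replay the example verbatim, or synthesize a response that conforms to the schema:

  "responses": {
    "200": {
      "description": "Activity stats for the athlete.",
      "content": {
        "application/json": {
          "schema": { "$ref": "#/components/schemas/ActivityStats" },
          "example": {
            "all_ride_totals": { "count": 142, "distance": 4211.5 }
          }
        }
      }
    }
  }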

One of our earliest uses of OpenAPI was to generate native code bindings. In our case, we generated TypeScript bindings for our frontend to interact with our backend. This moved the API learning process out of the code, out of the docs, and into the IDE. Rather than looking at the server code to figure out how it worked, we could lean on the editor to show us the interface for various APIs, including proper types. Publishing an OpenAPI spec for your API lets developers learn about it using code exploration techniques (^] if you love Vim).
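To make that concrete, here’s a sketch of what a generated TypeScript binding for the Strava operation above might look like (all names are hypothetical, not the output of any particular generator):

// Hypothetical generated binding for GET /athletes/{id}/stats.
// The IDE can now surface the parameter names and types directly.
export interface ActivityStats {
  all_ride_totals: { count: number; distance: number };
}

export async function getStats(
  id: number,        // path parameter: must match the authenticated athlete
  page?: number,     // optional pagination parameters from the spec
  perPage?: number
): Promise<ActivityStats> {
  const query = new URLSearchParams();
  if (page !== undefined) query.set("page", String(page));
  if (perPage !== undefined) query.set("per_page", String(perPage));
  const res = await fetch(`/athletes/${id}/stats?${query}`);
  if (!res.ok) throw new Error(`getStats failed: ${res.status}`);
  return (await res.json()) as ActivityStats;
}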

OpenAPI 3.0

The OpenAPI Initiative published version 3.0 a bit over a year ago. It includes some pretty cool, but still underused, features. Most importantly, it expanded the spec’s ability to describe APIs. Swagger started out as an intentionally opinionated subset: what should be done rather than everything that could be done in terms of specifying and parameterizing APIs. It would be great to see OpenAPI continue to expand that expressivity in subsequent versions. Version 3.0 also introduced two cool new pieces of metadata: links and callbacks.

Links. Often the results of an API call can be used as the inputs for another API call. You’ve probably even seen APIs that include literal links in their response bodies. The links feature of OpenAPI adds static metadata that describes the linkages between different APIs: how you might use the output of one API as the input for another. This is inching into GraphQL territory, which explicitly encodes the linkages (edges) between entities. API spec writers and tools don’t yet seem to be making great use of links, but there are a ton of possibilities: hyperlinked docs, interactive API explorers, or automatic GraphQL translation layers. It would be great to see more OpenAPI documents using links and better tooling to specify them.
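A link lives on a response and names another operation by its operationId, mapping response fields into that operation’s parameters. Here’s a hypothetical example in the style of the Strava snippet above, declaring that the id returned by one call feeds the getStats operation:

  "responses": {
    "200": {
      "description": "The authenticated athlete.",
      "links": {
        "GetAthleteStats": {
          "operationId": "getStats",
          "parameters": {
            "id": "$response.body#/id"
          },
          "description": "Use the returned id as the id path parameter of getStats."
        }
      }
    }
  }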

Callbacks. When registering a webhook, you’ll typically pass in a URL as a string; the service will later invoke that API. OpenAPI 3.0 lets you describe the signature of the callback: the parameters it should expect. Again, this is extremely helpful for getting developers out of docs and discovering through code, where they want to be in the first place!
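For instance, a hypothetical webhook-registration operation might describe its callback like this. The runtime expression {$request.body#/callbackUrl} says the service will POST to whatever URL the consumer passed in the registration request:

  "callbacks": {
    "activityEvent": {
      "{$request.body#/callbackUrl}": {
        "post": {
          "requestBody": {
            "content": {
              "application/json": {
                "schema": { "$ref": "#/components/schemas/ActivityEvent" }
              }
            }
          },
          "responses": {
            "200": { "description": "Event received." }
          }
        }
      }
    }
  }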

More

Adoption of OpenAPI lightens the load for API creators trying to efficiently educate their users; however, it’s most effective when it lets developers not just learn better but learn less. There’s such a scramble for developers’ attention that when we have it, we want to educate them about important topics rather than automatable logistical details. There’s more that OpenAPI could do to focus on educating the machines developers use rather than the developers themselves. Consider pagination, for example. Here’s how the Google Calendar API teaches users to paginate through calendar events:

inputs:

  pageToken (string): Token specifying which result page to return. Optional.

outputs:

  nextPageToken (string): Token used to access the next page of this result. Omitted if no further results are available, in which case nextSyncToken is provided.

Now for a person—reading carefully—it is discernible that we should take the output from nextPageToken and plug it into the pageToken input for each successive call, but there’s no way in OpenAPI (or in Google’s proprietary discovery document format) to express that semantic.
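The loop that careful reading produces looks something like this (a TypeScript sketch with hypothetical names, not Google’s client library); this is exactly the kind of mechanical glue a spec-aware tool could generate for us:

// Feed each nextPageToken back in as pageToken until it is omitted.
interface EventsPage {
  items: unknown[];
  nextPageToken?: string; // omitted when there are no further results
}

async function listAllEvents(listUrl: string): Promise<unknown[]> {
  const all: unknown[] = [];
  let pageToken: string | undefined;
  do {
    const url = new URL(listUrl);
    if (pageToken !== undefined) url.searchParams.set("pageToken", pageToken);
    const res = await fetch(url);
    if (!res.ok) throw new Error(`list failed: ${res.status}`);
    const page = (await res.json()) as EventsPage;
    all.push(...page.items);
    pageToken = page.nextPageToken; // the output becomes the next input
  } while (pageToken !== undefined);
  return all;
}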

Another common pattern where OpenAPI could expand its expressivity involves operations that might take seconds or more to complete. Consider creating a virtual machine instance in a public cloud. One call kicks off the process of creating the instance; another is required to determine when the instance is up and running. From the AWS documentation for EC2 RunInstances:

An instance is ready for you to use when it's in the running state. You can check the state of your instance using DescribeInstances.
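Today every consumer writes the same polling loop by hand; here’s a sketch (hypothetical names standing in for generated bindings, not the actual AWS SDK):

// Launch an instance, then poll until it reaches the running state.
async function launchAndWait(api: {
  runInstance(): Promise<{ instanceId: string }>;
  describeInstance(id: string): Promise<{ state: string }>;
}): Promise<string> {
  const { instanceId } = await api.runInstance();
  for (;;) {
    const { state } = await api.describeInstance(instanceId);
    if (state === "running") return instanceId; // ready to use
    // Back off between polls rather than hammering the API.
    await new Promise((resolve) => setTimeout(resolve, 5000));
  }
}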

It would be a big quality-of-life improvement for developers if the API spec encoded those semantics and the tools they use could abstract them away: meticulous, machine-consumable details so the human developers can learn less.

Wrapping up

If you’ve built an API or you’re building a new one, start using an API description spec. OpenAPI is the increasingly popular choice, but if it doesn’t work for you, still pick something (and let the OpenAPI folks know why it didn’t work out). The more you stay on the beaten path, the more you’ll benefit from the ecosystem of tooling.

How do you start using OpenAPI (or the like)? The process by which you describe your API runs into a surprisingly contentious choice. The true diehards ride into battle under the rallying cry of “contract first!” To them, the API spec should be where API projects start; from there you can generate stub code, clients, etc. More pragmatically, for extant APIs there are tools for common web frameworks that extract specs from code (in some cases assisted by additional annotations or decoration).

Whether to go contract-first or code-first really depends on your own processes. For large organizations, contract-first might be the right way to get server and client teams unambiguously on the same page: while the server code is being written, client teams can write against automatically generated mocks. For groups where client and server are developed together, or where the API is the product, it may suffice for the code to be the source of truth. Similarly, for existing APIs the ship may have sailed on contract first; in those cases, the extraction tools mentioned above let you derive the OpenAPI document from the code.

Now that you’ve got the API description, the most important thing you can do is publish it. Publish it, and keep it up to date. Take full advantage of it internally: generate server stubs and client code, build automatic mocks, and, yes, generate your human-consumable docs. But by publishing the API document itself, you get to educate the machines that developers use to consume your APIs. The stubs, mocks, and tests they can generate today mean developers spend less time learning the low-level details of your API and more time building. As OpenAPI and its ecosystem of tools evolve, the consumers of your APIs get to learn less as the libraries, frameworks, and platforms they use get smarter.

About the Author

Adam Leventhal is the founder and CEO of Transposit. Prior to founding Transposit in 2016, Adam was the CTO at Delphix, where he led the development, QA, support, program management, product management, college recruiting, and advanced development teams, and scaled the engineering team from 10 to over 100. As a Staff Engineer in Solaris Kernel Development at Sun Microsystems, he was one of the three authors of DTrace, for which he received Sun’s Chairman’s Award for technical excellence in 2004, was named one of InfoWorld’s Innovators of 2005, and won top honors in the 2006 Wall Street Journal Innovation Awards. Adam has developed various debugging and post-mortem analysis facilities, and continues his work on user-land tracing to expand the breadth and depth of DTrace’s view of the system. Adam joined Sun after graduating cum laude from Brown University in 2001 with a degree in Math and Computer Science.
