
API Lifecycles, Specifications, and Standards with Kin Lane

On this episode of the podcast, Thomas Betts talks with Kin Lane about managing your API lifecycle using standards and specifications, including OpenAPI, AsyncAPI, and JSON Schema. These specifications and the tooling based on them can help reduce communication problems, by creating documentation, generating code, and automating testing.

Key Takeaways

  • The lifecycle of an API needs to be managed just as an application’s software development lifecycle is managed.
  • Specifications exist to improve communication. They describe communication at the API layer, and also help people discuss and understand the API. Both technical and non-technical stakeholders benefit from standardized API specifications.
  • OpenAPI, formerly known as Swagger, is for HTTP-based APIs. AsyncAPI is based on OpenAPI, but describes publish/subscribe contracts for an event-driven API. JSON Schema describes the structure of the data being passed in requests and responses, and is used by both OpenAPI and AsyncAPI.
  • Most organizations start using API specifications for documentation, but there are many more scenarios. Tools exist to help automate validation and testing of your API.
  • Each new version of an API should have a new specification. Because two versions of the spec can be easily diffed, all changes between versions can be clearly identified.

Transcript

Intro

Thomas Betts: Hello, and welcome to another episode of The InfoQ Podcast. I'm Thomas Betts, and today I'm joined by Kin Lane, Chief Evangelist for Postman. You may recognize him from his decade of writing as the API Evangelist. Kin has worked with companies, organizations, institutions and government agencies to find the API lifecycles that have emerged across almost every aspect of our personal and professional lives.

Kin works to evangelize and move forward OpenAPI, AsyncAPI, and JSON Schema as part of Postman Open Technologies, acknowledging the critical role these API specifications are playing in our lives. Kin, welcome to The InfoQ Podcast. 

Kin Lane: Thank you for having me, Thomas.

API Lifecycle vs. Software Development Lifecycle [00:38]

Thomas Betts: API lifecycles, why do we need to think about an API lifecycle and how is that different from just part of our usual software development lifecycle?

Kin Lane: Well, it definitely maps to what is known as the software development lifecycle right now, because APIs are very similar to what we're deploying when we deploy other applications. But there are nuances and things that have emerged that we need to think about a little differently when it comes to designing, developing, delivering and supporting APIs. Security is always a good one: web application security, firewalls, all those traditional things apply when you're delivering APIs.

But there are some nuances, like the OWASP top 10: there are ways that APIs are used where you should be considering security a little differently. And there are other stops along the API lifecycle, like documentation and testing, and testing doesn't work like it does with an application. You should be thinking in a more granular, modular, contract or specific use case-based way. So there's just a handful of nuances to the lifecycle, but it definitely maps to what we know and use today when it comes to the rest of our software.

Thomas Betts: And I think sometimes APIs are thought of, or treated, as just an afterthought of your application, something that kind of supports it. So it's really the idea of bringing that up front and saying we need to think about our API and manage the API all along the journey.

Kin Lane: Yeah. You hear a lot, "API first," is kind of the rallying cry and it's just stating that we should be prioritizing our APIs ahead of applications. Because if you think about how we got here the last 20 years is we started building websites, web applications, they got more complex and intricate. Along the way we also started delivering mobile applications and we started realizing, oh, we're using some of the same. We're building APIs. We have some redundancy here to deliver the resources we need in our web and mobile applications.

And then we realize, oh, our partners need access to some of the same things. So really API first and having a common API lifecycle really prioritizes the API and says, "Hey, let's think about the API here. Does it already exist for another application, another scenario? Can we reuse that all or partially before we go building another API?" And then we're just thinking about the security, the dependencies and the other things along the way. And then we use it in our web, mobile device applications, or other integrations.

Swagger and OpenAPI [03:04]

Thomas Betts: I think when I see API specifications such as OpenAPI or Swagger, I can't remember what the terminology is between the two, but I know people overlap them, they're very much intertwined. Maybe you can explain that. But my first encounter with that was as documentation. It was a way to say here's what my API does, and provide that to a third-party consuming it. Is it still just documentation or is that just the start and there's a whole lot more to what the specifications provide at this point?

Kin Lane: Let's start with just the basics of what is Swagger, what is OpenAPI. They are a machine-readable specification that describes the surface area of our HTTP 1.1 APIs: web APIs, REST APIs, as they're often called. And API specifications like Swagger and OpenAPI aren't anything new. We have WSDL for SOAP web services. So Swagger and OpenAPI are just that, but for web APIs. Swagger was invented in 2010 by a fellow named Tony Tam, who created Swagger so he could document his startup's API. And as you said, documentation is the number one reason.

More Than Just Documentation [04:08]

Kin Lane: A lot of folks think Swagger is documentation, when actually it was the specification. But in 2015, SmartBear, who owned the specification, the tooling and everything around it, put it into the Linux Foundation and it became OpenAPI. So Swagger is the old name of it, OpenAPI is just the new name of it. Documentation is what a lot of people use it for: to publish up-to-date, accurate and interactive documentation. So you have Swagger UI, Redoc, there's quite a few tools that use Swagger or OpenAPI to generate documentation. But it doesn't stop there.

You can use it to generate code libraries in a variety of programming languages. So when you go to the documentation for an API, you'll see PHP, Python, Ruby and Go libraries that you can use. Those can be auto-generated from the Swagger or OpenAPI specification. So documentation and code generation are the top two reasons everyone's switching and going to an OpenAPI-driven lifecycle. You can see Salesforce is doing this right now, Stripe is doing it right now, and they're all doing it for the up-to-date docs and code generation.

But you can also use your OpenAPI to secure your API: there are OWASP tools that will take your OpenAPI and scan all those endpoints for known vulnerabilities against the OWASP top 10. You can generate tests: there are a lot of tools that allow you to generate contract tests, integration tests and performance tests using your Swagger or OpenAPI. So there's a whole suite of lifecycle tooling that all uses OpenAPI as that contract that describes what your API does. So it's great to have it for docs, it's essential, but it can be used for so much more.

Improving Communication [05:53]

Thomas Betts: And you said they described the contract. And I think that's one of the fundamental things about an API is it is a contract of how to communicate into your service. And I don't know if I've said it on this podcast, but people have heard me say before that almost every problem in software comes down to a communication problem, whether that's the ability to send, receive and understand a message. It can break down whether it's client to server, or product manager to developer trying to understand requirements. Anytime you have two things trying to communicate, explaining the contract is really important.

So how can API standards and specifications help us address those failures to communicate? Is it just between the client and the server, or does it help the humans as well?

Kin Lane: The documentation is the first kind of representation of that: "Hey, we're going to generate this human-readable documentation for the humans who are onboarding and using this." So that's the first step in acknowledging that we need to communicate around how we're going to communicate. This is what the API does, here's the contract.

But OpenAPI provides the ability to describe the protocol. So HTTPS, we're using HTTP as a transport. Here's the domain or the subdomain that you should be calling and here's the variety of paths. So /V1/images or /V1/CRM, whatever the resource is. And then it describes the query parameters. It describes the headers. It describes what the request body could look like.

And then it's got the details of the response, here's what you can expect back. Here's the status codes you can expect back. Here's the media types, application/JSON, XML. And then here's the actual schema for the payload that you can expect back. And so, it's really all those details of that contract that you described and what we can expect when we use this API.
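To make that concrete, here is a minimal sketch of what such a contract can look like in OpenAPI 3.0. The /images path, the limit parameter and the response schema are hypothetical, chosen only to illustrate the pieces described above: server, path, query parameter, status code, media type and payload schema.

```yaml
openapi: 3.0.3
info:
  title: Images API                     # hypothetical example API
  version: 1.0.0
servers:
  - url: https://api.example.com/v1     # the domain (and base path) consumers should call
paths:
  /images:
    get:
      summary: List images
      parameters:
        - name: limit                   # a query parameter
          in: query
          required: false
          schema:
            type: integer
      responses:
        '200':                          # the status code you can expect back
          description: A list of images
          content:
            application/json:           # the media type of the payload
              schema:                   # JSON Schema describing the response body
                type: array
                items:
                  type: object
                  properties:
                    id:
                      type: string
                    url:
                      type: string
```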

But also, because of semantic versioning or date-based versioning, you can describe that contract over time. So here is the contract today, here was the contract yesterday, here it is for the next iteration. And so, these machine-readable contracts that produce human-readable artifacts allow us to have an ongoing conversation, but a very precise one, because we have service-level agreements, we have business arrangements between the producer and consumer here. The OpenAPI makes sure we're all on the same page when we're making these APIs available, but also as we're evolving and iterating on them.

AsyncAPI is OpenAPI For Event-Driven APIs [08:18]

Kin Lane: And then I'll add, I mean, we're talking about OpenAPI, but AsyncAPI is a sister specification to OpenAPI that's emerged in the last couple years. And it does the exact same thing, but for our event-driven APIs and multiple protocols. So we can do web sockets over TCP. We can do NATS, we can do AMQP. So it gives us a wider contract vocabulary for describing the surface area of not just our web request and response APIs, but our event-driven APIs over a mix of protocols.
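For comparison, here is a minimal, hypothetical AsyncAPI sketch. The broker address, the amqp protocol choice and the image/created channel are illustrative only, but the shape mirrors what is described above: a protocol, a channel, an operation and a JSON Schema payload.

```yaml
asyncapi: '2.6.0'
info:
  title: Image Events                   # hypothetical event-driven companion to the HTTP API
  version: 1.0.0
servers:
  production:
    url: broker.example.com
    protocol: amqp                      # could also be websockets, kafka, nats, mqtt, etc.
channels:
  image/created:                        # the topic or channel
    subscribe:                          # consumers subscribe to receive these messages
      message:
        payload:                        # described with JSON Schema, just like OpenAPI
          type: object
          properties:
            id:
              type: string
            createdAt:
              type: string
              format: date-time
```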

Better Naming Suggestions [08:54]

Thomas Betts: There's a lot there that I want to come back and touch on. But one of the old jokes is that the hardest thing in computer science is naming things. So domain-driven design advocates will say, "You should use a ubiquitous language, but that's within your bounded context." The API is the surface area of your bounded context. Do you have any recommendations for language and naming conventions that people should follow when they're talking about their API, since it is going public out to the world?

Kin Lane: Your OpenAPI, the scope, the naming, the design of your OpenAPI, is going to be a reflection of that domain, that business domain. So you'll come across a lot of really big OpenAPIs out there. They're pretty chaotic, pretty verbose, pretty messy. That's a reflection of the business operations behind those APIs. Then you come across much smaller, more precise OpenAPIs, where one API does one thing well: a microservice, a smaller API that is well designed, uses a common vocabulary for the parameters and has governance applied.

Those are the ones you can see have gone through a more domain-driven design kind of approach, because there's been a lot of planning and thinking about not just what the API should look like, but what's the domain these APIs are delivering services in. And so, you can really see that in the OpenAPI. And with a lot of the messier, more chaotic ones, the planning hasn't been there.

And so, domain-driven design, event storming, these are all parts of this planning process that will define how you use the OpenAPI specification: how you name your paths, how you name your parameters, the number of paths that are available in your OpenAPI, what properties are available for your schema.

So the contract language that you're using in your OpenAPI to describe your API is going to be very much defined by your domain-driven design discipline and practices.

Relationship Between OpenAPI, AsyncAPI, and JSON Schema [10:51]

Thomas Betts: I do want to go back. I think you mentioned two standards, OpenAPI and AsyncAPI, and there's a third that I've heard you talk about, which is JSON Schema. Can you go a little deeper into what they're all used for? And maybe start with the current state of them, since some of them have been around a lot longer than others?

Kin Lane: Yeah. So OpenAPI and Swagger have definitely been around just slightly longer than JSON Schema. But JSON Schema is really important. It's a little quieter, a little more behind the scenes, but it's been around for about a decade, I would say. And JSON Schema is what you use to describe the objects that you're passing back and forth as part of API communication. So if you're posting a JSON object, that can be described with JSON Schema. The JSON you get back can be described using JSON Schema. And JSON Schema just gives us the ability to describe the properties of that object.

What are the names of the different entries in our object? Are they text? Are they numbers? We can get pretty precise about what they are. And that's used for validation. So JSON Schema is how we describe the objects, and then we validate that they're correct: there isn't a misspelling in a parameter name, there aren't extra parameters it shouldn't have or missing parameters that it's supposed to have. JSON Schema is how we validate that.
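As a rough illustration, here is a small, hypothetical JSON Schema written in YAML. The property names are made up, but it shows the kinds of constraints used for validation: types, required properties, value limits and whether extra properties are allowed.

```yaml
$schema: "https://json-schema.org/draft/2020-12/schema"
type: object
required:
  - id
  - email                        # a missing required property fails validation
properties:
  id:
    type: integer
  email:
    type: string
  age:
    type: integer
    minimum: 0                   # we can get pretty precise about allowed values
additionalProperties: false      # extra properties the object shouldn't have also fail
```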

Now, OpenAPI is how you describe access to those objects. So OpenAPI actually uses JSON Schema to describe the request and the response payload, but OpenAPI provides a wider vocabulary to describe the paths, the query parameters and the other details of that access via HTTP for our APIs. But you'll see when you're using OpenAPI, one of the properties is called schema, and that is JSON Schema. It's a vocabulary of JSON Schema meant just for APIs, but it is JSON Schema. And that's what allows us, when an API request is made, to validate it against the OpenAPI contract. That's what makes it a contract we can test against: we can actually say, "Here's what it should be."

And then using JSON Schema we can validate that each request and each response meets that contract. When I say contract testing, that's what's being done in the technical details: we're actually validating the request and the response of every API call to make sure that they are valid. And that's how we ensure the reliability and quality of APIs.

Now, AsyncAPI is a sister specification, but it's for event-driven APIs. It also uses JSON Schema to describe the payload, but that payload is the messages that are being published to and subscribed from the API.

So instead of requests and responses, we're publishing and subscribing to different topics or channels. But those payloads, those objects that are being published or received on subscription, are described with JSON Schema. And it's the same thing, so that we can validate that what's being published or subscribed to meets the AsyncAPI contract, which can be over HTTP, HTTP 2 or 3, TCP, a wide range of protocols. But JSON Schema is central to both of those API specifications in helping us define the objects and then validate the integrity of those objects.

Maturity Level of AsyncAPI and OpenAPI [14:13]

Thomas Betts: How long has AsyncAPI been around? And where is it on the maturity scale compared to OpenAPI?

Kin Lane: 2016, going into 2017, I think is when it first emerged. They just tweeted out their first pull request on it, I forget exactly when it was, but it was around then. And it very much followed the same approach and vocabulary as OpenAPI on purpose, so that people would adopt it. But in the last year or two, it's really exploded in its adoption and usage.

And it's becoming a favorite for describing our event-driven APIs, which are used by organizations that are usually a little further along in their API lifecycle. They have more consumers, they have higher-volume API calls being made, request and response begins to fail them, and they need pub/sub, publish/subscribe, which handles a little more volume and meets their needs.

Thomas Betts: And I had Fran Mendez, who created AsyncAPI, on the podcast a few months ago. I think at the time we were recording, version two had just come out, and they were working on version three, which was going to have some breaking changes. And I know OpenAPI is a few versions ahead, but it's following a very similar structure of, okay, these are going to be the big changes, we'll do minor changes on a quarterly basis, and then you'll have breaking changes in the future that you just have to be aware of if you adopt a new spec.

Kin Lane: Yup, very thoughtful.

API Specifications Are Not a New Idea [15:32]

Thomas Betts: And you talked about how all of this is JSON-based, with JSON Schema, but these aren't new ideas. You mentioned WSDL. I remember doing WSDL. It was XML, with WS-* and all that going around it to specify all the contracts and the security. And it was a DTD or an XSD that you had to write instead of your JSON Schema.

Thomas Betts: So it seems like people liked JSON because it was schema-less, but then we're passing it back and forth and we had to define the contract. And so, you can still use JSON in your code without having that contract. But once you get to the point where two services need to talk to each other, two programmers need to talk to each other, the specification really, really becomes critical.

Kin Lane: This is nothing new and things happen in cycles in the industry. And you definitely just dated yourself, Thomas, with WS-*, as far as how long you've been in the industry.

Thomas Betts: I couldn't help it.

Kin Lane: But we're doing some things that are new and a little more inclusive. I would add that these are often described in YAML as well. Because so much of our ops world these days uses YAML, thanks to Kubernetes and others, and because we have more business stakeholders involved in the conversation, you see a lot of YAML. And your OpenAPI or AsyncAPI could very much just be described in YAML format. But oftentimes the payloads are still very much JSON or XML.

Kin Lane: I'm actually going to be doing some work with the AsyncAPI community to define XML as part of it and how we can better support XML and the whole EDI SOA realm, and make sure that these newer systems are supporting the past in a sensible way.

Thomas Betts: I know bringing up XML usually dates me pretty well. I had to work with HL7 back in the day, which was a healthcare format, and it was XML. I know they've now adopted OpenAPI specifications. Again, here's the standard way that you talk about healthcare data. And there are other industries that have published standard ways to talk about the data they have. Their own industries decide those things.

Kin Lane: Yes.

Versioning an API [17:22]

Thomas Betts: You mentioned at one point versioning. And I think this is one of those common challenges people have with their software, and especially with an API where you have external consumers and you really have to worry about backwards compatibility and breaking changes. Is there anything in the specifications that can help you bake in versioning from day one and adopt that process, so you don't have to think, "Oh, we need to introduce new things. How do we suddenly become version two, since we never even talked about version one of our product?"

Kin Lane: The most common way for folks to handle change management using versioning like this is with semantic versioning, where you have a 1.0.0, and that's a major release. Then when you release any changes that are minor changes, feature enhancements, you go to 1.1.0. If you do a patch, if you need to fix something, it'd be 1.1.1. So that last digit is just for a fix, a patch, something you're making better. That middle digit is for incremental changes. But the first one, if we went to version 2.0.0, that's a major change, signaling, "Hey, we have breaking things happening here."

And OpenAPI and AsyncAPI and JSON Schema help us communicate around this change. So an AsyncAPI or OpenAPI has a version property saying, "Here's the version of the API." Now, that may not be semantic versioning, it could be date-based versioning like Twilio's. They have a date in their URL that says this was released on this date, but that signals here's that contract at that moment in time.

When we iterate or evolve that contract through a minor, a patch or a major, we create a new OpenAPI. We put that version in there and that has the changes. And what that allows us to do then is a diff, checking the difference between those. Now we have each moment in time, each contract; we can take two or more moments and do a diff between them, and then validate: are these truly only minor changes, or is there a breaking change in here that we need to think about?
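As a hedged sketch of what that looks like in practice, here is a hypothetical fragment of a new OpenAPI with the changes marked in comments; diffing it against the previous version of the contract would surface exactly these lines.

```yaml
info:
  title: Images API
  version: 2.0.0                 # was 1.0.0: the major bump signals a breaking change
paths:
  /images:
    get:
      parameters:
        - name: limit
          in: query
          required: true         # was false: this is the breaking change the diff surfaces
          schema:
            type: integer
```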

And then the OpenAPI... And again, back to our earlier discussion around whether it's just used for documentation: this process is how you can generate a roadmap. You see companies doing the diff: we're preparing a new OpenAPI, here's our old one that's currently in use, we do a diff, and that listing of changes is our change log. That's what we're actually producing. And so, you're able to use it to manage your change over time as a team: does this match what our roadmap says? Did we make the changes we said? Did anything get snuck in? And then more importantly, as you said, is anything a breaking change?

And so, these contracts, these specifications allow us to better manage and better communicate change. But those specifications, as you alluded to, also use versioning to communicate change. Swagger is 2.0, OpenAPI is 3.0. So when it went to a major release, it became OpenAPI. And AsyncAPI just finished their 2.0 and they're working on their 3.0 specification right now. So this is how we manage change for the specifications, but also for the APIs they describe.

Thomas Betts: And so, if I'm implementing my API and I'm publishing it out there, the version I'm working on is version two and version one's already out. Is it standard to have that version number be in the URL or something, to distinguish whether you're calling the old version versus the new one?

Kin Lane: In the API space there's a lot of dogma and belief systems. And there's a group of people who believe that you should not have your version in the URL or in the path. But the reality on the ground is that the majority of APIs do put the version in the path. So you'll see api.twitter.com/v1, and then the resources: tweets, accounts. And that's a pretty common way that API providers do it.

Now, the second most common way, and the way that many API folks say you should do it to be more in line with how APIs are developed, is in the header: your version should be in the header, because that's describing the transport and how your APIs are working.
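The two conventions described above might look roughly like this in an OpenAPI document; the URL and the Api-Version header name are hypothetical examples, not a recommendation.

```yaml
# Option 1: version in the URL path (the most common approach)
servers:
  - url: https://api.example.com/v1

# Option 2: version communicated in a request header instead
paths:
  /images:
    get:
      parameters:
        - name: Api-Version      # hypothetical header name
          in: header
          required: false
          schema:
            type: string
            example: "2021-09-01"
```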

And so, either way, in my book, stick with what makes sense to your team, stick with what makes sense to your community. But there's actually another group of people who believe versioning isn't necessary. So I'm not saying you have to version, I'm saying it's sensible to version. It allows you to communicate change within your team and with your consumers. And if you're going to do it, make sure you communicate it; it's kind of the north star for how you iterate, evolve and communicate around change that occurs.

Thomas Betts: I think you just keep going back to my point that communication is usually the thing that falls down. It's always a communication problem. So someone's calling you and they haven't updated to the latest spec and you've moved on. That could be one reason why.

Kin Lane: That's the fundamentals of breaking changes: breaking changes aren't bad or good in themselves. They are only bad or good depending on whether the communication is lacking or the communication is there. So it's precisely what you said, communication is really what it's all about.

Validation and Testing [22:36]

Thomas Betts: Now, you mentioned validation a few times, but I want to talk about testing. So moving on from just validating that the API does what it says it's going to do, what are some of the good use cases for testing the API? Is it something that clients and servers can implement and get benefit out of?

Kin Lane: So the OpenAPI can be used, as we said, for documentation or code gen. Those are the most visible, consumer-facing aspects of the API lifecycle, so that's what OpenAPI and these specifications are mostly known for. But that contract also says what response status codes I can expect from an API. And that allows me to test for those and auto-generate tests to describe that. So for every path, I should be able to use the OpenAPI to generate a test for a 200 status code, the happy path, the successful one.

I should be able to test a 400, a 404 or a 500, the unhappy paths. And so, these contracts allow us to do that. The first test you should be doing is, am I getting back the status code that I expect? And this is how you test for uptime. So every API endpoint should have a basic happy path 200 status code test, and you should be running that on a schedule. Depending on the type of API, how dependent things are on it, you may want to test it every hour, every couple of minutes, or every week, just to make sure, "Hey, you're still up and I'm getting a 200 status code." That's the bare minimum testing you should be doing for every API. And it can all be generated using the OpenAPI.
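Here is a sketch of the kind of responses section a test generator would work from; the /images/{id} path and the descriptions are hypothetical, but each documented status code is something a tool can turn into a happy-path or unhappy-path test.

```yaml
paths:
  /images/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
      responses:
        '200':                   # happy path, the basic uptime check
          description: The image was found
        '404':                   # unhappy paths a test generator can also exercise
          description: No image with that id exists
        '500':
          description: Something went wrong on the server
```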

Now, the second layer of that is contract testing. So it's not just, "Hey, I got a 200 status code back. Hello." It may actually have an empty payload and there's nothing there. Contract tests allow us to say, "Oh, not only did we get a 200 status code back, we got back a valid payload." Again, using the JSON Schema described in our contract, we can confirm that not only did we get a payload back, it's a valid payload as described in the contract of the API. So we're actually validating the business contract, technically, but it is the business contract for this API.

And we can bake that into the CI/CD pipeline. We can do that as a scheduled monitor that allows anyone to validate that that contract is being upheld at any moment in time from any cloud region.

Then from there you can go into performance testing. So hit those endpoints and go, "Hey, it handles a certain load. We get back a certain response in a number of milliseconds." And further describing that service level agreement that we have defined in our contract. So that's kind of the base level.

I would say, next, you should look at security testing. Every API should have a standard set of security tests. I mentioned OWASP. And those security tests should also be in your CI/CD pipeline and run on a schedule to make sure nothing has changed. And by having this basic status code, contract, performance and security testing baked into your CI/CD pipeline, nothing should get through, no new changes, nothing, unless it meets that baseline of testing. And then you have that added layer of scheduling these to make sure nothing changes while it's in production that would cause something to happen.
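As one hedged example of baking those checks into a pipeline, here is a minimal GitHub Actions-style sketch. The two scripts are placeholders standing in for whichever linting and contract/security testing tools a team actually uses; they are not real commands.

```yaml
# Hypothetical CI job that refuses to ship changes unless the contract holds
name: api-contract-gate
on: [pull_request]

jobs:
  contract-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Validate the OpenAPI document itself
        run: ./scripts/lint-openapi.sh openapi.yaml        # placeholder script
      - name: Run status code, contract and security tests
        run: ./scripts/run-contract-tests.sh openapi.yaml  # placeholder for your tool of choice
```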

Thomas Betts: And then if someone is consuming your API can they use the same contract to generate a local test harness, basically like dependency injection to say, "I don't actually have that service available, but I can create a valid mock scenario really easily because of the spec?"

Kin Lane: These are further stops down the API lifecycle, different things folks can do with the OpenAPI. This is why I encourage API producers to share their OpenAPIs. Because then a consumer can take that and generate a mock. You can generate a mock server from that OpenAPI because it has all the details of the API and what you should get back. And then you could extend the testing to include user acceptance testing. And you mentioned healthcare and HL7: you can do this to make sure HL7 FHIR specification providers meet their needs.

And anyone in the ecosystem can take these tests and run them and then have the evidence and say, "Hey, here's why it wasn't working. Or here's where it fell down. Here's the time period that it fell down." And so, it takes testing out of the hands of the producers and puts it into the hands of the consumers as well.

API Specifications For Non-Technical Stakeholders [27:03]

Thomas Betts: We've been fairly focused on the technical audience, the programmers and the people who are supporting these systems. Do the specifications help non-technical users? Is it something you can use to communicate back up to the business and say here's what this does? And can the documentation be generated in such a way that it's human readable? YAML is great as an option to JSON, but it's no more human readable to me than the other; there are better documentation tools out there, though.

Kin Lane: I agree, YAML really doesn't get you all the way there. It's widened it a little bit, included more business stakeholders in that conversation, so I'll give it credit there. But beyond the HTML documentation, there are ways you can generate more usable, visual, hands-on ways of communicating what an API does. You can see there's some experimentation going on in the AsyncAPI community around this: how do you use D3.js and other kinds of visual tools to visualize the events that happen across a complex system?

And all they're doing is taking the AsyncAPI, looping through it and finding all the events that get fired off. And then visualizing those and saying, "All right, here's the meaningful events that happen across your enterprise infrastructure." And I've seen similar things for request and response APIs using OpenAPI that create visual charts or mind maps or other visuals that are familiar to business folks to describe here's your digital resources or the digital capabilities that exist across your enterprise.

And so, OpenAPI and AsyncAPI and JSON Schema are being used to build catalogs that articulate to business. Here's your catalog of resources and digital capabilities. Here's what your enterprise organization is capable of. And then the lifecycle tooling, the tests, the documentation, other things can be used to say, "Here's the overall health and availability of that entire complex system." And there's a lot of work going on right now to help business leadership understand, see that landscape because over the last 20 years we've created a lot of different APIs.

Most enterprise organizations have hundreds if not thousands of APIs and microservices. And from a business standpoint to be able to see that landscape, I'm doing air quotes if people are just listening. Seeing that landscape is very difficult. And so, OpenAPI and AsyncAPI are how we're making that visible so that business leadership can better understand what exists, what needs to be evolved, changed, deprecated to ultimately move the enterprise forward.

What’s Coming For API Lifecycles [29:32]

Thomas Betts: I like that you're talking about things that are just in development now for visualization. Just to wrap up, what else do you see lying ahead for the specifications? One year from now, or three to five years from now, what are going to be the big things that come out of having a standardized, specification-driven lifecycle?

Kin Lane: Well, I think more automation and more repeatability, and through that, we're able to scale quicker. But some of the tooling I'm seeing emerge right now says, here's what your teams say they've produced, they've created OpenAPIs and AsyncAPIs for their APIs and they have documentation. But then, based upon actual traffic, it's auto-generating and adding to those OpenAPIs and saying, "Here's what your team says, and here's the reality on the ground in our crazy enterprise worlds." And so, API specifications are allowing us to map that landscape in real time and then document it, mock it, test it like we already talked about.

But then you start thinking about the machine learning capabilities on top of that. So how do we identify vulnerabilities, weak authentication, behaviors and usage that don't fit normal patterns? Either good or bad: where's the innovation happening, or where are the vulnerabilities and the exploitations happening? So on top of that infrastructure that's OpenAPI driven, it's allowing us to better discover what our enterprise resources and capabilities are, and then use ML, machine learning, to better inform ourselves. Because that landscape is getting more sprawling.

I mean, think about it: our little cities are turning into big cities and it's just APIs on top of APIs. You have to be able to see it and make sense of it, and machine learning is really going to help us there: secure it, make it more reliable and identify where the problems are. So there's a lot of opportunities around discovery and machine learning. You're also seeing more regulation emerge in this landscape. So that's coming. Right now, with PSD2, banking and finance are seeing a lot of API regulations.

In healthcare in the US, the Centers for Medicare and Medicaid Services just last year mandated that if you do business in Medicaid or Medicare you have to have FHIR APIs. That's required. And you're going to see that in automotive, in transit, and in other industries. So you're going to see more API regulations coming down the pipe. And as you mentioned, there's an OpenAPI for the FHIR specification, and that's how we test it and how we validate it. And so, this is going to be how we scale these things at a global or city scale when it comes to our infrastructure.

Closing [32:09]

Thomas Betts: We'll have links to a lot of the stuff we talked about in the show notes. Is there anywhere people should go if they want to learn more about you, Kin?

Kin Lane: API Evangelist, or just type my name, Kin Lane, K-I-N L-A-N-E. Pretty unique name, it's easy to find information about me. If you want to tune into some of my work around the lifecycle and what I'm doing at Postman, you can go to apis.how, so apis dot H-O-W. That's my lifecycle work and my governance work. We didn't even touch on governance here, how we consistently deliver APIs across all of this. Maybe that's a conversation for another future podcast that we can do.

Thomas Betts: That sounds great. Well, thanks again, Kin, for joining me today. And listeners, we hope you'll join us again for another episode of The InfoQ Podcast.

