
Cloud Native Computing Foundation Graduation of CloudEvents: Q&A with Clemens Vasters


Earlier this year, the Cloud Native Computing Foundation (CNCF) announced that CloudEvents had graduated. CloudEvents is a specification designed to expose event metadata in a standardized manner, which helps to ensure interoperability across platforms, services, and systems.

CNCF CloudEvents is the only event metadata model in the IT industry that is bound to all major messaging protocols and encodings. It allows events to be transferred without abstracting away any of the respective protocols' capabilities while still enabling the events to be moved through mixed protocol routes without losing metadata information.

Overview of CNCF CloudEvents (Source: LinkedIn Post)

InfoQ spoke with Clemens Vasters, a principal architect for messaging and stream processing at Microsoft and one of the drivers behind CloudEvents.

InfoQ: Could you share some insights into CloudEvents' journey from its inception to its graduation by the CNCF?

Clemens Vasters:

The project started in a Seattle meeting room in late 2017, when a bunch of folks got together on an initiative from Google with the goal of finding common ground on how interoperable eventing might look. As it goes with efforts involving many big companies, there were some birthing pains around the governance model and figuring out the scope, but those discussions also helped build trust and get into a rhythm. We eventually landed on a consensus on two core principles: first, define the smallest set of rules that is still useful. Second, don't invent anything that has already been invented - and there, specifically, don't invent new protocols or encodings; integrate with what exists.

Some of the working group members still had vivid memories of the rise and fall of the SOAP and WS-* standardization efforts and the ensuing SOAP vs. REST debates. We were aware that there were some parallels in our aim to define an envelope model for event data and very intentionally looked to avoid some of the mistakes made back then. SOAP/WS-* bet on a single data encoding (XML) and tried to abstract application protocols into pure transport channels with new semantics layered on top, including - fatally - end-to-end security at its level.

In CloudEvents, we made opposite decisions in all those cases. We believe that users should be able to express events and event data in an encoding of their choosing, and we, therefore, have a minimal abstract type system. You can express a CloudEvent "on the wire" as a self-contained datagram that is encoded as you prefer, and we have formal "format" specifications for JSON, XML, Apache Avro, Google Protobuf, and AMQP encoding. We call these self-contained datagrams "structured events." Alternatively, and that is the smoothest way to add CloudEvents support to existing apps, you can map a CloudEvent directly onto the message model of an existing application protocol, whereby the CloudEvents metadata attributes become extension headers of that protocol. We have HTTP, MQTT, AMQP, NATS, and Kafka bindings, and there are further vendor-specific bindings. That means you can leverage all the strengths and features of the protocol/platform you are using and still transport a standardized event.
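The difference between the two modes described above can be sketched concretely. The following is an illustrative example (the event type, source, and payload are hypothetical); it follows the published CloudEvents JSON format and HTTP protocol binding, where structured mode carries the whole event as a JSON datagram and binary mode maps the metadata attributes onto `ce-`-prefixed HTTP headers:

```python
import json

# Structured mode: the event is a self-contained JSON datagram,
# signaled by the "application/cloudevents+json" media type.
structured_headers = {"Content-Type": "application/cloudevents+json"}
structured_body = json.dumps({
    "specversion": "1.0",
    "type": "com.example.order.created",   # hypothetical event type
    "source": "/orders",                   # hypothetical source
    "id": "A234-1234-1234",
    "time": "2024-01-25T09:00:00Z",
    "datacontenttype": "application/json",
    "data": {"orderId": 42},               # hypothetical payload
})

# Binary mode: metadata moves into protocol headers ("ce-" prefix
# in the HTTP binding) and the body carries only the event data.
binary_headers = {
    "Content-Type": "application/json",
    "ce-specversion": "1.0",
    "ce-type": "com.example.order.created",
    "ce-source": "/orders",
    "ce-id": "A234-1234-1234",
    "ce-time": "2024-01-25T09:00:00Z",
}
binary_body = json.dumps({"orderId": 42})
```

In binary mode, an existing HTTP consumer that knows nothing about CloudEvents still receives exactly the payload it always did; the metadata rides alongside as ordinary headers.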

InfoQ: What considerations and principles guided the development and design of the CloudEvents specification, particularly in ensuring interoperability among different event-routing protocols like MQTT, HTTP, Kafka, and AMQP?

Vasters:
Because events are increasingly routed via multiple hops - starting at a device that sends them via MQTT or HTTP, then being copied over to Kafka, and then moved into an AMQP queue - we paid special attention to ensuring that an event can always be mapped from and to the native protocol messages and the structured formats without information loss or ambiguity. Some decisions, like CloudEvents attribute names not allowing separators and only lower-case Latin characters, are simply the result of a thorough analysis of the interoperable character set across all those options.
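The naming rule mentioned above can be captured in a few lines. This is a minimal sketch, assuming the published rule that attribute names consist only of lowercase Latin letters and digits, with no separator characters, precisely so that every bound protocol can carry them as-is:

```python
import re

# Attribute names: lowercase Latin letters and digits only, no
# separators - the intersection of what HTTP, MQTT, AMQP, NATS,
# and Kafka header/property names can all represent safely.
ATTR_NAME = re.compile(r"^[a-z0-9]+$")

def is_valid_attribute_name(name: str) -> bool:
    """Check a candidate CloudEvents attribute name against the rule."""
    return bool(ATTR_NAME.match(name))
```

So `datacontenttype` is a valid name, while `data-content-type` or `DataContentType` would not survive every protocol mapping unchanged.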

Eventually, we landed on a set of metadata attributes for a CloudEvent, each answering one of the following questions:

  • What kind is it? "type"
  • Where is it from? "source"
  • What is it about? "subject"
  • Which event is it? "id"
  • When was it raised? "time"
  • How is the event data encoded? "datacontenttype"
  • What schema for this content type does the event data conform to? "dataschema"
  • Which version of CloudEvents is it? "specversion"

The current version of the specification is "1.0," and after finishing that version, we are now focusing on extensions to that core and on further formats and bindings. We are explicitly avoiding a "2.0" in order to protect the core spec, so that it remains a reliable foundation for everyone.

Patience and stability are essential in all standards-setting work, and the CNCF graduation shows that this patience pays off.

InfoQ: How has the industry's reception of CloudEvents been since achieving this milestone?

Vasters:
The adopter gallery on the homepage shows some of the most prominent platform users who have adopted CloudEvents. In my work at Microsoft, I see more and more enterprise customers incorporating CloudEvents in their designs even before they contact us to discuss aspects of their solutions, which is a great sign. For Microsoft, I can say that CloudEvents is the event model we will generally converge on for events across all platforms, where that hasn't happened already.

InfoQ: How do you envision the continued growth and evolution of CloudEvents within these ecosystems?

Vasters:
I see CloudEvents as a foundation for an interoperable ecosystem for event-driven applications.

  • The next step in that journey is a metadata model for declaring CloudEvents and their payloads and associating those CloudEvent declarations with application endpoints. The goal is for an event producer to be able to declare precisely, ahead of time, what events it may raise so that applications can be built on that declaration. We want event flows to become "type-safe" and enable consumers to understand which types of events they can expect from a stream or topic - a level of type safety comparable to what generics and templates brought to collections in popular programming languages.
  • We are a couple of years into this effort, which is called "xRegistry", and we ended up defining a very generic, version-aware, extensible metadata registry model as a byproduct of sorts, which has the notable characteristic that it provides complete symmetry between a document format and a resource-oriented API. Here, as with CloudEvents, we define an abstract model. The API is currently projected into OpenAPI, and the document format is expressed in JSON and Avro schemas. We expect to have an XML representation for the document format, and it's absolutely feasible to express the API in an RPC binding or some other fashion.
  • The concrete registries defined in xRegistry are: a version-aware schema registry for serialization and validation schemas (JSON Schema, Avro Schema, Protos, etc.); a message metadata registry that can declare CloudEvents and/or templates for MQTT, AMQP, Kafka, NATS, and HTTP messages, with their payloads bound to the schema registry; and an endpoint registry that can catalog abstract and concrete application network endpoints bound to the message-definition registry. We have sketches for another registry for API contract definition documents like OpenAPI and AsyncAPI.

As with everything we've done in this working group, the principle is to invent only what needs to be invented, but to be very thorough in the work we choose to do. We are also very deliberate about what we scope out. We don't think it's time to standardize the relationships between event channels before we can precisely describe a single event channel. That's why we are leaving the higher-level contract models - the ones that would state what might come out of channel B if you send an event into channel A - alone for now.

I think it would be cool to eventually have a formal contract model that reflects event-flow activity diagrams across several pillars. There is some old prior art in the ITU for this, but we're not there yet.

The LF AsyncAPI effort provides a simple contract model for event flows from the perspective of immediately connected parties. The prototypical code generator we use to validate the spec work can generate templated AsyncAPI documents and OpenAPI documents from an endpoint or message group definition in xRegistry. We think of those efforts as complementary, with xRegistry providing a foundation.
