At QCon London 2026, Peter Morgan introduced Tansu, an open-source, Apache Kafka-compatible messaging broker he has been building solo for the past two years. The pitch was simple and deliberately provocative: what if you kept Kafka's protocol but threw out everything else (the replication, the leader elections, the permanent broker state) and replaced it with stateless brokers that delegate durability entirely to external storage?
Morgan, who has spent over a decade building event-driven systems on top of Kafka, including platforms for Disney's MagicBand and large-scale betting systems, opened with the core assumption that separates Tansu from Kafka. Kafka achieves resilience by replicating data between brokers. Tansu assumes that storage is already durable and resilient, and builds everything from that premise.
The practical consequences are significant. Kafka brokers are what Morgan called "pets." They have identities, need extensive configuration, run 24/7 with four-gigabyte heaps, and scaling them down is so rare that he asked the audience whether anyone actually does it. One hand went up. Tansu brokers are "cattle." They carry no state, have no leaders, run in about twenty megabytes of resident memory, and can scale to zero and back up in roughly ten milliseconds. Morgan joked that the first rule of Kafka is not to mention you're running it, because everyone in your department will immediately want topics on your cluster.

In a live demo, Morgan deployed Tansu to Fly.io as a forty-megabyte statically linked binary in a from-scratch container image: no operating system, just the binary and some SSL certificates. He configured it to scale to zero using Fly's proxy, created a topic with standard Kafka CLI tools, produced a message, killed the broker, and then consumed the message. The broker woke up automatically when the consumer connected. The entire deployment ran on a 256MB machine.
The storage architecture is where Tansu gets interesting. Rather than a single built-in storage engine, it offers pluggable backends selected via a URL parameter: S3 (or compatible stores like Tigris and R2) for diskless operation, SQLite for development environments where you want to copy a single file to reset state between test runs, and Postgres for teams that want their streaming data to land directly in a database. Morgan was candid about his favourite: Postgres. The original motivation for the project, he explained, was watching data flow through Kafka topics only to end up in a database anyway, and wondering why the intermediate step was necessary.
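The backend-selection idea can be sketched in a few lines: dispatch on the scheme of a storage URL. This is a conceptual illustration only; the class names and URL forms below are hypothetical, not Tansu's actual configuration API.

```python
from urllib.parse import urlparse

# Illustrative stand-ins for pluggable storage engines (not Tansu's real types).
class S3Storage:
    def __init__(self, bucket): self.bucket = bucket       # diskless, object store

class SqliteStorage:
    def __init__(self, path): self.path = path             # single file, easy to reset

class PostgresStorage:
    def __init__(self, dsn): self.dsn = dsn                # records land in a database

def storage_from_url(url: str):
    """Pick a storage engine based on the URL scheme."""
    parsed = urlparse(url)
    if parsed.scheme == "s3":
        return S3Storage(bucket=parsed.netloc)
    if parsed.scheme == "sqlite":
        return SqliteStorage(path=parsed.path or parsed.netloc)
    if parsed.scheme == "postgres":
        return PostgresStorage(dsn=url)
    raise ValueError(f"unsupported storage scheme: {parsed.scheme}")

print(type(storage_from_url("s3://my-bucket")).__name__)
print(type(storage_from_url("postgres://localhost/tansu")).__name__)
```

The appeal of this design is that the broker's write and read paths stay identical while the durability characteristics come entirely from whatever sits behind the URL.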
The Postgres integration goes beyond using it as a store. Morgan showed how Tansu originally used sequential INSERT statements to write records, which became a bottleneck because each execution requires a round-trip response. He replaced this with Postgres's COPY FROM protocol, which streams rows into the database without waiting for individual acknowledgements: a single COPY FROM setup, a stream of COPY DATA messages, and one COPY DONE at the end. The result is substantially higher throughput for batch ingestion. And because a produce in Tansu is just an INSERT (or COPY) and a fetch is just a SELECT, the transactional outbox pattern simply disappears: you can atomically update business data and queue a message in the same database transaction using a stored procedure that Tansu provides.
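The outbox point can be shown concretely: when a produce is just a row insert, business data and the outgoing message commit in one transaction, or not at all. The sketch below uses SQLite as a stand-in for Postgres, and the table names ("orders", "topic_records") are illustrative, not Tansu's actual schema.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE topic_records (record_offset INTEGER PRIMARY KEY AUTOINCREMENT,
                                topic TEXT, value TEXT);
    INSERT INTO orders (id, status) VALUES (1, 'pending');
""")

with db:  # one transaction: both writes commit, or neither does
    db.execute("UPDATE orders SET status = 'shipped' WHERE id = 1")
    db.execute("INSERT INTO topic_records (topic, value) VALUES (?, ?)",
               ("order-events", '{"id": 1, "status": "shipped"}'))

print(db.execute("SELECT status FROM orders WHERE id = 1").fetchone()[0])  # shipped
print(db.execute("SELECT COUNT(*) FROM topic_records").fetchone()[0])      # 1
```

With Kafka as a separate system, the same guarantee requires an outbox table plus a relay process; here the "topic" and the business table live under one transactional boundary.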

Schema validation is another area where Tansu diverges from Kafka. In standard Kafka, schema enforcement relies on a separate registry and is optional at the client. In Tansu, if a topic has a schema (Avro, JSON, or Protobuf), the broker validates every record before writing it. Invalid data is rejected at the broker, not the client. Morgan described this as a deliberate trade-off: it is slower than Kafka's pass-through approach because the broker must decompress and validate each record, but it guarantees data consistency regardless of which client produces it.
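The broker-side write path Morgan described can be sketched as: decode, validate against the topic's schema, and only then append. The schema representation and helper functions below are simplified stand-ins for illustration, not Tansu's implementation.

```python
import json

# Hypothetical per-topic schema: required field names mapped to expected types.
TOPIC_SCHEMA = {"id": int, "amount": float, "currency": str}

def validate(raw: bytes, schema: dict) -> dict:
    """Decode a record and check required fields and types; raise on failure."""
    record = json.loads(raw)
    for field, expected in schema.items():
        if field not in record:
            raise ValueError(f"missing field: {field}")
        if not isinstance(record[field], expected):
            raise ValueError(f"bad type for {field}: {type(record[field]).__name__}")
    return record

def produce(raw: bytes, log: list, schema: dict = TOPIC_SCHEMA):
    """Broker-side write path: validate first, append only if valid."""
    log.append(validate(raw, schema))  # invalid data never reaches storage

log = []
produce(b'{"id": 1, "amount": 9.99, "currency": "GBP"}', log)
try:
    produce(b'{"id": "oops", "amount": 9.99, "currency": "GBP"}', log)
except ValueError as e:
    print("rejected:", e)   # rejected at the broker, not the client
print(len(log))             # 1
```

The extra decompress-and-validate step on every record is exactly the cost Morgan acknowledged; the payoff is that nothing in the log can violate the schema, no matter which client wrote it.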
That broker-side schema awareness also enables something Tansu does that Kafka cannot: writing validated data directly into open table formats. If a topic has a Protobuf schema, Tansu can automatically write records to Apache Iceberg, Delta Lake, or Parquet, creating tables, updating schemas on change, and handling metadata. In addition, Morgan comments:
It actually works for AVRO, JSON, and Protobuf. Protobuf is the "best" because it has a built-in mechanism for backwards-compatible schema changes (and the one I used in the demo), but they can all be written as Parquet/Iceberg/Delta.
A "sink topic" configuration skips the normal storage entirely and writes exclusively to the open table format, turning Tansu into a direct pipeline from Kafka-compatible producers to analytics-ready data.

Tansu can also sit in front of an existing Kafka cluster as a proxy, handling 60,000 records per second with sub-millisecond P99 latency on modest hardware: 13 megabytes of RAM on a Mac Mini.
Morgan was upfront about the gaps. SSL support is present but being reworked. There's no throttling or access control lists yet. Compaction and message deletion aren't implemented on S3. Share groups are not planned. The project is written in asynchronous Rust, Apache-licensed, and actively looking for contributors. All examples, including the Fly.io deployment demo, are available on GitHub.