
Gunnar Morling on Change Data Capture and Debezium

Today, on The InfoQ Podcast, Wes Reisz talks with Gunnar Morling. Gunnar is a software engineer at Red Hat and leads the Debezium project. Debezium is an open-source distributed platform for change data capture (CDC). On the show, the two discuss the project and many of its use cases. Additionally, topics covered on the podcast include bootstrapping, configuration, challenges, debugging, and operational modes. The show wraps with the long-term strategic goals for the project.

Key Takeaways

  • CDC is a set of software design patterns used to react to changing data in a data store. Used for things like internal changelogs, integrations, replication, and event streaming, CDC can be implemented leveraging queries or against the DB transaction log. Debezium leverages the transaction log to implement CDC and is extremely performant.
  • Debezium has mature source connectors for MySQL, Postgres, SQL Server, and MongoDB. In addition, there are incubating connectors for Cassandra, Oracle, and DB2. A community sink connector has been created for Elasticsearch.
  • In a standard deployment, Debezium leverages a Kafka cluster by deploying connectors into Kafka Connect. The connectors establish a connection to the source database and then write changes to a Kafka topic.
  • Debezium can be run in embedded mode, in which Debezium is used as a Java library inside your own application and change events are delivered to a callback. This library approach allows Debezium to feed other systems such as AWS Kinesis or Azure Event Hubs. Going forward, there are plans for a ready-made standalone Debezium runtime.
  • Out of the box, Debezium has a one-to-one mapping between tables and Kafka topics. This default exposes the internal table structure to the outside world. One way to avoid that is the Outbox Pattern: writes to your normal business tables also insert a record into a separate outbox table within the same transaction, and change events are then published to Kafka from that outbox table.

Show Notes

What is Debezium and where does the name come from? -

  • 01:15 According to the FAQ, it sounds like an element - plutonium, neptunium, uranium - and this one is Debezium, which has DB in it, like databases; it connects to databases and is a cool element.

What does Debezium give you that a stored procedure doesn't? -

  • 02:00 The primary benefit is to react to data changes with very low latency.
  • 02:05 Typically within tens or hundreds of milliseconds, we can stream changes to the outside world through Apache Kafka or similar, to which you can connect reactive consumers.
  • 02:20 It allows you to react to your data changes with very low latency.

So how does it compare to doing data capture with a data store? -

  • 02:35 It's based on getting changes out of the transaction log, rather than using triggers in the transaction path that do some updates.
  • 02:55 If the capture happens as part of the incoming write request, as with triggers, it can slow down the actual data change in the first place.
  • 03:00 If you're monitoring the transaction log, the capture happens asynchronously and the writes are not impacted at all.
  • 03:10 What sets Debezium apart is that it's open-source, and it works with a range of databases; MySQL, Postgres, and a few others.
  • 03:20 For a consumer, it wouldn't matter too much which database the event was coming from - we have a rather generic, abstract event representation.

So it's log-based rather than query-based, like a distributed log. -

  • 03:40 There's different ways that people could look at this; triggers have a performance overhead, for example.
  • 03:45 There is also an overhead of managing the trigger infrastructure so that it keeps up with the database schemas.
  • 03:50 DBAs are not particularly keen when it comes to lots of triggers.
  • 04:00 If you have a query-based infrastructure that is polling contents of tables for changes, there are a few disadvantages.
  • 04:10 There's a conflict between how often you do that and what kind of load it creates.
  • 04:15 If you poll frequently in order to pick up the latest changes, you add additional load onto the system.
  • 04:25 No matter how frequently you poll, you might miss a change; between two polling attempts, one update could be overwritten by another, or a record could be inserted and deleted without ever being seen.
  • 04:45 This isn't a problem in a log-based approach, because you see all the changes in the correct order.
  • 04:55 There's an impact on the design of the tables; for polling, you would have to have a pre-defined schema which allows for polling of the most recent entries.
  • 05:00 With transaction-log-based capture, you won't miss anything, and there are no changes or requirements imposed on the data schema.

I've seen this used for audit purposes; what other use cases have you seen? -

  • 05:15 I have worked on this for two and a half years, and in that time I've seen a lot of use cases.
  • 05:30 What most people use it for right now is replication - propagation of data between two different database vendors, for example.
  • 05:50 You can replicate into your data warehouse, or to stream changes into a search index with full text search capability like Elasticsearch or Solr.

Where does the project come from, and is it truly open source or is there an enterprise version? -

  • 06:30 It's completely open source, under the Apache open-source licence.
  • 06:35 Red Hat, the company I work for, is sponsoring the project - but there is a big community around it.
  • 06:40 I just checked the contributor numbers just before recording, and right now we have 140 contributors to the project.
  • 06:50 We have short term bug fixers but also a number of people who have worked on this for a longer amount of time.
  • 07:00 I would like to grow the community further; we just added a new database connector for Cassandra, which is led by WePay.
  • 07:15 They were originally using the MySQL connector, built the Cassandra connector as an internal project first, and after working on it open-sourced it.

How long has the project been around, and who is using it? -

  • 07:45 It was founded by Randall Hauch in 2016, and I joined in 2017.
  • 08:00 We are seeing huge deployments in production; we're seeing discussions on the mailing list of people using it to stream changes out of a couple of hundred databases.
  • 08:20 It's great to get feedback from people using those sized environments; I don't have the ability to test that many databases myself.
  • 08:35 We are seeing huge deployments - Convoy, a logistics startup, BlaBlaCar, Trivago - we are compiling a list of public user references on the website.

What's the release frequency look like? -

  • 09:05 It follows the open-source way; it happens when it happens.
  • 09:10 We try to stick to a 3 week cadence, but it might not always be a final release.
  • 09:20 At the moment, we're in the beta phase, so it might be 1.0b2 and then three weeks later 1.0b3.
  • 09:30 We always stick to one version which we maintain; once we are at the 1.0 branch, we won't go back and update older branches.
  • 09:45 Everything is upstream; we have the Debezium organisation on GitHub.
  • 09:55 It's not an open-core model where certain features are held back for customer purchase; Red Hat doesn't have that.
  • 10:05 We have a separate repository for incubating connectors, but otherwise everything is in a single repository.
  • 10:10 The website is also up as a repository.

What connectors do you have available? -

  • 10:25 Elasticsearch has a sink connector, which means it's usable with Debezium but not a part of Debezium.
  • 10:35 The connectors we have for databases are MySQL, Postgres, SQL Server, MongoDB; those are the most mature and stable ones.
  • 10:50 We have incubating connectors for Cassandra and Oracle.
  • 11:05 We have another one for IBM DB2, which came out of IBM's acquisition of Red Hat.

How do you bootstrap a cache or Solr search? -

  • 11:50 Most people use Apache Kafka, with multiple nodes for scalability.
  • 12:05 You would have one topic for each table you have.
  • 12:15 There's another effort under the Kafka umbrella called Kafka Connect, which is a runtime and framework for building and operating connectors.
  • 12:30 You have source connectors which get data into Kafka and sink connectors which take data out of Kafka topics and write them into another system.
  • 12:45 You can deploy Debezium as a set of source connectors, and then deploy them into Kafka Connect, which is its own process.
  • 12:55 The connectors would then establish a connection to the database, and when they are notified of a change they could write it into a Kafka topic.
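
As a rough illustration of that deployment model, here is a minimal sketch of registering a Debezium MySQL source connector with the Kafka Connect REST API. The host names, credentials, logical server name, and Connect endpoint are placeholders, and the property names should be checked against the Debezium documentation for the version in use, since several of them have been renamed across releases.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterConnector {
    public static void main(String[] args) throws Exception {
        // Connector configuration as posted to the Kafka Connect REST API.
        // All connection details below are placeholders.
        String config =
            "{\n" +
            "  \"name\": \"inventory-connector\",\n" +
            "  \"config\": {\n" +
            "    \"connector.class\": \"io.debezium.connector.mysql.MySqlConnector\",\n" +
            "    \"database.hostname\": \"mysql\",\n" +
            "    \"database.port\": \"3306\",\n" +
            "    \"database.user\": \"debezium\",\n" +
            "    \"database.password\": \"dbz\",\n" +
            "    \"database.server.id\": \"184054\",\n" +
            "    \"database.server.name\": \"dbserver1\",\n" +
            "    \"database.history.kafka.bootstrap.servers\": \"kafka:9092\",\n" +
            "    \"database.history.kafka.topic\": \"schema-changes.inventory\"\n" +
            "  }\n" +
            "}";

        // Kafka Connect runs as its own process; connectors are created by POSTing
        // their configuration to its REST endpoint.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://connect:8083/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(config))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

Once registered, the connector takes an initial snapshot and then tails the transaction log, writing change events to one topic per table (named by default after the logical server name, database, and table).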

What's the difference between using Kafka Connect and Debezium? -

  • 13:25 There is a JDBC source connector for Kafka Connect which is polling-based, and you can use it with any database that has a JDBC driver.
  • 13:40 For a log-based approach, we need to have a dedicated connector for each type of database, as there's no way to do that generically.
  • 13:50 If your database doesn't support it, you can go with a JDBC polling connector with the obvious drawbacks mentioned previously.

Could you support other distributed event logs like Kinesis or EventHub? -

  • 14:00 This question comes up often; it's not supported out of the box, but you could get there yourself.
  • 14:10 You can use the embedded mode of Debezium, as a library which is embedded into a Java application.
  • 14:25 You would run a Java application with the Debezium libraries, and have a callback method which will be invoked when a new change event comes in.
  • 14:35 When you get the event, you can handle it how you want; you could send it to Kinesis, to Google PubSub etc.
  • 14:50 What we want to do, going forward, is to have some kind of standalone mode of Debezium, where you don't have to glue things together, and just configure it.
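
A minimal sketch of that embedded mode is shown below. It assumes the io.debezium.engine.DebeziumEngine API of more recent releases (at the time of this recording the equivalent class was Debezium's EmbeddedEngine), and all connection settings are placeholders; the callback is where you would hand events off to Kinesis, Google Pub/Sub, Event Hubs, or anything else.

```java
import io.debezium.engine.ChangeEvent;
import io.debezium.engine.DebeziumEngine;
import io.debezium.engine.format.Json;

import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class EmbeddedCdc {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("name", "embedded-engine");
        props.setProperty("connector.class", "io.debezium.connector.postgresql.PostgresConnector");
        props.setProperty("database.hostname", "localhost");
        props.setProperty("database.port", "5432");
        props.setProperty("database.user", "postgres");
        props.setProperty("database.password", "postgres");
        props.setProperty("database.dbname", "inventory");
        props.setProperty("database.server.name", "dbserver1");
        // The engine tracks its position in the transaction log via this offset store.
        props.setProperty("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore");
        props.setProperty("offset.storage.file.filename", "/tmp/offsets.dat");

        // The callback receives each change event as JSON; instead of printing it,
        // you could forward the payload to Kinesis, Google Pub/Sub, Event Hubs, etc.
        DebeziumEngine<ChangeEvent<String, String>> engine = DebeziumEngine.create(Json.class)
                .using(props)
                .notifying(event -> System.out.println(event.value()))
                .build();

        // Run the engine on its own thread; on shutdown, call engine.close()
        // and stop the executor.
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.execute(engine);
    }
}
```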

What version of Java do you support? -

  • 15:30 We only require Java 8, which is the minimum version, but it also works with Java 11 and we are testing it with the later ones.

Can you join streams from multiple databases? -

  • 15:55 You have multiple ways of getting there.
  • 16:00 You have a one-to-one relationship between topics and tables, which is the default, but you can re-route messages to a different topic.
  • 16:10 In Kafka Connect, there is a concept of message transformations, which allow you to modify messages as they are sent into Kafka or taken out of it.
  • 16:20 One use case for these transformers is to route messages to a different topic.
  • 16:25 If you have change events from multiple tables, you could configure a message transformer to send all messages into the same topic.
  • 16:40 You would then be able to process the messages in order, but you would have to be careful about keying them.
  • 16:45 Another option is that you could use a stream processing engine like Kafka Streams.
  • 16:50 This would allow you to do the processing after-the-fact.
  • 16:55 You would need to have an app which consumes the events and then joins them afterwards in a time-windowed fashion.
  • 17:05 We have some plans to improve the usability; we would like to expose a topic which contains information about the transactional boundaries in the application.
  • 17:25 That would allow a Kafka application to wait for an aggregated set of events for that transaction.
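
As a small sketch of the re-routing idea, the extra connector properties below use Kafka Connect's built-in RegexRouter single message transform to merge the per-table topics of two hypothetical tables into one shared topic; the regex and topic names are placeholders, and Debezium also ships its own routing transform that can additionally adjust record keys. As noted above, once several tables share a topic, you need to take care how the messages are keyed.

```java
import java.util.Properties;

public class RoutingConfig {

    // Additional connector properties that merge the change event topics of the
    // (hypothetical) customers and addresses tables into one shared topic,
    // using Kafka Connect's built-in RegexRouter single message transform.
    public static Properties routingProps() {
        Properties props = new Properties();
        props.setProperty("transforms", "route");
        props.setProperty("transforms.route.type",
                "org.apache.kafka.connect.transforms.RegexRouter");
        // Match the default per-table topics (serverName.database.table)...
        props.setProperty("transforms.route.regex",
                "dbserver1\\.inventory\\.(customers|addresses)");
        // ...and publish them all to a single topic instead.
        props.setProperty("transforms.route.replacement",
                "dbserver1.inventory.customer-events");
        return props;
    }
}
```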

Does Debezium address bounded contexts? -

  • 17:55 It's something you need to keep in mind if you are following domain-driven design practices.
  • 18:05 The issue that you need to consider is that you are exposing your table structure to the outside world; is that a problem, and if so, how can you deal with it?
  • 18:15 For a certain category of use cases, it's fine to do that - if you are using a search index, it's your data structure that you make searchable.
  • 18:30 If you change the internal model, by adding a column to the table, you would like to have that in the search index.
  • 18:40 There's another category of use cases where it might be a problem, such as exchanging data between multiple microservices.
  • 18:55 You might not want to expose your customer data structure to your order microservice.
  • 19:00 Just using the message transformers might be enough to prepare the structure of your messages.
  • 19:10 Let's say you made a change to an internal model; you could use a message transformer to undo the change's effect, or to expose both the old and new names.
  • 19:20 Consumers would then have some grace period to be able to adjust to the change.
  • 19:35 With the outbox pattern, you don't capture the structure of your application tables at all; instead, whenever you change your business data, you also put a record describing that change into a separate outbox table (sketched after this list).
  • 20:00 You would insert message records with a payload column holding a JSON structure, which is the messaging contract.
  • 20:15 You could then use Debezium to capture the inserts in the outbox table, and consumers could react to those changes.
  • 20:25 This is supported in Debezium, where we have a routing component which can route from the outbox table to multiple topics.
  • 20:45 The most important thing is that it decouples the internal structure from the external message structure.
  • 20:55 You just have to evolve the outbox message format in a compatible way.
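
A minimal sketch of an outbox write might look like the following. It assumes a Postgres database and an outbox table with id, aggregatetype, aggregateid, type, and payload columns (a layout along the lines of the outbox examples in the Debezium documentation); the business schema and values are hypothetical. The key point is that the business insert and the outbox insert happen in one transaction, and Debezium only captures the outbox table.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.UUID;

public class PlaceOrder {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/orders", "app", "secret")) {
            conn.setAutoCommit(false);

            // 1. The normal business write.
            try (PreparedStatement insertOrder = conn.prepareStatement(
                    "INSERT INTO purchase_order (id, customer_id, total) VALUES (?, ?, ?)")) {
                insertOrder.setLong(1, 1001L);
                insertOrder.setLong(2, 42L);
                insertOrder.setBigDecimal(3, new java.math.BigDecimal("59.90"));
                insertOrder.executeUpdate();
            }

            // 2. The outbox write, in the same transaction. The payload column holds
            //    the message contract as JSON; Debezium captures inserts to this table
            //    and its outbox routing component publishes them to the right topic.
            try (PreparedStatement insertOutbox = conn.prepareStatement(
                    "INSERT INTO outbox (id, aggregatetype, aggregateid, type, payload) "
                    + "VALUES (?, ?, ?, ?, ?::jsonb)")) {
                insertOutbox.setObject(1, UUID.randomUUID());
                insertOutbox.setString(2, "order");
                insertOutbox.setString(3, "1001");
                insertOutbox.setString(4, "OrderCreated");
                insertOutbox.setString(5, "{\"orderId\":1001,\"customerId\":42,\"total\":59.90}");
                insertOutbox.executeUpdate();
            }

            // Both rows become visible (and capturable) atomically.
            conn.commit();
        }
    }
}
```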

Are there challenges like late arrivals that you have to address with other data sources? -

  • 21:30 There are things that you need to be aware of, and the guarantees that you get.
  • 21:40 If you're using Apache Kafka, you can rely on the ordering of messages in topic partitions.
  • 21:45 If you have a customer event topic, it could be partitioned across multiple nodes.
  • 22:00 Change events carry the old and new row content and are keyed by the primary key of the table, so all changes to a given record are routed to the same partition.
  • 22:25 If you have a consumer, and read from that topic for a specific customer, you see the changes in order as they applied to that customer.
  • 22:30 You have a guaranteed order of changes to a specific record, but you don't have a global ordering across all customers - but you rarely need that.
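
A plain Kafka consumer is enough to see this per-key ordering in action; in the sketch below the topic name and broker address are placeholders, and the record key is the value derived from the table's primary key.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class CustomerChangeConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka:9092");
        props.setProperty("group.id", "customer-projection");
        props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("dbserver1.inventory.customers"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // The key is derived from the table's primary key, so all changes
                    // to one customer land in the same partition and arrive in order.
                    System.out.printf("customer %s -> %s%n", record.key(), record.value());
                }
            }
        }
    }
}
```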

What does debugging look like? -

  • 23:10 One of the engineers working on Debezium was a QA engineer, so it's close to his heart.
  • 23:25 We expose fine-grained logging levels, so you can see in detail what the connectors are doing.
  • 23:35 There are metrics which you can expose via JMX to Grafana or Prometheus.
  • 23:45 You could get the throughput and lag, so you can keep up to date with the changes that are coming in.
  • 23:50 These will give you insight as to what is happening.
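
For instance, the streaming metrics can be read over JMX. The sketch below assumes the MBean naming pattern and attribute names documented for the MySQL connector's streaming metrics, which should be verified against the deployed Debezium version, and it has to run inside (or be pointed at) the JVM hosting the connector; in practice you would more likely scrape these with the Prometheus JMX exporter and chart them in Grafana.

```java
import javax.management.MBeanServer;
import javax.management.ObjectName;
import java.lang.management.ManagementFactory;

public class DebeziumLagCheck {
    public static void main(String[] args) throws Exception {
        MBeanServer mBeanServer = ManagementFactory.getPlatformMBeanServer();

        // Object name pattern as documented for the MySQL connector's streaming metrics;
        // "dbserver1" is the logical server name configured for the connector.
        ObjectName streamingMetrics = new ObjectName(
                "debezium.mysql:type=connector-metrics,context=streaming,server=dbserver1");

        // Attribute names taken from the Debezium monitoring docs; verify them
        // against the version you run before wiring this into dashboards.
        Object lag = mBeanServer.getAttribute(streamingMetrics, "MilliSecondsBehindSource");
        Object events = mBeanServer.getAttribute(streamingMetrics, "TotalNumberOfEventsSeen");

        System.out.println("Lag behind source (ms): " + lag);
        System.out.println("Events seen: " + events);
    }
}
```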

How do you deal with lag and different producer/consumer rates? -

  • 24:15 There's a notion of backpressure built in - if we can't keep up, we simply process the changes as fast as we can.
  • 24:25 The changes are stored in the database as long as we need them to be; at some point, we will have processed all of the changes, and we can notify the database we are checkpointing the log.
  • 24:45 We can then guarantee that we won't lose any changes, for example if the connectors are down.
  • 24:55 You need to be aware that the disk space may become an issue; if your connector is down for two weeks, and we cannot get changes, then the transaction logs will pile up.
  • 25:10 In Kafka, you can scale out your topics to manage the load - the community is usually happy with end-to-end latency.
  • 25:25 I know of one organisation which has a 2-second latency from a change being made in the production database to it being available in the data warehouse.

CQRS allows you to split up reads and writes; does Debezium help with this? -

  • 25:50 It's not the most common use case, but it's something people use it for.
  • 25:55 You could argue that once you sync the data into Elasticsearch, you now have some sort of CQRS, because you have a particular model that you can't query in the primary database.
  • 26:10 People are exploring this to denormalise their data into MongoDB, where you can have nested structures of data.
  • 26:25 If you have order lines and order headers, you will get two change streams out of them - but you might like to have a single document with a purchase order, lines etc.
  • 26:50 That's something I am eager to explore; I think Kafka Streams can play an important role there - it would allow you to implement the denormalisation.
  • 27:00 If you have this metadata topic that I was referring to before, it would allow a Kafka Streams process to build that data in a transactionally consistent way.
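
A rough sketch of that kind of denormalisation with Kafka Streams is shown below. It assumes hypothetical topics that have already been re-keyed by order id, treats the change event payloads as opaque JSON strings, and ignores updates and deletes of order lines, so it only illustrates the shape of the approach rather than a complete implementation.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

import java.util.Properties;

public class OrderAggregator {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-aggregator");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Order headers, keyed by order id (assumed re-keyed upstream).
        KTable<String, String> headers = builder.table("orders.header.by-order-id");

        // Order lines, also keyed by order id; collect them into a JSON array per order.
        KTable<String, String> lines = builder.stream("orders.lines.by-order-id")
                .groupByKey()
                .aggregate(
                        () -> "[]",
                        (orderId, line, agg) ->
                                agg.equals("[]") ? "[" + line + "]"
                                                 : agg.substring(0, agg.length() - 1) + "," + line + "]",
                        Materialized.with(Serdes.String(), Serdes.String()));

        // Join header and aggregated lines into one nested document per order,
        // ready to be written to MongoDB, Elasticsearch, etc. via a sink connector.
        headers.join(lines, (header, lineArray) ->
                        "{\"header\":" + header + ",\"lines\":" + lineArray + "}")
                .toStream()
                .to("orders.purchase-orders");

        new KafkaStreams(builder.build(), props).start();
    }
}
```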

What's on the roadmap for the next 12 months? -

  • 27:30 We have a roadmap of things we have in mind, but it's community driven, and ideas can come from the community and be prioritised on the roadmap.
  • 27:50 I would like to see the incubating connectors, like the IBM DB2 one, move forward.
  • 28:00 I would like to have a Debezium server mode, which would allow you to send to Google Pub/Sub for example.
  • 28:10 We would like to have CloudEvents support; there is a CloudEvents spec from the CNCF with a standardised envelope.
  • 28:30 You could then use this with Knative Eventing to build your serverless eventing applications.
  • 28:35 Building those aggregate structures in a more transactionally safe way.
  • 28:40 Maybe some more primitives that you could integrate with Kafka Streams, such as change data topics.

