Building High-Fidelity Data Streams

Summary

Sid Anand discusses how they built a lossless streaming data system that guarantees sub-second (p95) event delivery at scale with better than three nines availability.

Bio

Sid Anand currently serves as the Chief Architect and Head of Engineering for Datazoom. Prior to joining Datazoom, Sid served as PayPal's Chief Data Engineer, where he helped build systems, platforms, teams, and processes. Prior to joining PayPal, Sid held senior technical positions at Netflix, LinkedIn, eBay, & Etsy to name a few.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Anand: My name is Sid Anand. I'd like to give you some historical context. Let's go back to the year 2017. In 2017, I was serving as the chief data engineer for PayPal. One of the items I was tasked with was building PayPal's Change Data Capture system. Change Data Capture, or CDC, provides a streaming alternative to repeatedly querying a database. It's an ideal replacement for polling-style query workloads, like the following: select * from transaction where the account ID equals x and the transaction date falls between two dates. A query like this powers PayPal's user activity feed for all of its users. If you're a PayPal user, you might be familiar with it already. Users expect to see a completed transaction show up in this feed soon after they complete a money transfer, a crypto purchase, or a merchant checkout. The query has to be run very frequently to meet users' timeliness expectations. As PayPal's traffic grew, queries like this one became more expensive to run and presented a burden on DB capacity. To give you a sense of the data scale at PayPal, in 2018, our databases were receiving 116 billion calls per day. The Kafka messaging system, meanwhile, was handling its 400 billion messages a day quite well. As you can see from our Hadoop analytics side, we had a pretty sizable footprint of 240 petabytes of storage and 200,000-plus jobs running a day.

Given that even at this scale our databases were struggling, and that Kafka seemed to be doing quite well, there was a strong interest in offloading DB traffic to something like CDC. However, client teams wanted guarantees around NFRs, or non-functional requirements, similar to what they had enjoyed with the databases. The database infrastructure, for example, had three nines of uptime, and queries took less than a second. Users never really had to worry about scalability or reliability. They wanted these same guarantees from the Change Data Capture system we were offering them. However, there were no off-the-shelf solutions available that met these NFRs, so we built our own. We call it the Core Data Highway, and it's still the Change Data Capture system in use at PayPal today. In building our own solution, we knew that we needed to make NFRs first-class citizens. In streaming systems, any gaps in NFRs are unforgiving. When an outage occurs, resolving the incident comes with the added time pressure of building and maintaining real-time systems. Lag is very visible, and customers are impacted. If you don't build your systems with the -ilities as first-class citizens, you pay a steep operational tax.

Core Data Highway

Here is a diagram of the system that we built. It's simplified, but it will get the point across. On the left of this diagram, imagine that the site is writing to these Oracle RAC clusters. These are called our site DBs. We have primaries, secondaries, and tertiaries. If we focus on the primaries, we have 70-plus primary RAC clusters, with 70,000-plus database tables distributed over those clusters. Overall, this footprint comes to 20 to 30 petabytes of data. We use Oracle GoldenGate to replicate data to a dedicated RAC cluster called the DB Pump. The DB Pump's job is to reliably write this data to high-availability storage. You'll notice that we have a slew of Trail File Adapter processes, written in Java by our team, attached to that storage. Their job is to read the Trail Files written there, which are essentially a proprietary Oracle format, decode them, convert them into Avro messages, register the Avro schemas with the Kafka schema registry, and send the messages reliably over Kafka. On the other side of those Kafka topics, we have the Storm cluster of message routers. These message routers do many jobs, such as masking data depending on its sensitivity. They also route data either to online consumers or to offline consumers. Online consumers, as shown above, are any consumers that need to serve traffic to the web app you see every day, such as the activity feed. The offline consumers, such as our Spark clusters, can write data to a variety of databases using our open source schema library. They can also write to Hadoop, where traditional data lake work can be done.
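
To make the last step of that adapter flow concrete, here is a minimal sketch of publishing a decoded change as an Avro record, assuming Confluent's Avro serializer and schema registry as one common way to handle the registration step; the broker addresses, registry URL, schema, key, and class names are illustrative, not the actual PayPal implementation.

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class TrailFileAdapterSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092,broker3:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all"); // wait for all in-sync brokers
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        // Confluent's Avro serializer registers the schema with the registry on first use.
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://schema-registry:8081"); // illustrative URL

        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"RowChange\",\"fields\":[" +
            "{\"name\":\"table\",\"type\":\"string\"}," +
            "{\"name\":\"payload\",\"type\":\"string\"}]}");

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            GenericRecord change = new GenericData.Record(schema);
            change.put("table", "transaction");
            change.put("payload", "...decoded trail-file row...");
            producer.send(new ProducerRecord<>("events", "txn-key", change));
            producer.flush(); // block until the brokers have acknowledged
        }
    }
}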

Moving to the Cloud

Let's move to the more recent past, circa 2020. In 2020, I had left PayPal and was looking for my next gig. I joined a company named Datazoom that specializes in video telemetry collection. What interested me about Datazoom is that they wanted the same system guarantees in streaming data that I had built at PayPal. There were a few key differences, though. For example, at PayPal, our team assembled and managed bare metal machines. We also deployed them to a data center that PayPal owned and operated. We had full control from the application down to the bare metal, so we could tune performance as we wished. Datazoom uses the public cloud, so it works with whatever abstractions the cloud provider offers; we don't have the same bare metal control. Additionally, PayPal had deep pockets, whereas Datazoom, being seed funded, was running on a shoestring budget. Last but not least, the customers of PayPal's CDH were internal, whereas at Datazoom they are external. The reason I bring this up is that with external customers, your NFR SLAs are written into contracts, so things are a bit more strict and more rigor is needed to maintain these NFRs.

Building High-Fidelity Data Streams

Now that we have some context, let's build a high-fidelity streaming system from the ground up. When taking on such a large endeavor, I often like to start simple. Let's set a goal to build a system that can deliver messages from source S to destination D. First, let's decouple S and D by putting messaging infrastructure between them. This doesn't seem like a very controversial ask, because it's a common pattern today. In terms of technology, we have a choice; for this example, I chose Kafka, with a single topic in Kafka called the events topic. Let's make a few more implementation decisions about the system: run it on a cloud platform such as Amazon, and operate it at low scale. This means we can get by with a single Kafka partition. However, another best practice is to run Kafka across three brokers split across three different availability zones, so let's adopt that practice because it's common. Since we don't have to worry about scale right now, let's just run S and D on single, separate EC2 instances. To make things a bit more interesting, let's provide our system as a service. We define our system boundary using a blue box as shown below. The question we ask is: is this system reliable?
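
As a concrete sketch of that topology decision, the events topic can be provisioned with a single partition and a replication factor of three, one replica per broker and availability zone. The broker addresses below are illustrative, not a prescribed setup.

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.List;
import java.util.Properties;

public class CreateEventsTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // One broker per availability zone, per the three-broker/three-AZ layout above.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
                  "broker-az1:9092,broker-az2:9092,broker-az3:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Low scale: a single partition, replicated across all three brokers.
            NewTopic events = new NewTopic("events", 1, (short) 3);
            admin.createTopics(List.of(events)).all().get(); // block until created
        }
    }
}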

Reliability

Let's revise our goal. Now we want to build a system that can deliver messages reliably from S to D. To make this more concrete, what we really want is zero message loss. Once S has acknowledged a message to a remote sender, D must deliver that message to a remote receiver. How do we build reliability into our system? First, let's generalize our system. Let's pretend we have a linear topology, as shown here, with processes A, B, and C separated by Kafka topics, and a message coming into the system. Let's treat the message like a chain. Like a chain, it's only as strong as its weakest link. This brings with it some insight: if each process, or link, is transactional in nature, the chain will be transactional. By transactionality, I mean at-least-once delivery, which is something offered by most messaging systems. How do we make each link transactional?

Let's first break this chain into its component processing links. We have A, which is an ingest node. Its job is to ingest data from the internet and reliably write it to Kafka. We have B as an internal node, reading from Kafka and writing to Kafka. We have C as an expel node, reading from Kafka and writing out to the internet. What does A need to do to be reliable? It needs to receive a request, do some processing on it, and then reliably send data to Kafka. To reliably send data to Kafka, it needs to call KafkaProducer.send with the topic and message, then call flush. Flush will synchronously flush the data from the buffers in A to the three brokers in Kafka. Because we have set the producer config acks=all, A will wait for an acknowledgment from each of those brokers before it acknowledges its sender, or caller. What does C need to do? C needs to read data as a batch off of a Kafka partition, do some processing, and reliably send that data out. Once it receives a 2xx response from the external destination it's sending to, it can manually acknowledge the message to Kafka. We have to manually acknowledge messages because we've set enable.auto.commit to false, and the acknowledgment is what moves the read checkpoint forward. If there's any problem processing a message, we will not acknowledge it. If you use Spring Boot, you can send a negative acknowledgment (nack), which forces a reread of the same data. B is a combination of A and C. B needs to be reliable like A, in the sense that it needs to be a reliable Kafka producer, and it also needs to be a reliable Kafka consumer.
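
Here is a minimal sketch of those two link types using the plain Kafka clients; the talk mentions Spring Boot, but raw client calls make the acknowledgment flow explicit. The topic name events comes from the example above, while the group id, broker address, and helper names are illustrative.

import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.Set;

public class ReliableLinks {

    // Node A: only acknowledge the remote sender after Kafka has acknowledged us.
    static void reliableIngest(KafkaProducer<String, String> producer, String message) {
        producer.send(new ProducerRecord<>("events", message));
        producer.flush(); // blocks until all three brokers (acks=all) have the data
        // ...only now return a 2xx to the remote sender...
    }

    static KafkaProducer<String, String> newProducer() {
        Properties p = new Properties();
        p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        p.put(ProducerConfig.ACKS_CONFIG, "all"); // wait for every in-sync replica
        p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        return new KafkaProducer<>(p);
    }

    // Node C: only move the read checkpoint forward once the destination returns a 2xx.
    static void reliableExpel(KafkaConsumer<String, String> consumer) {
        consumer.subscribe(List.of("events"));
        while (true) {
            ConsumerRecords<String, String> batch = consumer.poll(Duration.ofMillis(500));
            try {
                for (ConsumerRecord<String, String> record : batch) {
                    sendToExternalDestination(record.value()); // throws unless we get a 2xx back
                }
                consumer.commitSync(); // manual ack, since enable.auto.commit=false
            } catch (Exception e) {
                // Do not commit. Seek back to the last committed offsets so the same
                // records are re-read and retried (Spring Kafka's nack does this for you).
                for (TopicPartition tp : batch.partitions()) {
                    OffsetAndMetadata committed = consumer.committed(Set.of(tp)).get(tp);
                    consumer.seek(tp, committed == null ? 0 : committed.offset());
                }
            }
        }
    }

    static KafkaConsumer<String, String> newConsumer() {
        Properties p = new Properties();
        p.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        p.put(ConsumerConfig.GROUP_ID_CONFIG, "expel-nodes");
        p.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        p.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        p.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        return new KafkaConsumer<>(p);
    }

    static void sendToExternalDestination(String payload) { /* HTTP POST; throw if not 2xx */ }
}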

How reliable is our system now? What happens if a process crashes? If A crashes, we will have a complete outage at ingestion. If C crashes, although we won't lose any inbound data, we won't be delivering it to external customers. For all intents and purposes, customers will see this as an outage. One solution is to place each service in an autoscaling group of size T. Although the autoscaling group is a concept that comes from Amazon, equivalents are available in Kubernetes and from other cloud providers today. Essentially, we wrap each of the services A, B, and C in an autoscaling group of size T, which means they can handle T minus 1 concurrent failures without the system being compromised.

Observability (Lag and Loss Metrics)

For now, we appear to have a pretty reliable data stream. How do we measure its reliability? To answer this, we have to take a little segue into observability: a story about lag and loss metrics. Let's start with lag. What is it? Lag is simply a measure of message delay in a system. The longer a message takes to transit a system, the greater its lag. The greater the lag, the greater the impact to the business. Hence, our goal is to minimize lag in order to deliver insights as quickly as possible. How do we compute it? Let's start with some concepts. When message m1 is created, the time of its creation is called the event time, and it is typically stored in the message. Lag can be calculated for any message m1, at any node N in the system, using the equation shown below. Let's have a look at this in practice. Let's say we create message m1 at noon. At 12:01 p.m., m1 arrives at node A; at 12:04, m1 arrives at node B; and at 12:10, m1 arrives at node C. We can use the lag equation to measure the lag at A as 1 minute, at B as 4 minutes, and at C as 10 minutes. In reality, we don't measure lag in minutes, we measure it in milliseconds; more typical times are 1, 3, and 8 milliseconds. Another point to note here is that since I'm talking about the times when messages arrive, this is called arrival lag, or lag-in. Another observation is that lag is cumulative: the lag at node C includes the lag at the upstream nodes B and A. Similarly, the lag computed at B includes the lag at the upstream node A. Just as we have arrival lag, there is also departure lag, which measures the lag when a message leaves a node. Similar to arrival lag, we can compute the departure lag at each node. The most important metric in the system is the end-to-end lag, which is essentially the departure lag at the last node in the system (T6 in this example). This represents the total time a message spent in the system.
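
In equation form, for a message m arriving at or departing from a node N, with times as defined above:

lag_in(m, N)  = arrival_time(m, N)  - event_time(m)
lag_out(m, N) = departure_time(m, N) - event_time(m)

For example, lag_in(m1, A) = 12:01 - 12:00 = 1 minute.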

While it's interesting to know the lag for a particular message m1, it's of little use since we typically deal with millions of messages. Instead, we prefer to use population statistics like p95. Let's see how we can use it. We can compute the end-to-end lag at the 95th percentile in the system. We can also compute the p95 lag-in and lag-out at any node. Once we have the lag-in and lag-out at any node, we can compute the process duration at p95. This is the amount of time spent at any node in the chain. How can we visualize this? Let's say we have the topology shown here. This is from a real system that we own and run. We have four nodes in a linear topology, red, green, blue, and orange, separated by Kafka. We compute each of their process durations, and we put them in a pie chart like the one on the left. What we see from this pie chart is that each of them takes roughly the same amount of time. None of them appears to be a bottleneck in the system, and the system seems quite well tuned. If we take this pie chart and spread it out over time, we get the chart on the right, which shows us that these contribution proportions are essentially stable over time. We have a well-tuned system that is stable over time.
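
Spelling out the process duration: for a single message m at node N, the time spent at that node is the difference between its departure lag and its arrival lag, and the p95 is then taken over all the messages in a window:

process_duration(m, N) = lag_out(m, N) - lag_in(m, N)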

Now let's talk about loss. What is loss? Loss is simply a measure of messages lost while transiting the system. Messages can be lost for various reasons, most of which we can mitigate. The greater the loss, the lower the data quality. Hence, our goal is to minimize loss in order to deliver high quality insights. How do we compute it? Let's take the topology we saw earlier, but instead of sending one message through, let's send a list of messages through. To compute loss, we create what's called a loss table. In the loss table, we track messages as they transit the system, and as they pass through each hop, we tally a 1. Messages 1 and 2 made it through each of the four hops, so we tally a 1 in each of those cells. Message 3, on the other hand, only made it to the red hop and didn't make it to any of the others. If we compute this for all of the messages that transit the system, we can compute the end-to-end loss, which, as shown here, is 50%.

The challenge in a streaming data system is that messages never stop flowing, and some messages arrive late. How do we know when to count? The solution is to allocate messages to one-minute-wide time buckets using the message event time. The loss table we saw earlier would actually apply to a single time bucket; in this case, all messages whose event time fell in the 12:34th minute would be represented in this table. Let's say the current time is 12:40. Some data will keep arriving late for the buckets from 12:36 to 12:39. The table for 12:35 has stabilized, so we can compute its loss, and any loss table before 12:35 can be aged out so that we save on resources. To summarize, we now have a way to compute loss in a streaming system: we allocate messages to one-minute-wide time buckets, we wait a few minutes for messages to transit and for things to settle, and we compute loss. We raise an alarm if that loss is over a configured threshold, such as 1%. We now have a way to measure the reliability and latency of our system. But wait, have we tuned our system for performance yet?
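
Here is a minimal sketch of that bookkeeping, under the assumption that every hop reports a tally of (message id, hop name, event time) to a component like the one below; the hop names, the 1% threshold, and the settle delay mirror the talk, while the class and method names are illustrative.

import java.util.*;
import java.util.concurrent.ConcurrentHashMap;

// Tracks per-minute loss tables: for each event-time minute, which messages were
// seen at which hop. Loss is computed once a bucket has settled, then aged out.
public class LossTracker {
    // minute bucket (epoch minutes) -> messageId -> hops seen
    private final Map<Long, Map<String, Set<String>>> buckets = new ConcurrentHashMap<>();
    private final List<String> hops; // e.g. ["red", "green", "blue", "orange"]

    public LossTracker(List<String> hops) {
        this.hops = hops;
    }

    // Called whenever any hop reports that it processed a message.
    public void tally(String messageId, String hop, long eventTimeMillis) {
        long minute = eventTimeMillis / 60_000;
        buckets.computeIfAbsent(minute, k -> new ConcurrentHashMap<>())
               .computeIfAbsent(messageId, k -> ConcurrentHashMap.newKeySet())
               .add(hop);
    }

    // Fraction of messages in the bucket that did NOT make it through every hop.
    public double loss(long minute) {
        Map<String, Set<String>> table = buckets.getOrDefault(minute, Map.of());
        if (table.isEmpty()) return 0.0;
        long delivered = table.values().stream()
                              .filter(seen -> seen.containsAll(hops))
                              .count();
        return 1.0 - (double) delivered / table.size();
    }

    // Compute loss for settled buckets, alarm if over 1%, then age them out.
    public void settle(long nowMillis, long settleMinutes) {
        long cutoff = nowMillis / 60_000 - settleMinutes;
        for (Long minute : new ArrayList<>(buckets.keySet())) {
            if (minute <= cutoff) {
                if (loss(minute) > 0.01) {
                    System.err.println("LOSS ALARM for minute " + minute);
                }
                buckets.remove(minute); // age out to save resources
            }
        }
    }
}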

Performance

Let's revise our goal once more. Now we want to build a system that can deliver messages reliably from S to D, with low latency. To understand streaming system performance, let's understand the components that make up end-to-end lag. First, we have the ingest time. The ingest time measures the time from the last_byte_in of the request to the first_byte_out of the response. This time includes any overhead associated with reliably sending data to Kafka, as we showed earlier. The expel time is the time to process and egest a message at D. This time includes the time waiting for ACKs from external services, such as 2xx responses. The time between ingest and expel is simply called the transit time. Taken together, these three make up the end-to-end lag, which represents the total time messages spend in the system from ingest to expel. One thing we need to accept is that in order to build a reliable system, we have to give up some latency. These are called performance penalties. One performance penalty is the ingest penalty: in the name of reliability, S needs to call KafkaProducer flush on every inbound API request, and S also needs to wait for three ACKs from Kafka before sending its response. The approach we can take here is batch amortization. We support batch APIs so that we can consume multiple messages per request and publish them to Kafka together, amortizing the cost over multiple messages.
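
A minimal sketch of batch amortization at S, assuming a batch API hands us a list of events: we publish them all, flush once, and only then acknowledge the caller. The class and method names are illustrative.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.List;

public class BatchIngest {
    private final KafkaProducer<String, String> producer;

    public BatchIngest(KafkaProducer<String, String> producer) {
        this.producer = producer; // configured with acks=all, as above
    }

    // Handle one inbound batch API request: publish every event, then flush once.
    // The single flush amortizes the ingest penalty over the whole batch.
    public int handleBatch(List<String> events) {
        for (String event : events) {
            producer.send(new ProducerRecord<>("events", event)); // async, buffered
        }
        producer.flush(); // one synchronous wait for broker ACKs, not one per event
        return 200;       // only now acknowledge the remote sender
    }
}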

A similar penalty exists on the expel side. One observation we make is that Kafka is very fast, many orders of magnitude faster than the HTTP round trip times; the majority of the expel time is actually the HTTP round trip time. We take a batch amortization approach again. At D, we read a batch of messages off of a Kafka partition, then break it into smaller batches and send them in parallel. This gives us the best throughput for the fixed expel time, and it also typically helps with tail latencies. Last but not least, we have something called a retry penalty. In order to run a zero-loss pipeline, we need to retry messages at D that will succeed given enough attempts. We call these recoverable failures. In contrast, we should never retry a message that has zero chance of success. We call these non-recoverable failures. Any 4xx response code, except for 429 (a throttling response), is an example of a non-recoverable failure. Our approach to retry penalties is that we know we have to pay a penalty on retry, so we need to be smart about what we retry, meaning we don't retry any non-recoverable failures, and also about how we retry. For how we retry, we use an idea called tiered retries. Essentially, at each of the connectors in our system, we try to send a message a configurable number of times with a very short retry delay. If we exhaust the local retries, then node D transfers the message to a global retrier. The global retrier then retries the message over a longer span of time, with longer retry delays between attempts. For example, let's say D was encountering some issues sending a message to an external destination. After it exhausts its local retries, it would send that message to the retry_in Kafka topic, which would be read after some configured delay by the global retrier system, which would then publish the message back into the retry_out topic for D to retry again. This works quite well, especially when retries are a small percentage of total traffic, say 1% or 2%, certainly less than 5%. Since we track p95, the longer times to deliver these retried messages don't impact our p95 latencies. At this point, we have a system that works well at low scale. How does this system scale with increasing traffic?
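
A minimal sketch of tiered retries at D, treating any 4xx other than 429 as non-recoverable, as described above. The retry_in topic name comes from the talk; the attempt counts, delays, and HTTP helper are illustrative.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TieredRetrier {
    private final KafkaProducer<String, String> producer; // for handing off to the global retrier
    private final int localAttempts = 3;
    private final long localDelayMillis = 50;

    public TieredRetrier(KafkaProducer<String, String> producer) {
        this.producer = producer;
    }

    public void deliver(String message) throws InterruptedException {
        for (int attempt = 1; attempt <= localAttempts; attempt++) {
            int status = postToDestination(message); // hypothetical HTTP call returning the status code
            if (status >= 200 && status < 300) {
                return; // delivered
            }
            if (status >= 400 && status < 500 && status != 429) {
                return; // non-recoverable: never retry this message
            }
            Thread.sleep(localDelayMillis); // short local retry delay
        }
        // Local retries exhausted: hand off to the global retrier via the retry_in topic.
        // The global retrier re-publishes to retry_out after a longer delay, and D tries again.
        producer.send(new ProducerRecord<>("retry_in", message));
    }

    private int postToDestination(String message) {
        return 200; // stand-in for the real HTTP round trip
    }
}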

Scalability

Let's revise our goal once more. We want to build a system that can deliver messages reliably from S to D, with low latency, up to a scale limit. What do I mean by a scale limit? First, let's dispel a myth: there is no such thing as a system that can handle infinite scale. Every system is capacity limited; in the case of AWS, some of those limits are artificial. Capacity limits and performance bottlenecks can only be revealed under load. Hence, we need to periodically load test our system at increasingly higher load to find and remove performance bottlenecks. Each successful test raises our scale rating. By executing these tests ahead of organic load, we avoid any customer-impacting message delays. How do we set a target scale rating? Let's look at the example here. Let's say we want to handle a throughput of 1 million messages, or events, per second. Is it enough to only specify the target throughput? No. The reason is that most systems experience increasing latency with increasing traffic. A scale rating which violates our end-to-end lag SLA is not useful; a scale rating must be qualified by a latency condition to be useful. An updated scale rating might look like this: 1 million events per second with a sub-second p95 end-to-end lag. How do we select a scale rating? What should it be based on? At Datazoom, we target a scale rating that is a multiple m of our organic peak. We maintain traffic alarms that tell us when our organic traffic exceeds 1/m of our previous scale rating. This tells us that a new test needs to be scheduled. We spend that test analyzing and removing bottlenecks that would result in higher-than-expected latency. Once we resolve all the bottlenecks, we update our traffic alarms based on the new, higher scale rating. When we first started out, m was set to 10: anytime traffic went over one-tenth of our scale rating, we would schedule a new test for 10 times that value. As our scale increased over time, we changed m to smaller values like nine and eight, and so forth.

How do we build a system that can meet the demands of a scale test and meet our scale rating goals? Autoscaling is typically the way people handle this. Autoscaling has two goals: goal one is to automatically scale out to maintain low latency, and goal two is to scale in to minimize cost. For this talk, we're going to focus on goal one. The other thing to consider is what we can scale. We can scale the compute in our system, but we cannot autoscale Kafka. We can manually scale it, but we can't autoscale it. This talk will therefore focus on autoscaling compute. One point to note is that Amazon offers MSK Serverless, which aims to bridge this gap in Kafka today, but it is a bit expensive. At Datazoom, we use Kubernetes. This gives us all the benefits of containerization. Since we run on the cloud, Kubernetes pods need to scale on top of EC2, which itself needs to autoscale, so we have a two-level autoscaling problem. As you see below, we have EC2 instances, and on top of those we have multiple Kubernetes pods, which could be from the same microservice or different ones. In our first attempt, we tried to scale both of these independently: we used EC2 autoscaling based on memory, and we used the Horizontal Pod Autoscaler (HPA) provided by Kubernetes, with CPU as the scaling metric. This all worked fine for a while. We used CloudWatch metrics as the source of the signal for both of these autoscalers.

This worked fine until November 25, 2020. On that date, there was a major Kinesis outage in us-east-1. Datazoom doesn't directly use Kinesis, but CloudWatch does. If CloudWatch were to go down, we wouldn't expect anything bad to happen; we'd just expect no further autoscaling behavior. Whatever the footprint was at the time of the outage is where everything should land. That is not what happened with HPA. When HPA stopped seeing the signal, it scaled in our Kubernetes pods down to their Min settings. When this occurred, it caused a gray failure for us, which is essentially high lag and some message loss. To solve this problem, we adopted KEDA, the Kubernetes Event-driven Autoscaler. Now if CloudWatch goes down, HPA will not scale in its pods; it will act exactly like the EC2 autoscaler and keep everything stable. We also switched away from doing our own memory-based EC2 autoscaling to using the Kubernetes Cluster Autoscaler. We've been using it for about a year, and it has worked quite well.

Availability

At this point, we have a system that works well within a traffic or scale rating. What can we say about the system's availability? First of all, what is availability? What does availability mean? To me, availability measures the consistency with which a service meets the expectations of its users. For example, if I want to SSH into a machine, I want it to be running. However, if its performance is degraded due to low memory, uptime is not enough: any command I issue will take too long to complete. Let's look at how uptime applies to a streaming system. Let's say we define the system as being up when it has zero loss and it meets our end-to-end lag SLA. We may consider the system down in cases where we lose messages, or we're not accepting messages, or perhaps when there is no message loss but lag is incredibly high. Unfortunately, everything in the gray middle is also a problem for users. To simplify things, we define down as anything that isn't up. Essentially, if there is any lag beyond our target, it doesn't have to be well beyond but just beyond our target, the system is down.

Let's use this approach to define an availability SLA for data streams. Consider that there are 1440 minutes in a day. For a given minute, such as 12:04, we calculate the p95 end-to-end lag for all events whose event time falls in that minute. If the lag needs to be under 1 second to meet SLA, then we check: is the end-to-end lag less than 1 second? If it is, that minute is within SLA; if it's not, that minute is out of our lag SLA, and in the parlance of the nines model it represents a downtime minute. You may be familiar with the nines of availability chart below. The left column shows the different possible availabilities, represented as nines. The other columns show how many days, hours, or minutes that amount of downtime amounts to. Let's say we are building a three nines uptime system. The next question we need to ask is, do we want to compute this every day, week, month, quarter, or year? At Datazoom, we compute this daily, which means that we cannot afford more than 1.44 minutes of downtime per day; in other words, our lag can be outside of SLA for no more than 1.44 minutes a day. Can we enforce availability? No. Availability is intrinsic to a system. It is impacted by calamities, many of which are out of our control. The best we can do is measure it, monitor it, and periodically do things that can improve it.
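
Spelling out the arithmetic for a daily three nines budget: 1440 minutes/day × (1 − 0.999) = 1.44 allowable downtime minutes per day.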

Practical Challenges

Now we have a system that reliably delivers messages from source S to destination D, at low latency, below its scale rating. We also have a way to measure its availability. We have a system with the NFRs that we desire. What are some practical challenges that we face? For example, what factors influence latency in the real world? Aside from building a lossless system, the next biggest challenge is keeping latency low. Minimizing latency is a practice in noise reduction. The following are factors we've seen impact latency in a streaming system. Let's start with the first one: network lookups. Consider a modern pipeline as shown below. We have multiple phases: ingest, followed by normalize, enrich, route, transform, and transmit. A very common practice is to forbid any network lookups along the stream processing chain above. However, most of the services above need to look up some config data as part of processing. A common approach would be to pull data from a central cache, such as Redis. If lookups can be achieved in 1 to 2 milliseconds, this lookup penalty is acceptable. However, in practice, we noticed that there are 1-minute outages whenever failovers in Redis HA occur in the cloud. To shield ourselves against latency gaps like this, we adopted some caching practices. The first practice is to adopt Caffeine as our local cache across all services. Caffeine is a high-performance, near-optimal caching library written by Ben Manes, one of the developers who worked on the Guava caches at Google before spinning Caffeine out.

Here's an example of using Caffeine. We create the builder with two parameters: one is the staleness interval, and the other is the expiration interval. Let's look at that in practice. Let's say we initialize this with a 2-minute staleness interval and a 30-day expiry interval. At some point, we load a cache entry from Redis, and version 1 of that cache entry is loaded into Caffeine. As soon as that object is written, Caffeine starts a refreshAfterWrite timer of 2 minutes. If the application does a lookup on that cache entry, it will get it from Caffeine; it will get version v1. If it does the lookup again after the refreshAfterWrite interval of 2 minutes, it will still immediately get version 1 of the object, but behind the scenes it will trigger an asynchronous load of that object from Redis. As soon as that v2 version is written into the local cache, the refreshAfterWrite timer is extended by 2 minutes, and if the application does a lookup, it will get version 2 of the entry. The benefit of this approach is that the streaming application never sees any delays related to lookups from Redis. The other approach that we've taken is to eagerly load, or preload, all cache entries for all caches into Caffeine at application start. A pod is not available for any operation, such as serving as node A, B, or C in our stream, until its caches are loaded. This means that we've now built a system that does not depend on the latency or availability of Redis.
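
Here is a minimal sketch of that Caffeine setup; the 2-minute staleness and 30-day expiry values come from the talk, while the key/value types, class names, and the Redis lookup function are illustrative.

import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;

import java.util.concurrent.TimeUnit;

public class ConfigCache {
    private final LoadingCache<String, String> cache;

    public ConfigCache() {
        cache = Caffeine.newBuilder()
                // "Staleness interval": after 2 minutes, the next read still returns the
                // current value immediately but triggers an asynchronous reload from Redis.
                .refreshAfterWrite(2, TimeUnit.MINUTES)
                // "Expiration interval": entries are only dropped after 30 days.
                .expireAfterWrite(30, TimeUnit.DAYS)
                .build(this::loadFromRedis);
    }

    public String get(String key) {
        return cache.get(key); // never blocks on Redis once the entry has been loaded
    }

    // Eagerly load all entries at application start, before the pod serves traffic.
    public void preload(Iterable<String> allKeys) {
        cache.getAll(allKeys);
    }

    private String loadFromRedis(String key) {
        // Hypothetical Redis lookup; only called on the initial load or an async refresh.
        return "value-for-" + key;
    }
}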

The other major factor that impacted latency for us was any kind of maintenance or operational process; these created noise. Any time AWS patched our Kafka clusters, we noticed large latency spikes. Since AWS patches brokers in a rolling fashion, we typically see about 3 latency spikes over 30 minutes, and these violate our availability SLAs. Here's an example of something that we saw. On the upper left, we see the end-to-end lag for one of our connectors, TVInsight. We see that the p95 is around 250 milliseconds. However, whenever there's an MSK maintenance event (MSK being the managed Kafka service we use), each broker causes a spike that lasts over a minute. This is unacceptable. To deal with this, we adopted a blue-green operating model. Between weekly releases or scheduled maintenance, one color side receives traffic, and that's called the light side, while the other color side receives none, and that is called the dark side. In the example below, the blue side is dark; this is where patches are being applied. On the light side, the green side, customer traffic is flowing. We alternate this with every change or maintenance that we do. One may be concerned about cost efficiency, because we are basically doubling our costs. What we do is scale the compute on the dark side down to zero to save on those costs, but we do have to bear the cost of duplicate Kafka clusters. By switching traffic away from the side scheduled for AWS maintenance, we avoid these latency spikes. With this success, we applied the model to other operations as well. Now all software releases, canary testing, load testing, scheduled maintenance, technology migrations, and outage recovery benefit from and use the blue-green model. When we get a traffic burst that's much higher than the scale rating of either blue or green, we just open up both sides 50/50 and double our capacity that way.

Last but not least, we have another challenge related to the interplay between autoscaling and Kafka rebalancing. To understand this, we have to talk about how Kafka works. Whenever we autoscale, we're adding Kafka consumers to a given topic. Topics are split into physical partitions that are assigned to the Kafka consumers in the same consumer group. During autoscaling activity, we're changing the number of consumers in the consumer group, so Kafka needs to reassign partitions, or rebalance. With the default rebalancing algorithm, via the range assignor, this is essentially a stop-the-world action. It causes large pauses in consumption, which result in latency spikes. An alternative rebalancing approach that is gaining popularity is Kafka Incremental Cooperative Rebalancing (KICR), using the cooperative sticky assignor. To show this in practice: the left side of this chart shows the end-to-end lag across all our streaming connectors with the cooperative sticky assignor. As you see, they are very well-behaved. The right side uses the default rebalancing; all of the spikes you see there are related to the minor autoscaling that's constantly happening behind the scenes to deal with traffic. There is one challenge we ran into with the cooperative sticky assignor: we noticed significant duplicate consumption during this autoscaling behavior. In one load test, we noticed it was more than 100% at a given node. We have opened a bug for this, but we think there might be a fix, and we are currently evaluating some approaches and workarounds for this problem.
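
Switching assignors is a consumer configuration change. Here is a minimal sketch with the plain Kafka consumer, where the group id and broker address are illustrative.

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.util.Properties;

public class CooperativeConsumer {
    public static KafkaConsumer<String, String> create() {
        Properties p = new Properties();
        p.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        p.put(ConsumerConfig.GROUP_ID_CONFIG, "router-nodes");
        // The classic default is the (stop-the-world) range assignor; opt in to
        // incremental cooperative rebalancing so autoscaling causes much smaller pauses.
        p.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
              CooperativeStickyAssignor.class.getName());
        p.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        p.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        return new KafkaConsumer<>(p);
    }
}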

Conclusion

We now have a system with the NFRs that we desire. We have also walked through a real-time system and the challenges that go with it.

 


Recorded at: Oct 18, 2023
