Quesma has launched OTelBench, an open-source benchmarking suite designed to measure the performance of OpenTelemetry pipelines and the effectiveness of AI agents in implementing and maintaining observability configuration.
The tool provides a unified framework for evaluating both the technical limits of observability infrastructure and the efficiency of Large Language Models in automated Site Reliability Engineering tasks. By combining these two domains, the suite aims to provide verifiable, evidence-based data for platform engineers navigating the complexities of modern cloud-native monitoring.
The initial scope of the project focuses on the performance and reliability of OpenTelemetry pipelines under high-load scenarios. As cloud environments generate increasing volumes of telemetry data, identifying performance bottlenecks within the collector becomes essential for maintaining system stability. OTelBench simulates various traffic patterns to measure key performance indicators such as throughput, latency, and resource consumption across the collector's processors and exporters. This allows teams to validate their hardware requirements and configuration settings before deploying changes to production.
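The pipelines under test are described in standard OpenTelemetry Collector configuration. As a rough sketch of the kind of setup whose throughput and resource usage such a benchmark would measure (the endpoint addresses and tuning values below are illustrative assumptions, not OTelBench defaults):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317        # default OTLP gRPC port

processors:
  memory_limiter:
    check_interval: 1s
    limit_mib: 512                    # assumed memory ceiling for the sketch
  batch:
    send_batch_size: 8192             # batching trades latency for throughput
    timeout: 200ms

exporters:
  prometheusremotewrite:
    endpoint: http://prometheus:9090/api/v1/write   # assumed backend address

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [prometheusremotewrite]
```

Varying the processor settings, such as batch size and memory limits, while replaying the same traffic pattern is what makes comparisons between configurations meaningful.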
In addition to infrastructure testing, the suite has expanded to evaluate how AI agents handle the trade-offs between data resolution and system overhead. While frontier models demonstrate high general coding proficiency, recent results from the benchmark reveal a significant gap in production-grade instrumentation tasks. Even state-of-the-art models often struggle with context propagation and distributed tracing, frequently achieving success rates below 30 per cent in real-world scenarios that cover complex aspects of the OpenTelemetry specification.
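Context propagation is a common failure point because it looks simple but has strict invariants: a downstream service must reuse the caller's trace ID while minting a fresh span ID. The following stdlib-only Python sketch illustrates the W3C `traceparent` rule that instrumentation (human- or AI-written) has to respect; it is a simplified illustration, not the OpenTelemetry SDK, and the function names are hypothetical:

```python
import re
import secrets

# W3C Trace Context header layout: version-trace_id-parent_id-flags
TRACEPARENT_RE = re.compile(r"^00-([0-9a-f]{32})-([0-9a-f]{16})-[0-9a-f]{2}$")

def make_traceparent(trace_id=None):
    """Build a traceparent header; starts a new trace if no trace_id is inherited."""
    trace_id = trace_id or secrets.token_hex(16)   # 16 bytes -> 32 hex chars
    span_id = secrets.token_hex(8)                 # every hop mints a fresh span id
    return f"00-{trace_id}-{span_id}-01", trace_id

def propagate(incoming_headers):
    """What correct instrumentation does on an outbound call:
    keep the trace_id, replace the span_id. Restarting the trace
    instead is the classic silent failure."""
    match = TRACEPARENT_RE.match(incoming_headers.get("traceparent", ""))
    if match:
        header, _ = make_traceparent(trace_id=match.group(1))
    else:
        header, _ = make_traceparent()             # missing context: new root trace
    return {"traceparent": header}

# Service A starts a trace; service B must continue it, not restart it.
a_header, a_trace = make_traceparent()
b_headers = propagate({"traceparent": a_header})
assert TRACEPARENT_RE.match(b_headers["traceparent"]).group(1) == a_trace
```

A model that emits a fresh trace ID per hop produces structurally valid but disconnected traces, which is exactly the class of malformed output the benchmark is designed to catch.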
Przemysław Delewski, founder of Quesma, highlighted the motivation behind the project in a recent announcement. "Recently we built OTelBench, a benchmark that allows comparing OpenTelemetry performance between different setups and configurations," said Delewski. The framework now serves a broader role by providing a reproducible environment to test whether automated SRE solutions can accurately implement monitoring without producing malformed traces or silent failures.
The project exists alongside more traditional methodologies, such as the internal benchmarks maintained by the OpenTelemetry project for its collector components. While engineers have historically utilised generic load testing tools such as k6 or Gatling to simulate OTLP traffic, these options generally lack the integrated evaluation of agentic automation provided by the Quesma suite. Because the benchmark scores results against objective criteria rather than any particular implementation, it remains vendor-neutral, allowing testing of various exporters for open-source backends such as Prometheus and Jaeger.
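Simulating OTLP traffic without a dedicated tool usually means hand-building payloads. As a minimal sketch of what such a generator does (the service name, endpoint, and record shape are illustrative; a real load test would add concurrency and pacing), one can construct an OTLP/HTTP JSON logs payload with the standard library alone:

```python
import json
import time
import urllib.request

def otlp_log_payload(message, n=1):
    """Build an OTLP/HTTP JSON logs payload carrying n identical records."""
    record = {
        "timeUnixNano": str(time.time_ns()),
        "severityText": "INFO",
        "body": {"stringValue": message},
    }
    return {
        "resourceLogs": [{
            "resource": {"attributes": [
                # "loadgen" is an assumed service name for this sketch
                {"key": "service.name", "value": {"stringValue": "loadgen"}}
            ]},
            "scopeLogs": [{"logRecords": [record] * n}],
        }]
    }

def send(payload, endpoint="http://localhost:4318/v1/logs"):
    """POST the payload to a collector's OTLP/HTTP logs endpoint
    (4318 is the default OTLP/HTTP port; requires a running collector)."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req).status
```

Tools such as k6 wrap this same pattern in a scheduling engine; what they do not do is judge whether an agent's generated configuration handled the resulting traffic correctly.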
By automating the evaluation of both human-configured pipelines and AI-driven instrumentation, the tool reduces the manual effort required to validate infrastructure changes. Users gain deeper insights into how internal buffering and queuing strategies manage sudden traffic spikes, regardless of whether the configuration was generated by a developer or an algorithm. This facilitates the creation of robust observability frameworks that scale alongside backend services without triggering unexpected performance regressions or data loss.
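In the collector, the buffering and queuing behaviour mentioned above is governed by exporter-side settings. A hedged sketch of the relevant knobs (the endpoint and all numeric values are assumed tuning examples, not recommendations):

```yaml
exporters:
  otlp:
    endpoint: backend:4317          # illustrative backend address
    sending_queue:
      enabled: true
      num_consumers: 10             # parallel senders draining the queue
      queue_size: 5000              # items buffered during a spike (assumed value)
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_elapsed_time: 300s        # give up retrying after this (assumed value)
```

An undersized queue drops data during bursts, while an oversized one masks backend slowness at the cost of memory; replaying spike traffic against candidate values is precisely the kind of regression check the benchmark automates.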