Key Takeaways
- The sidecar pattern decouples cross-cutting concerns from the business logic components, thereby enhancing maintainability and reducing complexity.
- Sidecars are deployed alongside your microservices, but they can be built using a different technology stack than the one used for the microservices themselves.
- Sidecars can be reused across multiple services to provide out-of-the-box support for configuration, logging, tracing, and publish-subscribe messaging.
- The sidecar pattern can help reduce coupling between components and enhance the scalability, maintainability, and efficiency of microservices-based applications without adding complexity.
- While sidecars are well-suited to providing out-of-the-box support for cross-cutting capabilities, you may prefer to avoid them for highly latency-sensitive workloads, where the additional network hop and resource overhead can be unacceptable.
Today's applications require monitoring, logging, configuration, etc. Each of these concerns can be implemented as a component or a service. These cross-cutting concerns can be tightly integrated into the application. While this tight coupling ensures effective use of shared resources, an outage in any of these components can take your application down. Enter the sidecar design pattern.
The sidecar design pattern helps keep dynamic services (i.e., microservices) fueled with the resources and data they need while keeping them lightweight and free from the burden of carrying large amounts of internal logic. In this article, we'll examine the sidecar design pattern, its benefits, and learn how to implement it in a microservices-based application. We’ll also discuss common issues you typically face with the sidecar and how to mitigate them.
Prerequisites
You should have Visual Studio, ASP.NET Core, and Docker installed in your system to work with the code examples discussed in this article. Note that when you install Visual Studio on your computer, ASP.NET Core can also be installed at the same time using the Visual Studio Installer.
Download Visual Studio and Docker Desktop if you don't already have them. You will also need Elasticsearch: the server will run in a Docker container, while the .NET client library will be installed from NuGet.
What is Microservices Architecture?
A microservices architecture comprises a collection of services built using disparate languages and technologies. Managing the dependencies for these language-specific interfaces often adds significant complexity. Moreover, because of their distributed nature, microservices architectures introduce several operational challenges.
When building a distributed microservices-based application, addressing cross-cutting concerns such as logging, authentication, and authorization can be challenging. Here is exactly where the sidecar pattern can help.
What is the Sidecar Design Pattern?
The sidecar pattern helps isolate and encapsulate application components by deploying them into a separate process or container. "Sidecar" is the term for this design pattern because it resembles a sidecar connected to a motorbike. Essentially, the sidecar design pattern helps you build applications that comprise disparate components and technologies.
The sidecar design pattern is often implemented using containers, with secondary containers, called "sidecars", running alongside the main application.
These sidecar containers provide additional functionality for your application and manage tasks that do not need to be included in the primary application, such as logging, monitoring, configuration, and security.

Figure 1. Sidecar illustration
The sidecar is coupled to a parent application, has a lifetime analogous to its parent's, and is built and disposed of together with its parent. If you're using the sidecar container alongside your primary container that hosts the ASP.NET Core microservices, the primary container will handle the main business functionality of the application, while the sidecar container manages the auxiliary responsibilities, such as the following:
- Logging
- Monitoring
- Distributed tracing
- Security enforcement
- Service discovery
- Routing traffic
- Communication
Why do we need the Sidecar Design Pattern?
Here are the benefits of the sidecar design pattern at a quick glance:
- Reduces complexity by isolating cross-cutting concerns into distinct components that run independently of the primary application.
- Language-agnostic: a sidecar can be written in a different programming language than the service it supports.
- Reduces code redundancy, because common functionality is implemented once in the sidecar and shared by the microservices running alongside it.
- Reduces latency by communicating over localhost or shared networking (although a sidecar still introduces some latency compared to an in-process solution).
- Enhances extensibility by attaching a sidecar as a separate process or container on the same host, allowing applications to be extended without modifying them.
Challenges in Implementing Logging in Distributed Applications
In this section, we’ll examine the challenges faced in logging in distributed applications and also understand how the sidecar design pattern can help here.
The Problem: Overhead of Logging in Microservices-Based Applications
Logging is a cross-cutting concern often used in applications to capture and store event records during execution. Logging is more commonly used in distributed applications to monitor application behavior at runtime, capture performance-related metadata, and identify issues.
In a typical microservices-based application, however, logging can introduce significant overhead: massive volumes of log data accumulate across distributed services, and log collection, aggregation, and transmission to backend components consume additional resources (CPU, memory, and network).
As a result, latency increases while the application's throughput drops. Additionally, because microservices are ephemeral, aggregating logs is more challenging in a dynamic environment. You also need a correlation ID to tie together log entries from your distributed microservices, which incurs further processing.
The Solution: Decoupling the Logging Functionality Using the Sidecar Pattern
The sidecar design pattern can help mitigate the described challenges. It helps isolate concerns, formats data per your application's requirements, and minimizes complexity and code redundancy. Leverage the sidecar design pattern to standardize logging in your application, collect metrics, and monitor its health without altering your main application's codebase.
Implementing Distributed Logging in Microservices Architecture Using the Sidecar Pattern
In this section, we’ll examine how we can implement distributed logging in a microservices-based application and how sidecar containers can help collect and consolidate logs for each microservice. To build this application, we’ll take advantage of the following technologies and tools:
- Visual Studio (IDE)
- ASP.NET Core (a web application development framework)
- C# (Programming Language)
- Docker Desktop for Windows (a containerization tool)
- Elasticsearch (the server runs in Docker; the .NET client comes from NuGet)
This application represents a typical inventory management system, and comprises two microservices (i.e., the Transactions API and the sidecar API). While the former acts as a producer, sending log messages, the latter consumes them and then sends them to Elasticsearch.
Note that the Transactions API does not call the sidecar API directly. Instead, the Create action method in the Transactions controller sends the log messages to a concurrent in-memory queue; a background service then drains this queue and stores the messages in a text file residing in a shared folder on the local file system. The sidecar API reads these stored messages from the shared folder, processes them, and then sends them to Elasticsearch.
Here is the complete flow of this application at a glance:
- A client calls the HTTP Post endpoint represented by the Create action method on the TransactionsController.
- Instead of writing to disk or sending the message directly to Elasticsearch, the Create action method adds it to a custom concurrent queue.
- The controller returns the HTTP response immediately; log persistence is offloaded to the background service.
- A background service in the TransactionsAPI uses a thread-safe file logger to persist these messages to a text file in a shared folder.
- In the SidecarAPI, another background service reads these stored log messages from the local file system.
- Finally, the SidecarAPI background service sends the log messages to Elasticsearch.
In this application, we'll create the following types:
TransactionsAPI
- TransactionRequest record
- LogLevels enum
- TransactionType enum
- TransactionsController class
- ISidecarMessageQueue interface
- SidecarMessageQueue class
- ThreadSafeFileLogger class
- IThreadSafeFileLogger interface
- TransactionsBackgroundService class
SidecarAPI
- LogMessage record
- LogsController class
- IElasticSearchClientService interface
- ElasticSearchClientService class
- SidecarBackgroundService class
- SidecarSettings class
A typical inventory management system comprises the following entities: Product, Stock, Transactions, Supplier, Customer, and Orders. For simplicity and brevity, we'll use only a Transaction entity in this example. To implement this application, we'll follow these steps:
- Create a blank solution in Visual Studio
- Create the TransactionsAPI ASP.NET Core Web API project and add it to the solution
- Create the SidecarAPI ASP.NET Core Web API project and add it to the solution
- Create the Dockerfile for both the microservices
- Create the Docker Compose file to run the microservices
- Build and run the docker compose stack
Create an Empty Solution
Launch your Visual Studio IDE and select "Blank Solution" as the project template to create a new empty solution that contains no projects. You can name this empty solution "InventoryManagementSystem".
Create the TransactionAPI and the SidecarAPI Projects
Because we will be using two microservices (TransactionsAPI and SidecarAPI) in this example, you should have separate projects for each. Now, follow the steps outlined below to create two new projects in the solution that correspond to each of these microservices:
- Right-click on the solution in the Solution Explorer window and select "Add -> New project…".
- In the "Add a new project" window, select the option "ASP.NET Core Web API" as the project template.
- Click on "Next"
- In the "Configure your new project" window, specify the project name as TransactionsAPI and the location in your computer where you would like the new project to be saved.
- Click on "Next"
- In the "Additional Information" dialog window, specify the version of the framework to be used.
- Select the checkbox "Enable container support" and specify Linux as the Container OS.
- Lastly, click on "Create"
Repeat the same steps to create the SidecarAPI microservice as well. Figure 2 shows how the Solution Explorer should look:

Figure 2: The Solution Explorer showing both projects
Create the TransactionRequest Entity
Create a new record type named TransactionRequest in a file named TransactionRequest.cs in the TransactionsAPI project. This type will be used to store transaction data in memory. Replace the default generated code with the following piece of code:
using System.Text.Json.Serialization;

public record TransactionRequest
{
public required int TransactionId { get; init; }
[JsonConverter(typeof(JsonStringEnumConverter))]
public required TransactionType TransactionType { get; init; }
public required DateTime TransactionDate { get; init; }
public required int TransactionQuantity { get; init; }
}
Create the TransactionType Enum
To better organize our source code, you can use an enum to represent the transaction type (e.g., pending, dispatched) as shown in the code snippet given below:
public enum TransactionType
{
Pending,
Dispatched,
Shipped,
Delivered,
Cancelled
}
Create the Transaction Microservice
The TransactionsAPI corresponds to the microservice that processes business transactions and generates logs. For the sake of simplicity, the business processing logic hasn't been provided in this example.
Here is how the TransactionsAPI works:
- The client makes an HTTP POST request to the /api/transactions endpoint, passing the required transaction data.
- The action method corresponding to this endpoint adds the resulting log messages to an in-memory queue.
- The TransactionsBackgroundService runs at regular intervals, dequeues these messages, and stores them in a text file in a shared folder.
Create the Thread Safe File Logger
In the TransactionsAPI microservice, we’ll create a file logger to store messages in a text file. To do this, we’ll create two types: an interface named IThreadSafeFileLogger and a class named ThreadSafeFileLogger that implements its methods.
The following code listing shows the IThreadSafeFileLogger interface:
public interface IThreadSafeFileLogger
{
Task SendMessageAsync(string message);
Task SendMessageAsync(string level, string message);
}
The following code listing illustrates how the ThreadSafeFileLogger class takes advantage of a SemaphoreSlim to ensure that the file write operation is thread-safe, i.e., no two threads can enter the critical section in the SendMessageAsync methods concurrently.
public class ThreadSafeFileLogger: IThreadSafeFileLogger
{
private static readonly SemaphoreSlim _semaphore = new(1, 1);
private readonly IConfiguration _configuration;
private readonly string _filePath;
public ThreadSafeFileLogger(IConfiguration configuration)
{
_configuration = configuration;
_filePath = _configuration["ApiKeys:FilePath"] ??
throw new InvalidOperationException("Path to file missing ...");
}
public async Task SendMessageAsync(string message)
{
await _semaphore.WaitAsync();
try
{
await File.AppendAllTextAsync(_filePath,
$"{Guid.NewGuid().ToString()} | {message}{Environment.NewLine}");
}
finally
{
_semaphore.Release();
}
}
public async Task SendMessageAsync(string level, string message)
{
await _semaphore.WaitAsync();
try
{
await File.AppendAllTextAsync(_filePath,
$"{Guid.NewGuid().ToString()} | {level} | {message}{Environment.NewLine}");
}
finally
{
_semaphore.Release();
}
}
}
Create a Background Service in the TransactionsAPI Microservice
In the TransactionsAPI microservice, the TransactionsBackgroundService class extends the BackgroundService class and implements the ExecuteAsync method. Its polling loop executes at regular intervals, as shown in the following piece of code:
public class TransactionsBackgroundService : BackgroundService
{
private readonly TimeSpan _period = TimeSpan.FromSeconds(5);
private readonly ILogger<TransactionsBackgroundService> _logger;
private readonly IServiceProvider _serviceProvider;
public TransactionsBackgroundService(ILogger<TransactionsBackgroundService> logger, IServiceProvider serviceProvider)
{
_logger = logger;
_serviceProvider = serviceProvider;
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
using PeriodicTimer timer = new PeriodicTimer(_period);
using IServiceScope scope = _serviceProvider.CreateScope();
var _transactionsMessageQueue = scope.ServiceProvider.GetRequiredService<ISidecarMessageQueue>();
var threadSafeFileLogger = scope.ServiceProvider.GetRequiredService<IThreadSafeFileLogger>();
while (!stoppingToken.IsCancellationRequested &&
await timer.WaitForNextTickAsync(stoppingToken))
{
_logger.LogInformation("Executing PeriodicBackgroundTask");
while (_transactionsMessageQueue.Count > 0)
{
string message = await _transactionsMessageQueue.Dequeue();
await threadSafeFileLogger.SendMessageAsync(message);
}
}
}
}
Create the Sidecar Message Queue in the TransactionsAPI Microservice
We'll also create a custom message queue in the TransactionsAPI microservice project to store log messages generated by the Transactions controller. The following code listing shows the ISidecarMessageQueue interface, which declares the queue operations (Enqueue, Dequeue, Count, and ClearAsync).
public interface ISidecarMessageQueue
{
int Count { get; }
Task Enqueue(string level, string message);
Task<string> Dequeue();
Task ClearAsync();
}
The SidecarMessageQueue class implements this interface as shown in the following piece of code:
public sealed class SidecarMessageQueue: ISidecarMessageQueue
{
private readonly ConcurrentQueue<string> queue = new ConcurrentQueue<string>();
public async Task Enqueue(string level, string message)
{
string str = await BuildMessage(level, message);
queue.Enqueue(str);
}
public async Task<string> Dequeue()
{
if(queue.TryDequeue(out string? message))
{
return message;
}
return string.Empty;
}
private async Task<string> BuildMessage(string level, string message)
{
return $"{level} | {message}{Environment.NewLine}";
}
public int Count => queue.Count;
public async Task ClearAsync()
{
while (queue.TryDequeue(out _)) { }
}
}
Note that in the preceding code listing, although the BuildMessage, Dequeue, and ClearAsync methods do not perform any asynchronous work, async signatures have been used intentionally for future extensibility (for example, to swap in a distributed queue later without changing the interface).
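Purely as an illustration of the queue's behavior (assuming the SidecarMessageQueue class from the listing above is in scope), the following statements show the round trip of a message:

```csharp
// Illustrative only: exercises the in-memory queue outside ASP.NET Core's DI.
var queue = new SidecarMessageQueue();
await queue.Enqueue("Information", "Transaction 1001 created");
// Dequeue returns "Information | Transaction 1001 created" followed by a newline,
// because BuildMessage formats messages as "{level} | {message}{NewLine}".
string message = await queue.Dequeue();
// An empty queue yields string.Empty rather than blocking.
string empty = await queue.Dequeue();
```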
Next, add a new API controller named TransactionsController and write the following piece of code in there to replace the auto-generated code:
[ApiController]
[Route("api/[controller]")]
public class TransactionsController : ControllerBase
{
private readonly ISidecarMessageQueue _transactionsMessageQueue;
public TransactionsController(ISidecarMessageQueue transactionsMessageQueue)
{
_transactionsMessageQueue = transactionsMessageQueue;
}
[HttpPost]
public async Task<ActionResult> Create([FromBody]
TransactionRequest transactionRequest)
{
if (transactionRequest.TransactionId <= 0)
{
await _transactionsMessageQueue.Enqueue(LogLevel.Error.ToString(),
"Transaction Id must be > 0.");
return BadRequest();
}
if (transactionRequest.TransactionQuantity <= 0)
{
await _transactionsMessageQueue.Enqueue(LogLevel.Error.ToString(),
"Transaction Quantity must be > 0.");
return BadRequest();
}
bool isTransactionTypeValid = Enum.IsDefined(typeof(TransactionType),
transactionRequest.TransactionType);
if (!isTransactionTypeValid)
{
await _transactionsMessageQueue.Enqueue(LogLevel.Error.ToString(),
$"{transactionRequest.TransactionType} " +
$"is an invalid transaction type");
return BadRequest();
}
await _transactionsMessageQueue.Enqueue(LogLevel.Information.ToString(),
$"Created a new transaction record having transaction Id: " +
$"{transactionRequest.TransactionId}");
return Ok(new
{
success = true,
data = transactionRequest,
id = transactionRequest.TransactionId
});
}
}
As shown in the preceding code snippet, the TransactionsController class contains a single HTTP POST action method. This method accepts a TransactionRequest instance as a parameter from the request body and creates a new transaction. It also validates the incoming data and sends log messages to the message queue.
The complete source code of the TransactionsController class is available in the source code repository.
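As a quick sanity check, a valid request to this endpoint is a POST to /api/transactions with a JSON body shaped like the TransactionRequest record. The values below are illustrative; the enum value is sent as a string because JsonStringEnumConverter is applied to the TransactionType property:

```json
{
  "transactionId": 1001,
  "transactionType": "Pending",
  "transactionDate": "2024-01-15T10:30:00Z",
  "transactionQuantity": 25
}
```

A request that fails any of the validation checks (a non-positive id or quantity, or an undefined transaction type) receives a 400 response, and the corresponding error message is enqueued for the sidecar.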
Create the Sidecar Microservice
The SidecarAPI microservice reads the application logs stored in the shared folder and forwards them to Elasticsearch. The SidecarAPI also provides an HTTP GET endpoint to query the logs stored in Elasticsearch.
Here is how the SidecarAPI works:
- The SidecarBackgroundService polls the log file at regular intervals (i.e., every five seconds as configured in this example).
- The SidecarBackgroundService parses the log text one line at a time.
- The SidecarBackgroundService uses the ElasticSearchClientService to send these logs to Elasticsearch.
Instead of this implementation of the sidecar pattern, you could use the Distributed Application Runtime (Dapr) to handle cross‑cutting concerns. Dapr is an open-source, event-driven runtime that can be used to implement the sidecar pattern in distributed cloud-native applications using any language and runtime.
Create a new record type named LogMessage in a file named LogMessage.cs in the SidecarAPI project to store log metadata, i.e., an identifier, the timestamp, and the log message, as shown below:
public record LogMessage
{
public required string Id { get; init; }
public required DateTime Timestamp { get; init; }
public required string Message { get; init; }
}
Next, create a new API controller named LogsController and replace the auto-generated code using the following piece of code:
using Microsoft.AspNetCore.Mvc;
using SidecarApi.Services;
[ApiController]
[Route("api/[controller]")]
public class LogsController : ControllerBase
{
private readonly IElasticSearchClientService _elasticSearchClientService;
private readonly ILogger<LogsController> _logger;
public LogsController(IElasticSearchClientService elasticSearchClientService,
ILogger<LogsController> logger)
{
_elasticSearchClientService = elasticSearchClientService;
_logger = logger;
}
[HttpGet]
public async Task<ActionResult<List<LogMessage>>> Get()
{
try
{
var logs = await _elasticSearchClientService.GetAllLogsAsync();
return Ok(logs.ToList());
}
catch (Exception ex)
{
_logger.LogError(ex, "Failed to fetch logs from Elasticsearch");
return StatusCode(500);
}
}
}
In this example, we have used a custom file logger to log data to a text file. A better alternative would be to use Serilog, an open-source framework used to implement structured logging. By implementing structured logging in this application, the process of querying the data will be simplified. You can also leverage OpenTelemetry to implement observability by emitting traces and metrics and shipping them via a collector to Elasticsearch.
The LogsController contains only one HTTP GET action method. This action method can be used to retrieve all log records stored in Elasticsearch. The complete source code of the LogsController class is available in the source code repository.
Create the SidecarBackgroundService
In the SidecarAPI microservice, we'll consume the messages stored in the shared folder. The following code listing shows the SidecarBackgroundService class, which extends the BackgroundService class and implements the ExecuteAsync method; its polling loop executes at pre-defined intervals (every five seconds in this example):
public class SidecarBackgroundService : BackgroundService
{
private readonly TimeSpan _period = TimeSpan.FromSeconds(5);
private readonly IServiceProvider _serviceProvider;
private readonly ILogger<SidecarBackgroundService> _logger;
private readonly IOptions<SidecarSettings> _settings;
private readonly ConcurrentQueue<string> logs = new ConcurrentQueue<string>();
private readonly int _maxBatchSize;
private readonly int _maxCacheDurationInMinutes;
private readonly IMemoryCache _cache;
public SidecarBackgroundService(
ILogger<SidecarBackgroundService> logger, IServiceProvider serviceProvider,
IOptions<SidecarSettings> settings, IMemoryCache cache)
{
_logger = logger;
_serviceProvider = serviceProvider;
_settings = settings;
_maxBatchSize = settings.Value.MaxBatchSize;
_maxCacheDurationInMinutes =
settings.Value.MaxCacheDurationInMinutes;
_cache = cache;
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
using var timer = new PeriodicTimer(_period);
_logger.LogInformation($"LogShipper started. Monitoring {_settings.Value.LogDirectory}");
while (!stoppingToken.IsCancellationRequested &&
await timer.WaitForNextTickAsync(stoppingToken))
{
await SendMessagesToElasticAsync(stoppingToken);
}
}
private async Task SendMessagesToElasticAsync(CancellationToken cancellationToken)
{
var directory = _settings.Value.LogDirectory;
var logFilePattern = _settings.Value.LogFilePattern;
if (string.IsNullOrWhiteSpace(directory) || string.IsNullOrWhiteSpace(logFilePattern)) return;
if (!Directory.Exists(directory)) return;
var files = Directory.GetFiles(directory, logFilePattern);
foreach (var fileName in files)
{
await using var stream = new FileStream(fileName, FileMode.Open,
FileAccess.Read, FileShare.ReadWrite);
using var reader = new StreamReader(stream);
string? text;
using IServiceScope scope = _serviceProvider.CreateScope();
var _elasticSearchClient = scope.ServiceProvider.GetRequiredService<IElasticSearchClientService>();
while ((text = await reader.ReadLineAsync(cancellationToken)) != null)
{
if (string.IsNullOrWhiteSpace(text))
continue;
string[] message = text.Split('|');
string messageKey = message[0].Trim();
if (!_cache.TryGetValue(messageKey, out _))
{
logs.Enqueue(text);
if (logs.Count > _maxBatchSize)
{
while (logs.TryDequeue(out string? str))
{
string[] data = str.Split('|');
string key = data[0].Trim();
LogMessage logMessage = new LogMessage()
{
Id = data[0].Trim(),
Timestamp = DateTime.UtcNow,
Message = str.Substring(data[0].Length + 1).Trim()
};
await _elasticSearchClient.IndexAsync
(logMessage, cancellationToken);
var cacheEntryOptions = new MemoryCacheEntryOptions()
.SetSlidingExpiration(TimeSpan.FromMinutes(_maxCacheDurationInMinutes));
// Cache the id of the entry just indexed (key), not the outer loop's
// messageKey, so every shipped entry is suppressed on subsequent polls.
_cache.Set(key, true, cacheEntryOptions);
}
}
}
}
}
}
}
To enable support for in-memory caching in the SidecarAPI, add the following piece of code in the Program.cs file:
builder.Services.AddMemoryCache();
Create the Elasticsearch Client Service
In the SidecarAPI project, the IElasticSearchClientService interface defines a clear abstraction for all Elasticsearch-related operations, such as indexing and querying documents. The ElasticSearchClientService class implements this interface and encapsulates how the application interacts with Elasticsearch.
Create a new interface named IElasticSearchClientService in a file having the same name and replace the default generated code with the following piece of code:
public interface IElasticSearchClientService
{
Task IndexAsync(LogMessage logMessage, CancellationToken ct);
Task IndexBatchAsync(List<LogMessage> entries, CancellationToken ct);
Task<List<LogMessage>> GetAllLogsAsync();
Task DeleteAsyncRequest();
}
Next, create a new class named ElasticSearchClientService that implements the IElasticSearchClientService as shown in the code listing given below:
public class ElasticSearchClientService: IElasticSearchClientService
{
private readonly ILogger<ElasticSearchClientService> _logger;
private readonly ElasticsearchClient _elasticSearchClient;
private readonly ElasticsearchClientSettings _elasticSearchClientSettings;
public ElasticSearchClientService(
ILogger<ElasticSearchClientService> logger,
IOptions<SidecarSettings> settings)
{
_logger = logger;
_elasticSearchClientSettings = new ElasticsearchClientSettings(
new Uri(settings.Value.Elasticsearch.Url))
.Authentication(
new BasicAuthentication(
settings.Value.Elasticsearch.Username,
settings.Value.Elasticsearch.Password));
_elasticSearchClient = new ElasticsearchClient(_elasticSearchClientSettings);
}
public async Task DeleteAsyncRequest()
{
var today = DateTime.UtcNow.ToString("yyyy.MM.dd");
var indexName = $"application-logs-{today}";
var response = await _elasticSearchClient.Indices.DeleteAsync(indexName);
}
public async Task<List<LogMessage>> GetAllLogsAsync()
{
var today = DateTime.UtcNow.ToString("yyyy.MM.dd");
var indexName = $"application-logs-{today}";
var searchResponse = await _elasticSearchClient.SearchAsync<LogMessage>(
s => s.Indices(indexName).Query(q => q.MatchAll()));
return searchResponse.IsValidResponse ? searchResponse.Documents?.ToList() ??
new List<LogMessage>() : new List<LogMessage>();
}
public async Task IndexAsync(LogMessage logMessage, CancellationToken ct)
{
var today = DateTime.UtcNow.ToString("yyyy.MM.dd");
var indexName = $"application-logs-{today}";
var existsResponse = await _elasticSearchClient.Indices.ExistsAsync(indexName, ct);
if (!existsResponse.Exists)
{
var createResponse = await _elasticSearchClient.Indices.CreateAsync(indexName);
if (!createResponse.IsValidResponse)
{
_logger.LogError("Failed to create index: {Error}", createResponse.DebugInformation);
throw new Exception(createResponse.DebugInformation);
}
}
var indexResponse =
await _elasticSearchClient.IndexAsync(logMessage, idx => idx.Index(indexName));
if (!indexResponse.IsValidResponse)
{
throw new Exception(indexResponse.DebugInformation);
}
}
public async Task IndexBatchAsync(List<LogMessage> entries, CancellationToken ct)
{
if (entries.Count == 0) return;
var today = DateTime.UtcNow.ToString("yyyy.MM.dd");
var indexName = $"application-logs-{today}";
var bulkRequest = new BulkRequest(indexName)
{
Operations = new List<IBulkOperation>()
};
foreach (var entry in entries)
{
bulkRequest.Operations.Add(new BulkIndexOperation<LogMessage>(entry));
}
var response = await _elasticSearchClient.BulkAsync(bulkRequest, ct);
if (!response.IsValidResponse)
{
_logger.LogError("Failed to index logs: {Error}", response.DebugInformation);
throw new Exception($"Elasticsearch error: {response.DebugInformation}");
}
_logger.LogInformation("Indexed {Count} logs to {Index}", entries.Count, indexName);
}
}
Configure the TransactionsAPI
You should register the dependencies of the TransactionsAPI in the Program.cs file as shown in the code snippet given below:
using System.Text.Json.Serialization;
using TransactionsAPI;
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSingleton<IThreadSafeFileLogger, ThreadSafeFileLogger>();
builder.Services.AddSingleton<ISidecarMessageQueue, SidecarMessageQueue>();
builder.Services.AddHostedService<TransactionsBackgroundService>();
builder.Services.AddControllers().AddJsonOptions(options =>
{
options.JsonSerializerOptions.Converters.Add(new JsonStringEnumConverter());
});
var app = builder.Build();
app.MapControllers();
app.Run();
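Note that the ThreadSafeFileLogger reads its target path from the ApiKeys:FilePath configuration key. The TransactionsAPI's appsettings.json therefore needs an entry along the following lines. The exact path is an assumption, but the file name must match the LogFilePattern configured for the SidecarAPI, and the directory must be the shared folder mounted into both containers:

```json
{
  "ApiKeys": {
    "FilePath": "/app/logs/xapi.log"
  }
}
```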
The following code snippet shows how you can specify the sidecar configuration metadata in the appsettings.json file of the SidecarAPI project:
"Sidecar": {
"LogDirectory": "/app/logs",
"LogFilePattern": "xapi.log",
"MaxBatchSize": 5,
"MaxCacheDurationInMinutes": 5,
"Elasticsearch": {
"Url": "http://elasticsearch:9200",
"Username": "elastic",
"Password": "changeme"
}
}
Configure the SidecarAPI
The complete source code of the Program.cs file of the SidecarAPI is given below:
using SidecarApi;
using SidecarApi.Services;
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddMemoryCache();
builder.Services.Configure<SidecarSettings>(
builder.Configuration.GetSection("Sidecar"));
builder.Services.AddScoped
<IElasticSearchClientService, ElasticSearchClientService>();
builder.Services.AddHostedService<SidecarBackgroundService>();
builder.Services.AddControllers();
var app = builder.Build();
app.MapControllers();
app.Run();
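The SidecarSettings class bound to the "Sidecar" section above is not reproduced in the listings. Based on the configuration keys and the properties read by SidecarBackgroundService and ElasticSearchClientService, it might look like the following sketch (the nested type name is an assumption; the actual class in the source repository may differ):

```csharp
// Options class bound to the "Sidecar" section of appsettings.json.
public class SidecarSettings
{
    public string LogDirectory { get; set; } = string.Empty;
    public string LogFilePattern { get; set; } = string.Empty;
    public int MaxBatchSize { get; set; }
    public int MaxCacheDurationInMinutes { get; set; }
    public ElasticsearchSettings Elasticsearch { get; set; } = new();
}

// Nested settings for the Elasticsearch connection.
public class ElasticsearchSettings
{
    public string Url { get; set; } = string.Empty;
    public string Username { get; set; } = string.Empty;
    public string Password { get; set; } = string.Empty;
}
```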
Use Containerization
You should take advantage of containers when implementing the sidecar design pattern for better isolation, modularity, and reusability. Although the application and the sidecar containers are isolated, they share the same lifecycle, network, and often the same storage as well.

Figure 3: The Application and the sidecar containers in execution
Dockerize the services
You should dockerize both services by creating a Dockerfile in each of the projects we created earlier, i.e., the TransactionsAPI and the SidecarAPI. Since you opted for container support when creating the two projects, a Dockerfile was created in each of them by default.
Here's the source code of the Dockerfile of the SidecarAPI service (i.e., the sidecar).
FROM mcr.microsoft.com/dotnet/aspnet:10.0 AS base
RUN mkdir -p /app/logs && chmod 750 /app/logs
USER $APP_UID
WORKDIR /app
EXPOSE 8081
# This stage is used to build the service project
FROM mcr.microsoft.com/dotnet/sdk:10.0 AS build
ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["SidecarApi/SidecarApi.csproj", "SidecarApi/"]
RUN dotnet restore "./SidecarApi/SidecarApi.csproj"
COPY . .
WORKDIR "/src/SidecarApi"
RUN dotnet build "./SidecarApi.csproj" -c $BUILD_CONFIGURATION -o /app/build
# This stage is used to publish the service project to be copied to the final stage
FROM build AS publish
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "./SidecarApi.csproj" -c $BUILD_CONFIGURATION -o /app/publish /p:UseAppHost=false
# This stage is used in production or when running from VS in regular mode (Default when not using the Debug configuration)
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "SidecarApi.dll"]
The Dockerfile of the Transactions microservice should contain the following:
FROM mcr.microsoft.com/dotnet/aspnet:10.0 AS base
RUN mkdir -p /app/logs && chmod 750 /app/logs # Linux permission
USER $APP_UID
WORKDIR /app
EXPOSE 8080
# This stage is used to build the service project
FROM mcr.microsoft.com/dotnet/sdk:10.0 AS build
ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["TransactionsApi/TransactionsApi.csproj", "TransactionsApi/"]
RUN dotnet restore "./TransactionsApi/TransactionsApi.csproj"
COPY . .
WORKDIR "/src/TransactionsApi"
RUN dotnet build "./TransactionsApi.csproj" -c $BUILD_CONFIGURATION -o /app/build
# This stage is used to publish the service project to be copied to the final stage
FROM build AS publish
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "./TransactionsApi.csproj" -c $BUILD_CONFIGURATION -o /app/publish /p:UseAppHost=false
# This stage is used in production or when running from VS in regular mode (Default when not using the Debug configuration)
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "TransactionsApi.dll"]
Dockerfile and Docker Compose File
A Docker Compose file is a YAML-based configuration file enabling you to declaratively specify how to run multiple containers cohesively. You can use it to configure all your services, networks, and volumes in a single file, instead of running multiple Docker commands to run your Docker containers.
While a Dockerfile configures and builds an individual image, a Docker Compose file defines how multiple images run together cohesively as services in a multi-container application. Docker Compose streamlines working with containerized applications: it gives you granular yet simplified control over your containers, makes collaboration and development more efficient, and lets your applications run easily in whatever environment you need. Essentially, a Docker Compose file is a great place to configure all the interdependent services your application needs (databases, message queues, caches, web service APIs, etc.) in one place. You can then spin up all the containers with a single command using the Docker Compose command-line tool.
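For reference, the day-to-day Docker Compose workflow boils down to a handful of commands (shown here with the newer `docker compose` syntax; the older hyphenated `docker-compose` binary accepts the same arguments):

```shell
# Build the images (if needed) and start all services in the background
docker compose up --build -d

# Tail the logs of a single service
docker compose logs -f sidecar-api

# Stop and remove the containers and network; -v also removes named volumes
docker compose down -v
```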
Create the Docker Compose file
To deploy both containers at the same time, create a docker-compose.yml file with the following content:
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.15.1
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ports:
      - "9200:9200"
    volumes:
      - esdata:/usr/share/elasticsearch/data
    networks:
      - app-network
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9200/_cluster/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 40s
  transactions-api:
    build:
      context: .
      dockerfile: TransactionsApi/Dockerfile
    container_name: transactions-api
    ports:
      - "8080:8080"
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=http://+:8080
    volumes:
      - ./logs:/app/logs
    networks:
      - app-network
    depends_on:
      elasticsearch:
        condition: service_healthy
  sidecar-api:
    build:
      context: .
      dockerfile: SidecarApi/Dockerfile
    container_name: sidecar-api
    ports:
      - "8081:8081"
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=http://+:8081
    volumes:
      - ./logs:/app/logs
    networks:
      - app-network
    depends_on:
      elasticsearch:
        condition: service_healthy
networks:
  app-network:
    driver: bridge
volumes:
  esdata:
Adding Health Checks and Securing the Endpoints
The following code snippet shows how you can add health checks for Elasticsearch in the Program.cs file:
builder.Services
.AddHealthChecks()
.AddCheck<ElasticHealthCheck>("elasticsearch");
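The ElasticHealthCheck type referenced above is a custom IHealthCheck. A minimal sketch, assuming the Elasticsearch base address is supplied through an injected HttpClient, could look like this (the class body here is illustrative, not the article's exact implementation):

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Diagnostics.HealthChecks;

public class ElasticHealthCheck : IHealthCheck
{
    private readonly HttpClient _httpClient;

    public ElasticHealthCheck(HttpClient httpClient) => _httpClient = httpClient;

    public async Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context, CancellationToken cancellationToken = default)
    {
        try
        {
            // Ask the cluster for its health; any 2xx response counts as healthy here
            var response = await _httpClient.GetAsync("/_cluster/health", cancellationToken);
            return response.IsSuccessStatusCode
                ? HealthCheckResult.Healthy("Elasticsearch is reachable.")
                : HealthCheckResult.Unhealthy($"Elasticsearch returned {(int)response.StatusCode}.");
        }
        catch (Exception ex)
        {
            return HealthCheckResult.Unhealthy("Elasticsearch is unreachable.", ex);
        }
    }
}
```

For constructor injection of HttpClient to work, you would also register a typed client, for example `builder.Services.AddHttpClient<ElasticHealthCheck>(c => c.BaseAddress = new Uri("http://elasticsearch:9200"));` (the URL here is an assumption matching the Compose setup).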
You can also use the authentication and authorization features provided by ASP.NET Core to secure the endpoints, as shown below.
[Authorize(Policy = "ReadOnly")]
[HttpGet]
public async Task<ActionResult<List<LogMessage>>> Get()
{
//Code omitted for brevity
}
[Authorize(Policy = "CanDelete")]
[HttpDelete]
public async Task Delete()
{
//Code omitted for brevity
}
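For the [Authorize] attributes above to take effect, the corresponding policies must be registered at startup. A hedged sketch of that wiring in Program.cs follows; the role names are assumptions, not part of the article's sample:

```csharp
// Illustrative policy registration; map these to whatever roles or claims
// your identity provider actually issues.
builder.Services.AddAuthorization(options =>
{
    options.AddPolicy("ReadOnly", policy => policy.RequireRole("Reader", "Admin"));
    options.AddPolicy("CanDelete", policy => policy.RequireRole("Admin"));
});

// An authentication scheme (e.g., JWT bearer) must also be configured, and the
// middleware added before MapControllers():
// app.UseAuthentication();
// app.UseAuthorization();
```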
We've omitted these features from this example for simplicity and brevity.
Run the application
Finally, run the Docker Compose file using the following command:
docker-compose up --build
Figure 4 shows the Docker Compose command in execution.

Figure 4: The Docker Compose command in execution
When the TransactionsAPI microservice is running, it generates log messages and sends them to an in-memory collection. A background service then stores these messages in a text file residing in a shared folder. Figure 5 shows how you can invoke the Transactions microservice using Postman.
Incidentally, Postman is a popular API platform used for building, testing, and managing APIs.

Figure 5: Invoking the Create endpoint of the Transactions microservice using Postman
The sidecar microservice will then read the log messages from the text file residing in the shared folder and send them to Elasticsearch.
You can retrieve the logs stored in Elasticsearch by calling the HTTP Get endpoint of the logs controller pertaining to the sidecar microservice, as shown in Figure 6.

Figure 6: Displaying the saved messages in Elasticsearch
Using Kubernetes Pods
You can improve this implementation by using Kubernetes, which can serve as a "runtime fabric" that makes your distributed, cloud-native applications scalable, resilient, and operable in production. In the current implementation, the two microservices, TransactionsAPI and SidecarAPI, run in separate containers on the same network: they communicate over network addresses and share volumes via bind mounts.
While this works, the ideal choice here would be to implement the sidecar pattern using Kubernetes Pods, where the application and sidecar containers share localhost networking and volumes within a single Pod.
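To illustrate, a Kubernetes Pod manifest for this scenario might co-locate the two containers and share the log directory through an emptyDir volume. This is a sketch under stated assumptions: the image names are placeholders, and in practice you would typically wrap the Pod in a Deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: transactions
spec:
  containers:
    - name: transactions-api
      image: myregistry/transactions-api:latest   # assumed image name
      volumeMounts:
        - name: logs
          mountPath: /app/logs
    - name: sidecar-api
      image: myregistry/sidecar-api:latest        # assumed image name
      volumeMounts:
        - name: logs
          mountPath: /app/logs
  volumes:
    - name: logs
      emptyDir: {}   # shared by both containers for the Pod's lifetime
```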
Performance and Scalability Considerations
It should be noted that this implementation adds certain performance and latency overheads. For example, the file I/O in the TransactionsAPI service adds overhead because of disk reads and writes. You should also avoid recreating the Elasticsearch index on every write; instead, create the index once at startup. You can also batch messages to Elasticsearch (e.g., via an IndexBatchAsync-style bulk call) instead of writing them one at a time, thereby improving performance. Lastly, you can use OpenTelemetry in this application to aggressively capture metrics and then analyze them as needed.
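The batching idea can be sketched with standard LINQ: modern .NET (6 and later) ships Enumerable.Chunk, which splits a sequence into fixed-size arrays, so each network call to Elasticsearch carries many documents instead of one. The bulk-indexing call in the comment is a hypothetical client invocation, not the article's actual code:

```csharp
using System;
using System.Linq;

// Flush log messages in fixed-size batches rather than one document per request.
var messages = Enumerable.Range(1, 1050).Select(i => $"log-{i}");

foreach (var batch in messages.Chunk(500))
{
    // e.g. await elasticClient.BulkAsync(b => b.IndexMany(batch));  // hypothetical call
    Console.WriteLine($"Would index {batch.Length} documents in one bulk request");
}
```

With 1,050 messages and a batch size of 500, this issues three bulk requests (500, 500, and 50 documents) instead of 1,050 individual ones.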
Selecting the Right Approach to Implement the Sidecar Pattern
You can implement the sidecar pattern in several ways:
- Custom: A custom sidecar is a good choice when you want a simple approach with total control and flexibility over your sidecar implementation, without requiring any external dependencies.
- Dapr: The Distributed Application Runtime (Dapr) helps you implement the sidecar pattern with built-in features for service-to-service communication, state management, and event processing in a distributed application environment. This approach eliminates the need to write your own plumbing, so you can focus on delivering business value rather than writing boilerplate code.
- Serilog with the Elasticsearch sink: This is a good choice if your application is built using .NET and you want to directly manage the structure and format of your logs, as well as write them directly to Elasticsearch without an intermediary log aggregator agent.
- stdout and a Kubernetes DaemonSet: You can also achieve the same cross-cutting goals with stdout logging and a Kubernetes DaemonSet, which is strictly a per-node agent rather than a per-Pod sidecar, but can be a simple, resource-efficient technique in small to medium-sized cluster environments. A DaemonSet guarantees that a particular Pod runs on each node in a Kubernetes cluster.
The Sidecar Pattern is a Kubernetes Construct
The sidecar pattern finds its canonical form in Kubernetes, where the containers in a Pod share the same network namespace (localhost), volumes, and lifecycle. The Docker Compose example in this article is a useful approximation for local development, but a Kubernetes Pod is the natural home for a production sidecar.
Conclusion
Although the sidecar design pattern offers several benefits, like any software pattern it is only useful when correctly implemented. You can extend the application discussed in this article so that the sidecar container collects and consolidates logs and monitoring metrics for each microservice, improving the manageability, usability, and performance of your microservices, and helping you detect and troubleshoot runtime issues.