Developing a Cloud-Native Application on Microsoft Azure Using Open Source Technologies

Key Takeaways

  • Cloud native is not just a tool but an approach to building an application from scratch, taking advantage of many modern patterns, frameworks, tools, and DevOps methodologies.
  • Distributed systems are more and more popular: they bring undoubted benefits but at the same time increase complexity, leading developers to face new challenges. Following best practices is essential.
  • C#, .NET, and other Microsoft products have relatively recently become open source, and they can be used together with other open-source software such as Linux.
  • Terraform, an open-source IaC tool, is widely used and well integrated with Microsoft Azure, which makes it a prime choice for deploying complex applications to the Microsoft cloud.
  • GitHub helps developers implement DevOps practices, automating application builds, tests, and releases using Continuous Integration and Deployment (CI/CD) techniques.

Cloud native is a development approach that improves building, maintainability, scalability, and deployment of applications. My intention with this article is to explain, in a pragmatic way, how to build, deploy, run, and monitor a simple cloud-native application on Microsoft Azure using open-source technologies.

This article demonstrates how to build a cloud-native application replicating real-case scenarios through a demo application—guiding the reader step by step.

Cloud-native applications

Without any doubt, “cloud native” is one of the latest trends in software development. But what exactly is a cloud-native application?

Cloud-native applications are applications built around various cloud technologies or services hosted in the (private or public) cloud. They are often distributed systems (commonly designed as microservices), and they use DevOps methodologies to automate application building and deployment, which can be done at any time on demand. Usually these applications provide APIs through standard protocols such as REST or gRPC, and they are interoperable through standard tools, such as Swagger (OpenAPI).

The demo app is quite simple, but at the same time it involves a number of factors and technologies combined to replicate the fundamentals of a real-case scenario. This demo doesn’t include anything about authentication and authorization because, in my opinion, it would bring too much complexity that is not required to explain the topic of this article.

Simple Second-Hand Store application

Simple Second-Hand Store (SSHS) is the name of the demo app described in this article.

SSHS system design overview.

Application features

SSHS is a simple cloud-native application that sells second-hand products. Users can create, read, update, and delete products. When a product is added to the platform, the owner of that product receives an email confirming the operation’s success.

Decomposing an application

When developing a microservice architecture, decomposing the business requirements into a set of services is the very first step. Some of the guiding principles are:

  • Services should follow the open-closed principle:
    • A software component should be closed to modification but open to extension
    • Moving this definition to a distributed architecture, it means that a change in one component (service) should not affect other components
  • Services should be loosely coupled, in order to give the maximum flexibility when a change or a new functionality comes in
  • Services should be autonomous: if a service fails, others shouldn’t; if a service scales in/out, others don’t need to

These simple principles help to build consistent and robust applications, gaining all the advantages that a distributed system can provide. Keep in mind that designing and developing distributed applications is not an easy task, and ignoring a few rules could lead to suffering the issues of both monoliths and microservices. The next sections explain, through examples, how to put these principles into practice.

SSHS decomposition

It is easy to identify two contexts in the Simple Second-Hand Store (SSHS) application: the first one is in charge of handling products—creation and persistence. The second context is all about notification and is actually stateless.

Coming down to the application design, there are two microservices:

  • ProductCatalog (Pc)—provides some APIs through the REST protocol that allow the client to create, read, update, and delete (CRUD) products on a database.
  • Notifications (Nts)—when a new product is added to the Pc repository, Nts service sends an email to the owner of that product.

Communication between microservices

At a higher level, microservices can be considered a group of subsystems composing a single application. And, as in traditional applications, components need to communicate with other components. In a monolithic application you can do it by adding some abstraction between layers, but of course that is not possible in a microservice architecture, since the code base is not the same. So how can microservices communicate? The easiest way is through the HTTP protocol: each service exposes some REST APIs for the others and they can easily communicate. Although this may sound good at first, it introduces dependencies into the system. For example, if service A needs to call service B to reply to the client, what happens if service B is down or just slow? Why should service B’s performance affect service A, spreading the outage to the entire application?

This is where asynchronous communication patterns come into play—to help keep components loosely coupled. Using asynchronous patterns, the caller doesn’t need to wait for a response from the receiver, but instead, it throws a “fire and forget” event, and then someone will catch this event to perform some action. I used the word someone because the caller has no idea who is going to read the event—maybe no one will catch it.

This pattern is generally called pub/sub: a service publishes events and others may subscribe to them. Events are generally published on another component called an event bus, which works like a FIFO (first in, first out) queue.

Of course, there are more sophisticated patterns beyond the FIFO queue, even though the queue is still used a lot in real environments. For example, an alternative scenario may have consumers subscribing to a topic rather than a queue, copying and consuming only the messages belonging to that topic and ignoring the rest. A topic is, generally speaking, a property of the message, such as the subject in AMQP (Advanced Message Queuing Protocol) terms.

Using asynchronous patterns, service B can react to some events occurring in service A—but service A doesn’t know anything about who the consumers are and what they are doing. And obviously its performance is not affected by other services. They are completely independent from each other.

NOTE: Unfortunately, sometimes using an asynchronous pattern is not possible, and even if synchronous communication is an antipattern, there is no alternative. This shall not become an excuse to build things quicker, but keep in mind that in some specific scenarios it may happen. Do not feel too guilty if you have no alternative.

SSHS communication

In the SSHS application, the microservices don’t need direct communication, since the Nts service only has to react to events that occur in the Pc service. This can clearly be done asynchronously, through a message on a queue.
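
To make the pattern concrete, here is a minimal sketch of how the Pc service could publish a “product added” event and how the Nts service could consume it. It assumes the Azure.Messaging.ServiceBus client library and a hypothetical product-added queue; treat it as an illustration of the pub/sub flow, not as the final implementation.

using System;
using System.Text.Json;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public record ProductAddedEvent(Guid ProductId, string Owner);

// Publisher side (Pc): fire and forget, no knowledge of the consumers.
public class ProductAddedPublisher
{
    private readonly ServiceBusSender _sender;

    public ProductAddedPublisher(ServiceBusClient client)
        => _sender = client.CreateSender("product-added"); // hypothetical queue name

    public Task PublishAsync(ProductAddedEvent @event)
        => _sender.SendMessageAsync(new ServiceBusMessage(JsonSerializer.Serialize(@event)));
}

// Consumer side (Nts): reacts to the event whenever it arrives.
public class ProductAddedConsumer
{
    public static async Task StartAsync(ServiceBusClient client)
    {
        ServiceBusProcessor processor = client.CreateProcessor("product-added");

        processor.ProcessMessageAsync += async args =>
        {
            var @event = JsonSerializer.Deserialize<ProductAddedEvent>(args.Message.Body.ToString());
            // Send the confirmation email to @event.Owner here...
            await args.CompleteMessageAsync(args.Message);
        };
        processor.ProcessErrorAsync += _ => Task.CompletedTask; // log errors in a real app

        await processor.StartProcessingAsync();
    }
}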

Data persistence in a microservice architecture

For the same reasons exposed in the “Communication between microservices” section, to keep services independent from each other, a separate storage for each service is required. It doesn’t matter if a service has one or more stores using a multitude of technologies (often both SQL and NoSQL); each service must have exclusive access to its repository, not only for performance reasons, but also for data integrity and normalization. The business domains of the services can be very different, and each service needs its own database schema, which can differ significantly from one microservice to another. Since the application is usually decomposed following business bounded contexts, it is quite normal to see schemas diverge over time, even if at the beginning they may look the same. Summing up, merging everything together leads back to the issues of a monolithic application, so why use a distributed system at all?

SSHS data persistence

The Notifications service doesn’t have any data to persist, while ProductCatalog offers some CRUD APIs to manage uploaded products. Products are persisted in a SQL database, since the schema is well defined and the flexibility given by NoSQL storage is not needed in this scenario.

Technologies used

Both services are ASP.NET applications running on .NET 6 that are built and deployed using Continuous Integration (CI) and Continuous Deployment (CD) techniques. GitHub hosts the repository, and the build and deployment pipelines are scripted on top of GitHub Actions. The cloud infrastructure is scripted using a declarative approach, to provide a full IaC (Infrastructure as Code) experience using Terraform. The Pc service stores data in a Postgresql database and communicates with the Nts service using a queue on an event bus. Sensitive data, such as connection strings, are stored in a secure place on Azure and are not checked into the code repository.

Building SSHS

Before starting: the following sections don’t explain each step in detail (such as creating solutions and projects) and are aimed at developers who are familiar with Visual Studio or similar tools. The link to the GitHub repository is at the end of this post.

To get started with the SSHS development, first create the repository and define the folder structure. The SSHS repository is organized as follows:

  • .github
    • workflows
      • build-deploy.yml
  • src
    • Notifications
      • [project files]
      • Notifications.csproj
    • ProductCatalog
      • [project files]
      • ProductCatalog.csproj
    • .editorconfig
    • Directory.Build.props
    • sshs.sln
  • terraform
    • main.tf
  • .gitignore
  • README.md

Just focus on a few things for now:

  • there is a folder dedicated to GitHub: it contains a yml file which defines the CI and CD pipelines
  • the terraform folder has the script to deploy resources to Azure (IaC)
  • in src there is the source code
  • Directory.Build.props defines properties which are inherited by all the csproj files
  • the .editorconfig file works like a linter; I wrote on my blog about these files and how to share the same settings within a team
  • .gitignore sets up the files ignored by Git

NOTE: Disable the nullable flag in the csproj files; it is usually enabled by default in .NET 6 project templates.

Product Catalog service

The ProductCatalog service needs to provide APIs to manage products; to better support this scenario, Swagger (Open API) is used to give some documentation to consumers and make the development experience easier.

Then there are dependencies: database and event bus. To get access to the database, it is going to use Entity Framework.

Finally, a secure storage service—Azure KeyVault—is required to safely store connection strings.

Create the project

The new ASP.NET Core 6 Visual Studio templates don’t provide a Startup class anymore; instead, everything is in the Program class. However, as discussed in the ProductCatalog deployment paragraph, there is a bug with this approach, so let’s create a Startup class:

namespace ProductCatalog
{
    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        public void ConfigureServices(IServiceCollection services)
        {

        }

        public void Configure(IApplicationBuilder app)
        {
            
        }
    }
}

Then replace the Program.cs content with the following code:

var builder = WebApplication.CreateBuilder(args);

var startup = new Startup(builder.Configuration);
startup.ConfigureServices(builder.Services);

WebApplication app = builder.Build();

startup.Configure(app/*, app.Environment*/);

app.Run();

CRUD APIs

The next step is about writing some simple CRUD APIs to manage the products. Here’s the controller definition:

namespace ProductCatalog.Controllers
{
    [AllowAnonymous]
    [ApiController]
    [Route("api/product/")]
    public class ProductsController : ControllerBase
    {
        private readonly IProductService _productService;

        public ProductsController(
            IProductService productService)
        {
            _productService = productService;
        }
        
        [HttpGet]
        [Route("product")]
        public async Task<IActionResult> GetAllProducts()
        {
            var dtos = await _productService.GetAllProductsAsync();
            return Ok(dtos);
        }

        [HttpGet]
        [Route("{id}")]
        public async Task<IActionResult> GetProduct(
            [FromRoute] Guid id)
        {
            var dto = await _productService.GetProductAsync(id);
            return Ok(dto);
        }

        [HttpPost]
        [Route("product")]
        public async Task<IActionResult> AddProduct(
            [FromBody] CreateProductRequest request)
        {
            Guid productId = await _productService.CreateProductAsync(request);

            Response.Headers.Add("Location", productId.ToString());
            return NoContent();
        }

        [HttpPut]
        [Route("{id}")]
        public async Task<IActionResult> UpdateProduct(
            [FromRoute] Guid id,
            [FromBody] UpdateProductRequest request)
        {
            await _productService.UpdateProductAsync(id, request);
            return NoContent();
        }

        [HttpDelete]
        [Route("{id}")]
        public async Task<IActionResult> DeleteProduct(
            [FromRoute] Guid id)
        {
            await _productService.DeleteProductAsync(id);
            return Ok();
        }
    }
}

The ProductService definition is:

namespace ProductCatalog.Services
{
    public interface IProductService
    {
        Task<IEnumerable<ProductResponse>> GetAllProductsAsync();

        Task<ProductDetailsResponse> GetProductAsync(Guid id);

        Task<Guid> CreateProductAsync(CreateProductRequest request);

        Task UpdateProductAsync(Guid id, UpdateProductRequest request);

        Task DeleteProductAsync(Guid id);
    }

    public class ProductService : IProductService
    {
        public Task<Guid> CreateProductAsync(CreateProductRequest request)
        {
            throw new NotImplementedException();
        }

        public Task DeleteProductAsync(Guid id)
        {
            throw new NotImplementedException();
        }

        public Task<IEnumerable<ProductResponse>> GetAllProductsAsync()
        {
            throw new NotImplementedException();
        }

        public Task<ProductDetailsResponse> GetProductAsync(Guid id)
        {
            throw new NotImplementedException();
        }

        public Task UpdateProductAsync(Guid id, UpdateProductRequest request)
        {
            throw new NotImplementedException();
        }
    }
}

And finally, define the (very simple) DTO classes:

public class ProductResponse
{
    [JsonPropertyName("id")]
    public Guid Id { get; set; }

    [JsonPropertyName("name")]
    public string Name { get; set; }
}

public class UpdateProductRequest
{
    [JsonPropertyName("name")]
    public string Name { get; set; }

    [JsonPropertyName("price")]
    public decimal Price { get; set; }

    [JsonPropertyName("owner")]
    public string Owner { get; set; }
}

public class ProductDetailsResponse
{
    [JsonPropertyName("id")]
    public Guid Id { get; set; }

    [JsonPropertyName("name")]
    public string Name { get; set; }

    [JsonPropertyName("price")]
    public decimal Price { get; set; }

    [JsonPropertyName("owner")]
    public string Owner { get; set; }
}

public class CreateProductRequest
{
    [JsonPropertyName("name")]
    public string Name { get; set; }

    [JsonPropertyName("price")]
    public decimal Price { get; set; }

    [JsonPropertyName("owner")]
    public string Owner { get; set; }
}

The Owner property should contain the email address to notify when a product is added to the system. I haven’t added any kind of validation since it is a huge topic not covered in this post.

Then, register the ProductService in the IoC container using services.AddScoped<IProductService, ProductService>(); in the Startup class.
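
For reference, the registration sits in ConfigureServices next to the other service registrations; a minimal sketch:

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();

    // Scoped lifetime: one ProductService instance per HTTP request.
    services.AddScoped<IProductService, ProductService>();
}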

Swagger (Open API)

Cloud-native applications often use Open API to make API testing and documentation easier. The official definition is:

The OpenAPI Specification (OAS) defines a standard, language-agnostic interface to RESTful APIs which allows both humans and computers to discover and understand the capabilities of the service without access to source code, documentation, or through network traffic inspection. When properly defined, a consumer can understand and interact with the remote service with a minimal amount of implementation logic.

Long story short: OpenAPI is a nice UI to quickly consume APIs and read their documentation, perfect for development and testing environments, NOT for production. However, since this is a demo app, I kept it enabled in all the environments. As an attention flag, I put some commented-out code that would exclude it from the Production environment.

To add Open API support, install the Swashbuckle.AspNetCore NuGet package in the Pc project and update the Startup class:

public void ConfigureServices(IServiceCollection services)
{
    //if (env.IsDevelopment())
    {
        services.AddControllers();
        services.AddEndpointsApiExplorer();
        services.AddSwaggerGen(options =>
        {
            var contact = new OpenApiContact
            {
                Name = Configuration["SwaggerApiInfo:Name"],
            };

            options.SwaggerDoc("v1", new OpenApiInfo
            {
                Title = $"{Configuration["SwaggerApiInfo:Title"]}",
                Version = "v1",
                Contact = contact
            });

            var xmlFile = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
            var xmlPath = Path.Combine(AppContext.BaseDirectory, xmlFile);
            options.IncludeXmlComments(xmlPath);
        });
    }
}

public void Configure(IApplicationBuilder app)
{
    //if (env.IsDevelopment()))
    {
      app.UseSwagger();
      app.UseSwaggerUI(options =>
      {
          options.SwaggerEndpoint("/swagger/v1/swagger.json", "API v1");
          options.RoutePrefix = string.Empty;
          options.DisplayRequestDuration();
      });

      app.UseRouting();

      app.UseEndpoints(endpoints =>
      {
          endpoints.MapControllerRoute(
              name: "default",
              pattern: "{controller=Home}/{action=Index}/{id?}");

          endpoints.MapControllers();
      });
    }
}

Enable the XML documentation file generation in the csproj. These documentation files are read by Swagger and shown in the UI:

<PropertyGroup>
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
</PropertyGroup>

NOTE: Add into the appsettings.json file a section named SwaggerApiInfo with two inner properties with a value of your choice: Name and Title.
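
For example, assuming arbitrary values, the section could look like this:

{
  "SwaggerApiInfo": {
    "Name": "Simple Second-Hand Store",
    "Title": "ProductCatalog API"
  }
}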

Add some documentation to APIs, just like in the following example:

/// <summary>
/// APIs to manage products
/// </summary>
[AllowAnonymous]
[ApiController]
[Route("api/" + "product/")]
[Consumes(MediaTypeNames.Application.Json)]
[Produces(MediaTypeNames.Application.Json)]
public class ProductsController : ControllerBase
{ }

/// <summary>
/// Get the given product
/// </summary>
/// <remarks>
/// Sample request:
///
///     GET /api/product/{id}
/// 
/// </remarks>
/// <param name="id">Product id</param>
/// <response code="200">Product details</response>
[HttpGet]
[Route("{id}")]
[ProducesResponseType(typeof(ProductDetailsResponse), StatusCodes.Status200OK)]
public async Task<IActionResult> GetProduct(
    [FromRoute] Guid id)
{ /* Do stuff */}

Now, run the application and navigate to localhost:<port>/index.html. Here, you can see how Swagger UI shows all the details specified in the C# code documentation: APIs description, schemas of accepted types, status codes, supported media type, a sample request, and so on. This is extremely useful when working in a team.

GZip compression

Even though this is just an example, it is a good practice to add GZip compression to API responses in order to improve performance. Open the Startup class and add the following lines:

public void ConfigureServices(IServiceCollection services)
{
    services.Configure<GzipCompressionProviderOptions>(options =>
                options.Level = System.IO.Compression.CompressionLevel.Optimal);

    services.AddResponseCompression(options =>
    {
        options.EnableForHttps = true;
        options.Providers.Add<GzipCompressionProvider>();
    });
}

public void Configure(IApplicationBuilder app)
{
  app.UseResponseCompression();
}

Error handling

To handle errors, custom exceptions and a custom exception-handling middleware are used:

public class BaseProductCatalogException : Exception
{ }

public class EntityNotFoundException : BaseProductCatalogException
{ }

namespace ProductCatalog.Models.DTOs
{
    public class ApiResponse
    {      
        public ApiResponse(string message)
        {
            Message = message;
        }
        
        [JsonPropertyName("message")]
        public string Message { get; }
    }
}

Update the Startup class:

public void Configure(IApplicationBuilder app)
{
    app.UseExceptionHandler((appBuilder) =>
    {
        appBuilder.Run(async context =>
        {
            var exceptionHandlerPathFeature = context.Features.Get<IExceptionHandlerPathFeature>();
            Exception exception = exceptionHandlerPathFeature?.Error;

            context.Response.StatusCode = exception switch
            {
                EntityNotFoundException => StatusCodes.Status404NotFound,
                _ => StatusCodes.Status500InternalServerError
            };

            ApiResponse apiResponse = exception switch
            {
                EntityNotFoundException => new ApiResponse("Product not found"),
                _ => new ApiResponse("An error occurred")
            };

            context.Response.ContentType = MediaTypeNames.Application.Json;
            await context.Response.WriteAsync(JsonSerializer.Serialize(apiResponse));
        });
    });
}

Entity Framework

The Pc application needs to persist data—the products—in storage. Since the Product entity has a specific schema, a SQL database suits this scenario. In particular, Postgresql is an open-source transactional database offered as a PaaS service on Azure.

Entity Framework is an ORM, a tool that makes the translation between SQL objects and OOP objects easier. Even if SSHS performs very simple queries, the goal is to simulate a real scenario where ORMs—and eventually micro-ORMs, such as Dapper—are heavily used.

Before starting, run a local Postgresql instance for the development environment. My advice is to use Docker—especially for Windows users. Install Docker if you don’t have it yet, and run docker run -p 127.0.0.1:5432:5432/tcp --name postgres -e POSTGRES_DB=product_catalog -e POSTGRES_USER=sqladmin -e POSTGRES_PASSWORD=Password1! -d postgres.

For more information, you can refer to the official documentation.

Once the local database is running properly, it is time to get started with Entity Framework for Postgresql. Let’s install these NuGet packages:

  • EFCore.NamingConventions, to use Postgresql conventions when generating names and properties
  • Microsoft.EntityFrameworkCore.Design, for design-time Entity Framework logic
  • Microsoft.EntityFrameworkCore.Proxies, to lazy load navigation properties
  • Microsoft.EntityFrameworkCore.Tools, to manage migrations and scaffold DbContexts
  • Npgsql.EntityFrameworkCore.PostgreSQL, for the Postgresql dialect

Define entities—the Product class:

namespace ProductCatalog.Models.Entities
{
    public class Product
    {
        /// <summary>
        /// Ctor reserved to EF
        /// </summary>
        [ExcludeFromCodeCoverage]
        protected Product()
        { }

        public Product(
            string name,
            decimal price,
            string owner)
        {
            Name = name;
            Price = price;
            Owner = owner;
        }

        public Guid Id { get; protected set; }

        public string Name { get; private set; }

        public decimal Price { get; private set; }

        public string Owner { get; private set; }

        internal void UpdateOwner(string owner)
        {
            Owner = owner;
        }

        internal void UpdatePrice(decimal price)
        {
            Price = price;
        }

        internal void UpdateName(string name)
        {
            Name = name;
        }
    }
}

Create a DbContext class—that will be the gateway to get access to the database—and define the mapping rules between the SQL objects and the CLR objects:

namespace ProductCatalog.Data
{
    public class ProductCatalogDbContext : DbContext
    {
        public ProductCatalogDbContext(
            DbContextOptions<ProductCatalogDbContext> options)
            : base(options)
        { }

        public DbSet<Product> Products { get; set; }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            base.OnModelCreating(modelBuilder);

            modelBuilder.ApplyConfigurationsFromAssembly(Assembly.GetExecutingAssembly());
        }

        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            base.OnConfiguring(optionsBuilder);

            optionsBuilder
               .UseLazyLoadingProxies()
               .UseNpgsql();
        }
    }
}

namespace ProductCatalog.Data.EntityConfigurations
{
    public class ProductEntityConfiguration : IEntityTypeConfiguration<Product>
    {
        public void Configure(EntityTypeBuilder<Product> builder)
        {
            builder.ToTable("product_catalog");

            builder.HasKey(dn => dn.Id);
            builder.Property(dn => dn.Id)
                .ValueGeneratedOnAdd();

            builder.Property(dn => dn.Name)
                .IsRequired();

            builder.Property(dn => dn.Price)
                .IsRequired();

            builder.Property(dn => dn.Owner)
                .IsRequired();
        }
    }
}

The DbSet<Product> property represents, as an in-memory collection, the data persisted on storage; the OnModelCreating override scans the running assembly looking for all the classes that implement the IEntityTypeConfiguration interface in order to apply custom mappings. The OnConfiguring override, instead, enables the Entity Framework proxies to lazy load relationships between tables. This isn’t really needed here, since there is a single table, but it is a nice tip to improve performance in a real scenario. The feature is provided by the Microsoft.EntityFrameworkCore.Proxies NuGet package.

Finally the ProductEntityConfiguration class defines some mapping rules:

  • builder.ToTable("product_catalog"); gives the name to the table; if not specified, the table name is generated from the entity name—Product in this case—using the Postgresql naming conventions, thanks to the EFCore.NamingConventions package
  • builder.HasKey(dn => dn.Id); sets the Id property as the primary key
  • .ValueGeneratedOnAdd(); tells Entity Framework to generate a new Guid automatically when an object is created in the database*
  • .IsRequired() adds the SQL NOT NULL constraint

*It is important to note that the Guid is generated at the creation of the SQL object. If you need to generate the id before the SQL object is created, you can use HiLo—more info here.

Finally, update the Startup class with the latest changes:

public void ConfigureServices(IServiceCollection services)
{
  services.AddDbContext<ProductCatalogDbContext>(opt =>
  {
      var connectionString = Configuration.GetConnectionString("ProductCatalogDbPgSqlConnection");
      opt.UseNpgsql(connectionString, npgsqlOptionsAction: sqlOptions =>
      {
          sqlOptions.EnableRetryOnFailure(
              maxRetryCount: 4,
              maxRetryDelay: TimeSpan.FromSeconds(Math.Pow(2, 3)),
              errorCodesToAdd: null);
      })
      .UseSnakeCaseNamingConvention(CultureInfo.InvariantCulture);
  });
}

The database connection string is sensitive information, so it shouldn’t be stored in the appsettings.json file. For debugging purposes, User Secrets can be used: it is a feature provided by .NET to store sensitive information that shouldn’t be checked into the code repository. If you are using Visual Studio, right-click on the project and select Manage user secrets; if you are using any other editor, open the terminal, navigate to the csproj file location, and type dotnet user-secrets init. The csproj file now contains a UserSecretsId node with a Guid that identifies the project secrets.

There are three different ways to set a secret now:

  • if you have been using Visual Studio, you should already have the secrets.json file open after right clicking
  • using the command dotnet user-secrets set "Key" "12345" or dotnet user-secrets set "Key" "12345" --project "src\WebApp1.csproj"
  • manually opening the file in one of these folders (you may not find the file until you actually add a secret to it):
    • Windows: %APPDATA%\Microsoft\UserSecrets\<user_secrets_id>\secrets.json
    • Unix: ~/.microsoft/usersecrets/<user_secrets_id>/secrets.json

The secrets.json file should look as follows:

{
  "ConnectionStrings": {
    "ProductCatalogDbPgSqlConnection": "Host=localhost;Port=5432;Username=sqladmin;Password=Password1!;Database=product_catalog;Include Error Detail=true"
  }
}

Let’s move on to the ProductService implementation:

public class ProductService : IProductService
{
    private readonly ProductCatalogDbContext _dbContext;
    private readonly ILogger<ProductService> _logger;

    public ProductService(
        ProductCatalogDbContext dbContext,
        ILogger<ProductService> logger)
    {
        _dbContext = dbContext;
        _logger = logger;
    }

    public async Task<Guid> CreateProductAsync(CreateProductRequest request)
    {
        var product = new Product(
            request.Name,
            request.Price,
            request.Owner);

        _dbContext.Products.Add(product);

        await _dbContext.SaveChangesAsync();

        return product.Id; // Generated at the SaveChangesAsync
    }

    public async Task DeleteProductAsync(Guid id)
    {
        Product product = await _dbContext.Products.FirstOrDefaultAsync(p => p.Id == id);
        if (product == null)
            throw new EntityNotFoundException();

        _dbContext.Products.Remove(product);

        await _dbContext.SaveChangesAsync();
    }

    public async Task<IEnumerable<ProductResponse>> GetAllProductsAsync()
    {
        List<Product> products = await _dbContext.Products.ToListAsync();

        var response = new List<ProductResponse>();

        foreach (Product product in products)
        {
            var productResponse = new ProductResponse
            {
                Id = product.Id,
                Name = product.Name,
            };

            response.Add(productResponse);
        }

        return response;
    }

    public async Task<ProductDetailsResponse> GetProductAsync(Guid id)
    {
        Product product = await _dbContext.Products.FirstOrDefaultAsync(p => p.Id == id);
        if (product == null)
            throw new EntityNotFoundException();

        var response = new ProductDetailsResponse
        {
            Id = product.Id,
            Name = product.Name,
            Owner = product.Owner,
            Price = product.Price,
        };

        return response;
    }

    public async Task UpdateProductAsync(Guid id, UpdateProductRequest request)
    {
        Product product = await _dbContext.Products.FirstOrDefaultAsync(p => p.Id == id);
        if (product == null)
            throw new EntityNotFoundException();

        product.UpdateOwner(request.Owner);
        product.UpdatePrice(request.Price);
        product.UpdateName(request.Name);

        _dbContext.Products.Update(product);

        await _dbContext.SaveChangesAsync();
    }
}

The next step is creating the database schema through migrations. The Migrations tool incrementally updates the database schema, keeping it in sync with the application data model while preserving existing data. Migrations are checked-in files, and the details of the migrations applied to the database are stored in a table called "__EFMigrationsHistory". This information is then used to execute only the not-yet-applied migrations against the database specified in the connection string.

To define the first migration, open the CLI in the csproj folder and run

dotnet-ef migrations add "InitialMigration"—the migration is stored in the Migrations folder. Then update the database with the migration just created: dotnet-ef database update.

NOTE: If this is the first time you are going to run migrations, install the CLI tool first using dotnet tool install --global dotnet-ef.
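
For reference, the generated migration is a C# class with Up and Down methods. For the Product entity it would look roughly like the sketch below (illustrative only: the actual file produced by the tool, including exact column types and default value generation, may differ):

using System;
using Microsoft.EntityFrameworkCore.Migrations;

public partial class InitialMigration : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // Creates the product_catalog table described by ProductEntityConfiguration.
        migrationBuilder.CreateTable(
            name: "product_catalog",
            columns: table => new
            {
                id = table.Column<Guid>(type: "uuid", nullable: false),
                name = table.Column<string>(type: "text", nullable: false),
                price = table.Column<decimal>(type: "numeric", nullable: false),
                owner = table.Column<string>(type: "text", nullable: false)
            },
            constraints: table => table.PrimaryKey("pk_product_catalog", x => x.id));
    }

    protected override void Down(MigrationBuilder migrationBuilder)
        => migrationBuilder.DropTable(name: "product_catalog");
}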

KeyVault

As said, user secrets only work in the Development environment, so Azure KeyVault support must be added. Install the Azure.Identity and Azure.Extensions.AspNetCore.Configuration.Secrets packages (the latter provides the AddAzureKeyVault configuration extension) and edit Program.cs:

var builder = WebApplication.CreateBuilder(args);
builder.Host.ConfigureAppConfiguration((hostingContext, configBuilder) =>
{
    if (hostingContext.HostingEnvironment.IsDevelopment())
        return;

    configBuilder.AddEnvironmentVariables();
    configBuilder.AddAzureKeyVault(
        new Uri("https://<keyvault>.vault.azure.net/"),
        new DefaultAzureCredential());
});

where <keyvault> is the KeyVault name that will be declared in the Terraform scripts later.

Health Checks

The ASP.NET Core SDK offers libraries for reporting application health through REST endpoints. Install the Microsoft.Extensions.Diagnostics.HealthChecks.EntityFrameworkCore NuGet package and configure the endpoints in the Startup class:

public void ConfigureServices(IServiceCollection services)
{
    services
        .AddHealthChecks()
        .AddDbContextCheck<ProductCatalogDbContext>("dbcontext", HealthStatus.Unhealthy);
}

public void Configure(IApplicationBuilder app)
{
    app
        .UseHealthChecks("/health/ping", new HealthCheckOptions { AllowCachingResponses = false })
        .UseHealthChecks("/health/dbcontext", new HealthCheckOptions { AllowCachingResponses = false });
}

The code above adds two endpoints: at /health/ping, the application responds with the health status of the system (default values are Healthy, Unhealthy, or Degraded, but they can be customized); the /health/dbcontext endpoint reports the current Entity Framework DbContext status, basically whether the app can communicate with the database. Note that the NuGet package mentioned above is the Entity Framework-specific one, which internally references Microsoft.Extensions.Diagnostics.HealthChecks; if you don’t use EF, you can reference the latter package only.

You can get more info in the official documentation.
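
If plain status text is not enough, the response body can be customized through HealthCheckOptions.ResponseWriter. A minimal sketch, to be placed in the Configure method shown above:

// Requires Microsoft.AspNetCore.Diagnostics.HealthChecks, System.Text.Json and System.Linq.
app.UseHealthChecks("/health/dbcontext", new HealthCheckOptions
{
    AllowCachingResponses = false,
    // Replace the default plain-text body with a small JSON report.
    ResponseWriter = async (context, report) =>
    {
        context.Response.ContentType = MediaTypeNames.Application.Json;

        string payload = JsonSerializer.Serialize(new
        {
            status = report.Status.ToString(),
            checks = report.Entries.Select(e => new
            {
                name = e.Key,
                status = e.Value.Status.ToString()
            })
        });

        await context.Response.WriteAsync(payload);
    }
});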

Docker

The last step to complete the Pc project is to add a Dockerfile. Since Pc and Nts are independent projects, it is important to have a separate Dockerfile per project. Create a Docker folder in the ProductCatalog project, then define a .dockerignore file and the Dockerfile:

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS base
WORKDIR /app
COPY . .

RUN dotnet restore \
    ProductCatalog.csproj

RUN dotnet publish \
    --configuration Release \
    --self-contained false \
    --runtime linux-x64 \
    --output /app/publish \
    ProductCatalog.csproj

FROM mcr.microsoft.com/dotnet/aspnet:6.0 as final
WORKDIR /app
COPY --from=base /app/publish .
EXPOSE 80
ENTRYPOINT ["dotnet", "ProductCatalog.dll"]

NOTE: Don’t forget to add a .dockerignore file as well. On the internet, there are plenty of examples based on specific technologies—.NET Core in this case.

NOTE: If your Docker build gets stuck on the dotnet restore command, you have hit a bug documented here. To fix it, add this node to the csproj:

  <ItemGroup>
    <Watch Include="..\**\*.env" Condition=" '$(IsDockerBuild)' != 'true' " />
  </ItemGroup>

and add /p:IsDockerBuild=true to both restore and publish commands in the Dockerfile as explained in this comment.

To try this Dockerfile locally, navigate with your CLI to the project folder and run:

docker build -t productcatalog -f Docker\Dockerfile ., where:

  • -t gives the name to the image
  • -f specifies the Dockerfile location, while the . (dot, which stands for the current folder) at the end of the command sets the build context. To be clear, the COPY command refers to the ProductCatalog folder

Then run the image using:

docker run --name productcatalogapp -p 8080:80 -e ConnectionStrings:ProductCatalogDbPgSqlConnection="Host=localhost;Port=5432;Username=sqladmin;Password=Password1!;Database=product_catalog;Include Error Detail=true" -it productcatalog (note that all the options must come before the image name, otherwise Docker passes them as arguments to the container):

  • --name gives the name to the container
  • -p binds host and container ports. By default, the ASP.NET application listens on port 80, which is also declared in the Dockerfile
  • -e sets an environment variable—the connection string in this case

NOTE: The docker run command starts your app, but it won’t work correctly unless you create a Docker network between the ProductCatalog and Postgresql containers. However, you can try to load the Swagger web page to see if the app has at least started. More info here.

Go to http://localhost:8080/index.html and if everything is working locally move forward to the next step: the infrastructure definition.

Choosing the right Azure resources

Now that the code is written and properly running in a local environment, it can be deployed in a cloud environment. As mentioned earlier, the public cloud we are going to use is Microsoft Azure.

Azure App Service is a PaaS service able to run Docker containers, which suits this scenario best. Azure Container Registry holds the Docker images, ready to be pulled by the App Service. Then, an Azure KeyVault instance can store application secrets such as connection strings.

Other important resources are the database server—Azure Database for Postgresql—and the Service Bus to allow asynchronous communication between the services.

Scripting infrastructure

To deploy the Azure resources, no operation needs to be executed manually. Everything is written—and versioned—as a Terraform script, using declarative configuration files. The language used is HashiCorp Configuration Language (HCL), a cloud-agnostic language that allows you to work with different cloud providers using the same tool. No Azure Portal, CLI, ARM, or Bicep files are used.

Getting started

Before working with Terraform, just a couple of notes:

  • install Terraform on the machine
  • install these Visual Studio Code extensions:

Terraform needs to store the state of the deployed resources in order to understand whether a resource has been added, changed, or deleted. This state is saved in a file stored in the cloud (a storage account for Azure, S3 for AWS, etc.). Since this is part of the Terraform configuration itself, it cannot be created by the script: it is the only operation that must be done using other tools. The next sections explain how to set up the environment using the az CLI to create the storage account and the IAM identity that actually runs the code.

NOTE: You cannot use the same names I used, because some of the resources require a name that is unique across the entire Azure cloud.

Create a resource group for the Terraform state

Every Azure resource must be in a resource group, and it is a good practice to have a different resource group for each application/environment. In this case, I created one resource group to hold the Terraform state and another one to host all the production resources.

To create the resource group, open the CLI (Cloud Shell, PowerShell, it is up to you) and type:

az group create --location <location> --name sshsstates

Create a storage account for the Terraform state

A storage account is a resource that holds Azure Storage objects such as blobs, file shares, queues, tables, and so on. In this case, it will hold a blob container with the Terraform state file.

Create one by running:

az storage account create \
  --name sshsstg01 \
  --access-tier Hot \
  --kind StorageV2 \
  --sku Standard_LRS \
  --location <location> \
  --resource-group sshsstates

where <location> is the location of the resource group created in the previous step.

Then, create the blob container in this storage account:

az storage container create \
  --name sshsstatedevops01 \
  --account-name sshsstg01 \
  --resource-group sshsstates

where sshsstg01 is the name of the storage account created in the previous step.

Create a service principal for the app

Terraform needs a service principal to create and destroy resources on Azure, and the newly created app registration needs enough rights to read from Azure and to create new resources. To keep things simple, give the Contributor role to the registered app at the subscription level by running the following command:

az ad sp create-for-rbac --name sshsapp --role contributor --scope "/subscriptions/<subscriptionId>" --sdk-auth

where:

  • subscriptionId is your Azure subscription id

NOTE: At the time of writing, there seems to be some kind of issue running this command through the Az PowerShell module. Running it in a Cloud Shell seems to work correctly.

Write down the JSON returned as output: it will be useful later, and it cannot be retrieved afterwards.
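
For reference, the output has roughly the following shape (a few endpoint URL fields are omitted and the values are placeholders). As shown later, the whole JSON ends up in the AZURE_CREDENTIALS GitHub secret, while the individual fields feed the ARM_* variables used by Terraform:

{
  "clientId": "<service-principal-client-id>",
  "clientSecret": "<service-principal-client-secret>",
  "subscriptionId": "<subscription-id>",
  "tenantId": "<tenant-id>"
}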

And that’s it, Azure is ready to host and persist the Terraform state.

Terraform script

Create a file named main.tf in the terraform folder. This file can be split into three different sections:

  • the first one to declare where the state is hosted, the required versions, and others prerequisites
  • the second section contains the variables definition
  • the last and most important section is about the resources that will be deployed by the GitHub Actions

terraform {
  backend "azurerm" {
    resource_group_name  = "sshsstates"
    storage_account_name = "sshsstg01"
    container_name       = "sshsstatedevops01"
    key                  = "sshsstatedevops01.tfstate"
  }

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }
}

provider "azurerm" {
  features {}
}

data "azurerm_client_config" "current" {}

As you may have already noticed, the first block declares where the Terraform state file is located; if not found, it is created. The other sections are required to tell Terraform that it is working with Azure.

Variables definition

The second section is about the variables definition. Variables are useful to reuse specific values in different places without repeating them.

The variable definition structure is quite simple:

variable "myVariable" {
  type        = string
  description = "This is a variable of type string"
}

In the next paragraphs, resources and their variables are defined. Remember to declare the variables above the resources that use them.

Resource Group and policy group

Of course, all the SSHS resources must be placed in the same resource group. First of all, create a new one:

resource "azurerm_resource_group" "rg" {
  name     = "sshs"
  location = "<location>"
}

The first part in quotes declares the Azure resource type being created. The second part, rg, is an arbitrary name given to this Terraform resource, which can be used later on to refer to this particular declaration. If it is not clear, have a look at the next step.

Azure KeyVault

Azure KeyVault is the Azure resource that must be deployed before anything else, because other resources will generate sensitive information that must be stored in the KeyVault, such as the connection string.

resource "azurerm_key_vault" "keyvault" {
  name                        = "sshskeyvault01"
  resource_group_name         = azurerm_resource_group.rg.name
  location                    = azurerm_resource_group.rg.location
  enabled_for_disk_encryption = false
  tenant_id                   = data.azurerm_client_config.current.tenant_id
  soft_delete_retention_days  = 7
  purge_protection_enabled    = false
  sku_name                    = "standard"
}

The resource group name and the location reuse the values declared for the resource group; as you can see, no duplication is needed. When the resource group is deployed, its name and location are computed and become available to the resources not yet created.

Azure Container Registry

Azure Container Registry is Azure’s offering for managing container images. Images are pushed here, then pulled and deployed to the Azure AppService.

resource "azurerm_container_registry" "acr" {
  name                = "sshsconrgs01"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  sku                 = "Basic"
  admin_enabled       = true
}

Azure AppService

Each AppService instance must run in an AppService Plan, which defines a set of compute resources for the web app to run, such as the OS, the location, and the pricing tier.

Define then a free AppService Plan:

resource "azurerm_service_plan" "appservice-plan" {
  name                = "sshsappsrvpln01"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  os_type             = "Linux"
  sku_name            = "F1"
}

And an AppService to run the microservice:

resource "azurerm_linux_web_app" "appsrv-prodcatalog" {
  name                = "sshsappsrvcat01"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  service_plan_id     = azurerm_service_plan.appservice-plan.id

  site_config {
  }

  app_settings = {
    "DOCKER_REGISTRY_SERVER_URL"          = var.docker_registry_url
    "DOCKER_REGISTRY_SERVER_USERNAME"     = var.docker_registry_username
    "DOCKER_REGISTRY_SERVER_PASSWORD"     = var.docker_registry_password
    "WEBSITES_ENABLE_APP_SERVICE_STORAGE" = "false"
  }

  identity {
    type = "SystemAssigned"
  }
}

Even here, the service_plan_id is set using the AppService Plan id that is generated at deployment time.

The AppService must log in to the ACR service and pull the Docker image from there. To set the ACR credentials, some of the default environment variables must be set, but since they are sensitive information, no value is hardcoded; they are passed safely through variables. So define the variables as well:

variable "docker_registry_password" {
  type        = string
  description = "Docker Registry password"
}
variable "docker_registry_url" {
  type        = string
  description = "Docker Registry URL"
}
variable "docker_registry_username" {
  type        = string
  description = "Docker Registry username"
}

The identity block, instead, enables the Azure managed identity mechanism for this resource. Here, the identity is used to authenticate against Azure KeyVault and get access to secrets.

Azure KeyVault policies

The AppService instance now has an identity, but the KeyVault doesn’t yet have a policy that authorizes that identity to access secrets. To create a new access policy, use the azurerm_key_vault_access_policy object:

resource "azurerm_key_vault_access_policy" "keyvault-sshsappsrvcat01-accesspolicy" {
  key_vault_id = azurerm_key_vault.keyvault.id
  tenant_id    = data.azurerm_client_config.current.tenant_id

  object_id = azurerm_linux_web_app.appsrv-prodcatalog.identity.0.principal_id

  secret_permissions = [
    "Get",
    "List",
  ]
}

Here’s the explanation:

  • key_vault_id is the KeyVault id
  • tenant_id is the Azure tenant that holds the SSHS resource group
  • object_id is the Azure resource id that must be granted access
  • secret_permissions are the permissions granted by this policy: since the Pc code only retrieves secrets, follow the least privilege principle and assign read-only access

However, if you try the script now, you will notice that it fails. Why? Because the service principal that runs the Terraform script has not been granted access to the KeyVault. So add this policy too:

resource "azurerm_key_vault_access_policy" "keyvault-sshsapp-accesspolicy" {
  key_vault_id = azurerm_key_vault.keyvault.id
  tenant_id    = data.azurerm_client_config.current.tenant_id

  object_id = data.azurerm_client_config.current.object_id

  secret_permissions = [
    "Get",
    "List",
    "Set",
    "Delete",
    "Recover",
    "Backup",
    "Restore",
    "Purge"
  ]
}

Azure Database for Postgresql

The last resource that needs to be declared is the Postgresql database.

First, add the connection string, which is passed in as a variable, to the KeyVault:

resource "azurerm_key_vault_secret" "secret_db_connectionstring" {
  name         = "ConnectionStrings--ProductCatalogDbPgSqlConnection"
  value        = var.db_connection_string
  key_vault_id = azurerm_key_vault.keyvault.id

  depends_on = [
    azurerm_key_vault_access_policy.keyvault-sshsapp-accesspolicy
  ]
}

The connection string is set through a variable that is passed from the GitHub Actions; how to get and pass it is explained later. Even if the azurerm_postgresql_server object exposes attributes from which the connection string could be generated, I prefer this approach since the SQL user could be changed at any time, independently from the release pipelines. The depends_on node tells Terraform that this action must be performed after the creation of the keyvault-sshsapp-accesspolicy object.

Then, the definition of the Postgresql server and its database:

resource "azurerm_postgresql_server" "pg-server" {
  name                         = "sshsdbsrvprodcatalog01"
  resource_group_name          = azurerm_resource_group.rg.name
  location                     = azurerm_resource_group.rg.location
  administrator_login          = var.db_admin_username
  administrator_login_password = var.db_admin_password
  sku_name                     = "B_Gen5_1"
  version                      = "11"
  ssl_enforcement_enabled      = true
  auto_grow_enabled            = false
  storage_mb                   = 5120
}
resource "azurerm_postgresql_database" "pg-prodcatalog" {
  name                = "sshsdbprodcatalog01"
  resource_group_name = azurerm_resource_group.rg.name
  charset             = "UTF8"
  collation           = "English_United States.1252"
  server_name         = azurerm_postgresql_server.pg-server.name
}

To allow other Azure resources to connect to the database, open the firewall to Azure services:

resource "azurerm_postgresql_firewall_rule" "azure-firwall-rule" {
  name                = "AzureFirewallRule"
  resource_group_name = azurerm_resource_group.rg.name
  server_name         = azurerm_postgresql_server.pg-server.name
  start_ip_address    = "0.0.0.0"
  end_ip_address      = "0.0.0.0"
}

The value 0.0.0.0 means that connections coming from other Azure resources are accepted.

NOTE: This setting opens the firewall to any Azure resource, even those that are not part of your subscription or tenant. This is not considered safe for a production environment.

Just like before, variables are used here in order to protect sensitive information:

variable "db_admin_username" {
  type        = string
  description = "SQL admin username"
}
variable "db_admin_password" {
  type        = string
  description = "SQL admin password"
}
variable "db_connection_string" {
  type        = string
  description = "SQL connection string"
}

Using Visual Studio Code, format the Terraform script by opening the command palette (F1) and selecting Format Document. This is an important step: the deployment pipeline fails if the file is not formatted properly.

Deploying SSHS

GitHub actions

The code is written, the application works, and the infrastructure is declared in a file and ready to be pushed out. The last step to get the first microservice online is to define a pipeline that achieves Continuous Integration and Deployment of the service.

Under the .github/workflows folder, create a file named build-deploy.yml. This file is composed of different sections:

  • the action definition
  • the environment variables declarations
  • the jobs to execute:
    • the infrastructure provisioning
    • the ProductCatalog service build and deployment

Get started with the definition:

name: Build and Deploy

on:
  push:
    branches: [main]

The first line sets the name of the action; the others define the action trigger: in this case, the action starts every time a new commit is pushed to the main branch (this includes PR merge commits).

Set the environment variables as follows:

env:
  ASPNETCORE_ENVIRONMENT: Production
  PROJECT_PRODUCT_CATALOG: src/ProductCatalog/ProductCatalog.csproj
  REGISTRY_NAME: 'sshsconrgs01.azurecr.io'
  DB_CONNECTION_STRING: Host=${{ secrets.DB_HOST }}.postgres.database.azure.com;Port=${{ secrets.DB_PORT }};Username=${{ secrets.DB_ADMIN_USERNAME }}@${{ secrets.DB_HOST }};Password=${{ secrets.DB_ADMIN_PASSWORD }};Database=${{ secrets.DB_NAME }};

  • ASPNETCORE_ENVIRONMENT: sets the environment for the ASP.NET application
  • PROJECT_PRODUCT_CATALOG: sets the path to the ProductCatalog csproj file
  • REGISTRY_NAME: sets the ACR name to query
  • DB_CONNECTION_STRING: sets the database connection string that will be eventually saved in Azure KeyVault by Terraform and used by the application

Other variables will be added later on, but this is enough for now.

Path filters

Before moving towards the deployment jobs (infrastructure and Pc service) I would like to introduce an optimization.

In fact, building and deploying the entire infrastructure and all the services is quite expensive in terms of time. Why deploy all the services if they haven’t been touched in the last commit? The same holds for the cloud infrastructure: if the files haven’t changed, there is no need to trigger the Terraform script.

To achieve this optimization, introduce a job whose aim is to understand which paths have changed and trigger the right jobs:

jobs:
  path-filters:
      runs-on: ubuntu-latest
      outputs:
        terraformPath: ${{ steps.filter.outputs.terraform }}
        productCatPath: ${{ steps.filter.outputs.prodcat }}

      steps:
        - uses: actions/checkout@v2
        - uses: dorny/paths-filter@v2
          id: filter
          with:
            filters: |
              terraform:
                - 'terraform/**'
              prodcat:
                - 'src/ProductCatalog/**'

        - name: terraform tests
          if: steps.filter.outputs.terraform == 'true'
          run: echo "Terraform path is triggered"

        - name: prod cat tests
          if: steps.filter.outputs.prodcat == 'true'
          run: echo "ProductCatalog path is triggered"

The next jobs declare a dependency on this job through the needs keyword and, eventually, an if condition.

Infrastructure deployment

The next job to define is about the Terraform script. This job actually runs some Terraform commands:

  • fmt to check for formatting errors; if it raises any, format the file in Visual Studio Code by opening the command palette and selecting Format Document, then recommit
  • init: initialize the working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times
  • validate: validates the configuration files in a directory, referring only to the configuration and not accessing any remote services such as remote state, provider APIs, etc.
  • plan: creates an execution plan, which lets you preview the changes that Terraform plans to make to your infrastructure
  • apply: executes the actions proposed in the Terraform plan

But before all of that, Terraform requires some environment variables in order to authenticate against Azure with the service principal:

  • ARM_CLIENT_ID
  • ARM_CLIENT_SECRET
  • ARM_SUBSCRIPTION_ID
  • ARM_TENANT_ID

For some of the commands above, input variable values must be passed using the TF_VAR_<varname> syntax. Unfortunately, they have to be repeated at every step that uses them. The values of these variables are stored as GitHub Secrets, as explained later.

infrastructure:
    env:
      ARM_CLIENT_ID: ${{ secrets.AZURE_AD_CLIENT_ID }}
      ARM_CLIENT_SECRET: ${{ secrets.AZURE_AD_CLIENT_SECRET}}
      ARM_SUBSCRIPTION_ID: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      ARM_TENANT_ID: ${{ secrets.AZURE_AD_TENANT_ID }}
    runs-on: ubuntu-latest
    needs: path-filters
    if: needs.path-filters.outputs.terraformPath == 'true'
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1.4.0

      - name: Terraform Format
        id: fmt
        working-directory: "./terraform"
        run: terraform fmt -check

      - name: Terraform Init
        id: init
        working-directory: "./terraform"
        run: terraform init

      - name: Terraform Validate
        id: validate
        working-directory: "./terraform"
        run: terraform validate

      - name: Terraform Plan
        id: plan
        env:
          TF_VAR_docker_registry_url: ${{ env.REGISTRY_NAME }}
          TF_VAR_docker_registry_username: ${{ secrets.AZURE_AD_CLIENT_ID }}
          TF_VAR_docker_registry_password: ${{ secrets.AZURE_AD_PASSWORD }}
          TF_VAR_db_admin_username: ${{ secrets.DB_ADMIN_USERNAME }}
          TF_VAR_db_admin_password: ${{ secrets.DB_ADMIN_PASSWORD }}
          TF_VAR_db_connection_string: ${{ env.DB_CONNECTION_STRING }}
        working-directory: "./terraform"
        run: terraform plan
        continue-on-error: true

      - name: Terraform Plan Status
        if: steps.plan.outcome == 'failure'
        run: exit 1

      - name: Terraform Apply
        id: apply
        env:
          TF_VAR_docker_registry_url: ${{ env.REGISTRY_NAME }}
          TF_VAR_docker_registry_username: ${{ secrets.AZURE_AD_CLIENT_ID }}
          TF_VAR_docker_registry_password: ${{ secrets.AZURE_AD_PASSWORD }}
          TF_VAR_db_admin_username: ${{ secrets.DB_ADMIN_USERNAME }}
          TF_VAR_db_admin_password: ${{ secrets.DB_ADMIN_PASSWORD }}
          TF_VAR_db_connection_string: ${{ env.DB_CONNECTION_STRING }}
        working-directory: "./terraform"
        run: terraform apply -auto-approve

The Terraform Plan Status step makes the job fail (exit 1) when the plan step fails; it is needed because the plan step runs with continue-on-error: true.

ProductCatalog deployment

The last job is the microservice deployment. It logs in to Azure with both the az CLI and the container registry, builds the Docker image from the Dockerfile, and pushes the resulting image to ACR. From there, the image is deployed to the Azure App Service instance. Finally, it logs out of the az CLI to release resources.

prodcat:
    runs-on: ubuntu-latest
    needs: infrastructure
    if: needs.path-filters.outputs.productCatPath == 'true'
    steps:
      - name: "Checkout GitHub Action"
        uses: actions/checkout@main

      - name: "Az CLI login"
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }} 

      - name: "Login into Azure Container Registry"
        uses: azure/docker-login@v1
        with:
          login-server: ${{ env.REGISTRY_NAME }}
          username: ${{ secrets.AZURE_AD_CLIENT_ID }}
          password: ${{ secrets.AZURE_AD_PASSWORD }}

      - name: "Pushing docker image to ACR"
        run: |
          docker build -t ${{ env.REGISTRY_NAME }}/productcatalog:${{ github.sha }} -f src/ProductCatalog/Docker/Dockerfile src/ProductCatalog
          docker push ${{ env.REGISTRY_NAME }}/productcatalog:${{ github.sha }}

      - name: "Push image to web app"
        uses: azure/webapps-deploy@v2
        with:
          app-name: "sshsappsrvcat01"
          images: "${{ env.REGISTRY_NAME }}/productcatalog:${{ github.sha }}"
      
      - name: Azure logout
        run: |
          az logout

NOTE: This script should also contain the command that applies the database migrations, but for some reason I am not able to make it work as I do in other projects. I made several attempts with different strategies (bundles, scripts, and so on), but it keeps ignoring the IDbContextFactory class, which is why migrations are skipped in this post. It seems I am not the only one facing this issue on .NET 6; others say it is caused by the new Program.cs template and suggest switching to the new minimal hosting model. The alternative is to apply migrations from code (something I don’t like) or from the local command line using the remote database connection string. If you know how to work around this issue, which has cost me so many nights, I would be grateful.
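For completeness, here is a minimal sketch of the apply-migrations-from-code fallback mentioned above. It assumes the Startup-based template (with its usual CreateHostBuilder method) and that ProductCatalogDbContext is registered in the DI container; the placement and names are illustrative, not the article’s actual code.

// Program.cs (ProductCatalog) - hypothetical fallback: apply pending EF Core
// migrations at startup instead of from the pipeline.
var host = CreateHostBuilder(args).Build();

using (var scope = host.Services.CreateScope())
{
    // Requires Microsoft.EntityFrameworkCore and Microsoft.Extensions.DependencyInjection.
    var dbContext = scope.ServiceProvider.GetRequiredService<ProductCatalogDbContext>();
    dbContext.Database.Migrate();
}

host.Run();

This keeps the pipeline simple, at the cost of coupling schema changes to application startup.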

GitHub Secrets

Now the pipelines are set up, but no secret values have been defined yet. Open the GitHub repository web page, navigate to Settings, and then to Secrets. Take the values returned by the az CLI RBAC commands discussed in the Create a service principal for the app paragraph and add:

  • AZURE_AD_CLIENT_ID with the appId value
  • AZURE_AD_CLIENT_SECRET with the clientSecret value
  • AZURE_AD_PASSWORD with the clientSecret value
  • AZURE_AD_TENANT_ID with the tenant value
  • AZURE_CREDENTIALS with the JSON copied before:

    {
      "clientId": "<GUID>",
      "clientSecret": "<GUID>",
      "subscriptionId": "<GUID>",
      "tenantId": "<GUID>",
      (...)
    }
  • AZURE_SUBSCRIPTION_ID with the subscriptionId value
  • DB_ADMIN_USERNAME with your SQL username (don’t use admin, otherwise you get the error PostgreSQL AD Administrator login can not be "admin")
  • DB_ADMIN_PASSWORD with your SQL user password
  • DB_HOST with your database host name
  • DB_NAME with your database name
  • DB_PORT with your database port

Deploy and test ProductCatalog service

Push the main branch to GitHub and open the Actions section to monitor the pipeline logs. When the run has finished, you should be able to reach https://sshsappsrvcat01.azurewebsites.net at the following paths:

  • health/ping
  • health/dbcontext
  • swagger

Notifications service

The goal of this project is to send a notification (an email) when a new product is added to the platform. What we need to do, then, is update the Pc project to send messages over a queue and consume them in Notifications.

Create the project

Create a project named "Notifications" as a sibling of the Pc project. You can repeat exactly the same steps as before, configuring the Program and Startup classes. Add UserSecrets as well, because the Service Bus connection string will be stored there during development.
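As a reference point, a minimal Program class for the Notifications service might look like the sketch below, assuming the same Startup-based hosting model used for the other services in this article.

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

namespace Notifications
{
    public class Program
    {
        public static void Main(string[] args) =>
            Host.CreateDefaultBuilder(args)
                // CreateDefaultBuilder also loads User Secrets in the Development environment.
                .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>())
                .Build()
                .Run();
    }
}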

Install the following NuGet packages:

  • Azure.Extensions.AspNetCore.Configuration.Secrets to add KeyVault support
  • Azure.Identity to add KeyVault support
  • Azure.Messaging.ServiceBus to use the Service Bus
  • Microsoft.Extensions.Azure to use the Service Bus

Then generate the user secrets that hold the Service Bus connection string (and the queue name), for example as sketched below.
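A possible sketch, run from the Notifications project folder; the keys match what the code below reads from configuration, while the values are placeholders:

dotnet user-secrets init
dotnet user-secrets set "ConnectionStrings:ServiceBus" "<your-service-bus-connection-string>"
dotnet user-secrets set "QueueName" "<your-queue-name>"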

Service Bus

The Notifications service has to listen to the Service Bus queue and trigger an event when a new message is received. When a message is successfully processed, it can be removed from the queue.

The message sent by Pc and received by Nts is an IntegrationEvent. The contract is the following; it must be defined in both services, since no code-sharing (DRY) strategy is adopted in this solution:

namespace Notifications.Events
{
    public class ProductCreatedIntegrationEvent
        : IntegrationEvent
    {
        [JsonPropertyName("id")]
        public Guid Id { get; }

        [JsonPropertyName("name")]
        public string Name { get; }

        [JsonPropertyName("price")]
        public decimal Price { get; }

        [JsonPropertyName("owner")]
        public string Owner { get; }

        [JsonConstructor]
        public ProductCreatedIntegrationEvent(
            Guid id,
            string name,
            decimal price,
            string owner)
        {
            Id = id;
            Name = name;
            Price = price;
            Owner = owner;
        }
    }

    public class IntegrationEvent
    {
        public IntegrationEvent()
        {
            MessageId = Guid.NewGuid();
        }

        [JsonPropertyName("messageId")]
        public Guid MessageId { get; }
    }
}

Let’s update the Startup class to configure the Service Bus client:

public void ConfigureServices(IServiceCollection services)
{
    services.AddAzureClients(builder =>
    {
        var connectionString = Configuration.GetConnectionString("ServiceBus");
        builder.AddServiceBusClient(connectionString);
    });
}

Create an interface and its implementation to handle messages from the Service Bus queue:

public interface IServiceBusListener
{
    Task RegisterAsync();
}

public class ServiceBusListener : IServiceBusListener
{
    private readonly ServiceBusProcessor _serviceBusProcessor;
    private readonly ILogger<ServiceBusListener> _logger;

    public ServiceBusListener(
        ServiceBusClient serviceBusClient,
        IConfiguration configuration,
        ILogger<ServiceBusListener> logger)
    {
        var serviceBusProcessorOptions = new ServiceBusProcessorOptions
        {
            MaxConcurrentCalls = 1,       // process one message at a time
            AutoCompleteMessages = false, // messages are completed explicitly after handling
        };

        _serviceBusProcessor = serviceBusClient.CreateProcessor(
            configuration["QueueName"],
            serviceBusProcessorOptions);

        _logger = logger;
    }

    public async Task RegisterAsync()
    {
        _serviceBusProcessor.ProcessMessageAsync += ProcessMessagesAsync;
        _serviceBusProcessor.ProcessErrorAsync += ProcessErrorAsync;

        await _serviceBusProcessor.StartProcessingAsync();
    }
    }

    private Task ProcessErrorAsync(ProcessErrorEventArgs arg)
    {
        _logger.LogError(arg.Exception, "Message handler encountered an exception");
        return Task.CompletedTask;
    }

    private async Task ProcessMessagesAsync(ProcessMessageEventArgs args)
    {
        ProductCreatedIntegrationEvent myPayload = args.Message.Body.ToObjectFromJson<ProductCreatedIntegrationEvent>();

        if (!EmailHelper.IsEmail(myPayload.Owner))
        {
            await args.CompleteMessageAsync(args.Message);
            return;
        }

        try
        {
            // send the email as you prefer
            await args.CompleteMessageAsync(args.Message);
        }
        catch (Exception ex)
        {
            _logger.LogCritical(ex, "Error sending notification");
        }
    }
}

public static class EmailHelper
{
    private const string EMAIL_PATTERN = @"\A(?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?)\Z";

    public static bool IsEmail(string content)
    {
        return Regex.IsMatch(content, EMAIL_PATTERN, RegexOptions.IgnoreCase);
    }
}

Now, update the Startup class to register the listener and start it when the application boots:

public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<IServiceBusListener, ServiceBusListener>();
}

public void Configure(IApplicationBuilder app)
{
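    // Resolve the listener from the root provider and start listening to the queue; the returned Task is not awaited here.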
    var bus = app.ApplicationServices.GetService<IServiceBusListener>();
    bus.RegisterAsync();
}

Update ProductCatalog service to send messages on queue

Install the Azure.Messaging.ServiceBus and Microsoft.Extensions.Azure NuGet packages, then update the Startup class to configure the Service Bus client:

services.AddAzureClients(builder =>
{
    var connectionString = Configuration.GetConnectionString("ServiceBus");
    builder.AddServiceBusClient(connectionString);
});

Go into the ProductService class, add a private readonly IServiceBusService _serviceBusSender field (the interface is defined right after), and update the constructor and the CreateProductAsync method:

public ProductService(
    IServiceBusService serviceBusSender,
    ProductCatalogDbContext dbContext,
    ILogger<ProductService> logger)
{
    _serviceBusSender = serviceBusSender;
    _dbContext = dbContext;
    _logger = logger;
}

public async Task<Guid> CreateProductAsync(CreateProductRequest request)
{
    var product = new Product(
        request.Name,
        request.Price,
        request.Owner);

    _dbContext.Products.Add(product);

    await _dbContext.SaveChangesAsync();

    // -- start new code --
    var ie = new ProductCreatedIntegrationEvent(
        product.Id,
        product.Name,
        product.Price,
        product.Owner);

    try
    {
        await _serviceBusSender.SendEventAsync<ProductCreatedIntegrationEvent>(ie);
    }
    catch (Exception ex)
    {
        _logger.LogWarning(ex, "Failed to publish ie of type {ie}", nameof(ProductCreatedIntegrationEvent));
    }

    // -- end new code --

    return product.Id;
}

Then, create the contract and the implementation used to send events on the bus. Note that in CreateProductAsync a publish failure is only logged as a warning, so the product is still created even if the event cannot be sent:

public interface IServiceBusService
{
    Task SendEventAsync<T>(T integrationEvent)
        where T : IntegrationEvent;
}

public class ServiceBusService : IServiceBusService
{
    private readonly ServiceBusSender _serviceBusSender;

    public ServiceBusService(
        ServiceBusClient serviceBusClient,
        IConfiguration configuration)
    {
        _serviceBusSender = serviceBusClient.CreateSender(configuration["QueueName"]);
    }

    public async Task SendEventAsync<T>(T integrationEvent)
        where T : IntegrationEvent
    {
        var options = new JsonSerializerOptions
        {
            WriteIndented = true
        };

        string messagePayload = JsonSerializer.Serialize(integrationEvent, options);
        var message = new ServiceBusMessage(messagePayload);

        await _serviceBusSender.SendMessageAsync(message);
    }
}
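
The ProductService above receives an IServiceBusService through its constructor, so remember to register the implementation in the ProductCatalog Startup class as well. A minimal sketch, assuming a singleton registration analogous to the listener registration in the Notifications service:

public void ConfigureServices(IServiceCollection services)
{
    // ...existing ProductCatalog registrations (DbContext, Swagger, Azure clients, etc.)...

    // Make the Service Bus sender available for injection into ProductService.
    services.AddSingleton<IServiceBusService, ServiceBusService>();
}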

Docker

As we did for the ProductCatalog project, create the Dockerfile for the Notifications service. Don’t forget to replicate the .dockerignore file as well:

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS base
WORKDIR /app
COPY . .

RUN dotnet restore \
    Notifications.csproj

RUN dotnet publish \
    --configuration Release \
    --self-contained false \
    --runtime linux-x64 \
    --output /app/publish \
    Notifications.csproj

FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS final
WORKDIR /app
COPY --from=base /app/publish .
EXPOSE 80
ENTRYPOINT ["dotnet", "Notifications.dll"]

Updating scripts with Notifications service resources

Update both the Terraform and GitHub Actions files to add support for the Notifications service. Everything can be copied from the Pc definition except for the Service Bus resources in the Terraform file, which look like:

resource "azurerm_servicebus_namespace" "service-bus-namespace" {
  name                = "sshssrvbusnmps01"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  sku                 = "Basic"
}

resource "azurerm_servicebus_queue" "service-bus-queue" {
  name         = "sshssrvbusqueu01"
  namespace_id = azurerm_servicebus_namespace.service-bus-namespace.id
}

Each Service Bus resource must be part of a namespace, which works in a way quite similar to the App Service Plan. Pretty easy, isn’t it?

Push everything to the cloud and the job is done!

Conclusions

In this post, I explained how to build and deploy a cloud-native application. Even though it only performs a few CRUD operations and sends a message on a queue, many issues resembling real-world scenarios have been addressed, for example how user secrets are handled in development and production environments. Another important aspect is the IaC approach used to declare cloud resource provisioning with Terraform, one of the most widely used tools in this space. GitHub Actions provides the CI and CD pipelines, a practice organizations adopt regardless of the specific service they use (Azure DevOps, GitLab, and so on). Finally, Swagger and Docker are other important tools used every day by millions of developers all around the world.


 
