Automate Deployment & Management of Docker Cloud/Virtual Java Microservices with DCHQ

This article demonstrates a solution for automating the deployment and management of a Docker Java microservices application on any cloud or virtualization platform. We do so by extending an existing project, Chris Richardson’s demo of a microservices-based money-transfer application with event sourcing, CQRS and Docker, with automated build and deployment. Our project, which includes Dockerfiles for each of the microservices, also provides a consolidated front-end that uses all of the microservices and can run on any web server. We will use the Nginx web server, placing the front-end JavaScript code in its default directory, /usr/share/nginx/html/. Our front-end will expose the following capabilities (sample REST calls are sketched after the list):

  • Create a new account using an initial balance.
  • Query an account to get the remaining balance.
  • Transfer money from one account to another.
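As a minimal sketch, here are the kinds of REST calls the front-end makes for these three operations; the endpoint paths follow the upstream event-sourcing-examples API, the $ACCOUNT_* variables stand in for the service addresses injected by the application template later in this article, and the account IDs and balance values are placeholders:

# Create a new account with an initial balance (returns an account ID)
curl -X POST "http://$ACCOUNT_CMD_IP:$ACCOUNT_CMD_PORT/accounts" \
     -H "Content-Type: application/json" \
     -d '{"initialBalance": 500}'

# Query an account to get the remaining balance
curl "http://$ACCOUNT_QUERY_IP:$ACCOUNT_QUERY_PORT/accounts/<account-id>"

# Transfer money from one account to another
curl -X POST "http://$ACCOUNT_TRANSFER_IP:$ACCOUNT_TRANSFER_PORT/transfers" \
     -H "Content-Type: application/json" \
     -d '{"fromAccountId": "<id-1>", "toAccountId": "<id-2>", "amount": 100}'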

The money-transfer application we will create serves as an example of building and deploying microservices with event sourcing, CQRS and Docker. The application is architected to be highly scalable and highly available, using polyglot persistence, event sourcing (ES) and command query responsibility segregation (CQRS). Microservices applications consist of loosely coupled components that communicate using events. Those components can be deployed either as independent services or packaged as a monolithic application for simplified development and testing. In this project we focus on automating the former approach, i.e. deploying this application as separate services running in Docker containers.

Our goal will be to run and manage the Event Sourcing Docker Java Microservices application template in this project on 13 different clouds and virtualization platforms (including vSphere, OpenStack, AWS, Rackspace, Microsoft Azure, Google Compute Engine, DigitalOcean, IBM SoftLayer, etc.). We recommend you follow along by signing up for the hosted DCHQ.io or downloading DCHQ On-Premise.

Background

Containerizing enterprise Java applications is a challenge largely because existing application composition frameworks do not address complex dependencies, external integrations or auto-scaling workflows post-provision. Moreover, the ephemeral design of containers compels developers to spin up new containers and re-create the complex dependencies and external integrations with every version update.

DCHQ, available in hosted and on-premise versions, addresses these challenges and simplifies the containerization of enterprise Java applications through an advanced application composition framework that extends Docker Compose with cross-image environment variable bindings, extensible BASH script plugins that can be invoked at request time or post-provision, and application clustering for high availability across multiple hosts or regions with support for auto scaling.

Once an application is provisioned, a user can:

  • monitor the CPU, Memory, and I/O of the running containers,
  • get notifications and alerts,
  • get access to application backups, automatic scale in/out workflows, and plugin execution workflows to update running containers.

Moreover, out-of-the-box workflows that facilitate continuous delivery with Jenkins (more build-server support coming soon) allow developers to refresh the Java WAR file of a running application without disrupting the existing dependencies and integrations.

In this author’s independent blogs, we have demonstrated end-to-end deployment automation of more traditional brownfield Java applications (such as the Names Directory, Pizza Shop and Movie Store apps) on multi-tier Docker-based application stacks across 13 different clouds and virtualization platforms.

In our current project we will focus on a microservices architecture that requires no application servers whatsoever. Each microservice runs on an extremely lightweight Java container. A consolidated front-end was built to make REST API calls to each of the connected microservices in order to execute a specific task (e.g. create an account, query an account or transfer money from one account to another). One of the main advantages of microservices (when compared to a typical monolithic application) is that these modular services can be easily replaced and extended without requiring changes to the other microservices. In a way, this eliminates single points of failure and makes it easier for developers to contribute to the overall project.

In this project we will provide a step-by-step guide for deploying and managing this Java application on different cloud/virtual infrastructure.

We will need to perform each of the following steps, which we will see in detail:

  • Obtain credentials for the Event Store
  • Apply a patch and build the JAR files
  • Automate the building of Docker images from Dockerfiles in this project using DCHQ
  • Build the YAML-based application templates that can be reused on any Linux host running anywhere
  • Provision and auto-scale the underlying infrastructure on any cloud (with Rackspace being the example in this article)
  • Deploy the multi-tier Java application on the Rackspace cluster
  • Monitor the CPU, Memory and I/O of the running containers
  • Enable the continuous delivery workflow with Jenkins to update the JAR file of the running microservices when a build is triggered

We will now cover each of those steps in detail:

Obtain credentials for the Event Store

In order to run the microservices separately, you need to get credentials for the Event Store.

Copy and paste the obtained values for EVENTUATE_API_KEY_ID and EVENTUATE_API_KEY_SECRET in the Event Sourcing Docker Java Microservices Application Template.

Apply a patch and build the JAR files

The JAR files used in the Docker images were built from this project.

All of the JAR files were built on December 27th, 2015 and embedded in the Docker images.

Before building the JAR files, copy the patched CORSFilter.java into the event-sourcing-examples/java-spring/common-web/src/main/java/net/chrisrichardson/eventstore/javaexamples/banking/web/util directory. You can then execute ./gradlew assemble:

git clone https://github.com/cer/event-sourcing-examples.git

wget https://github.com/dchqinc/event-sourcing-microservices/raw/master/patch/CORSFilter.java -O event-sourcing-examples/java-spring/common-web/src/main/java/net/chrisrichardson/eventstore/javaexamples/banking/web/util/CORSFilter.java

cd event-sourcing-examples/java-spring

./gradlew assemble

Automate the building of Docker images from Dockerfiles in this project using DCHQ

All of the images in this project have already been built and pushed to the DCHQ public Docker Hub repository for your reference. Here are the custom images that will be used in the application template:

  • dchq/nginx-microservices:latest
  • dchq/accounts-command-side-service
  • dchq/transactions-command-side-service
  • dchq/accounts-query-side-service

To build the images and push them into your own Docker Hub or Quay repository, you can use DCHQ. Each of the four images was built from its own GitHub project containing the Dockerfile.

Once logged in to DCHQ (either the hosted DCHQ.io or on-premise version), you can navigate to Automate > Image Build and then click on the + button to create a new Dockerfile (Git/GitHub/BitBucket) image build.

Provide the required values as follows:

  • Git URL
  • Git Branch – this field is optional; you can specify a branch from the GitHub project. The default branch is master.
  • Git Credentials – you can store the credentials to a private GitHub repository securely in DCHQ by navigating to Manage > Cloud Providers & Repos and clicking on the + to select Credentials
  • Cluster – the building of Docker images is orchestrated through the DCHQ agent. As a result, you need to select a cluster on which an agent will be used to execute the building of Docker images. If a cluster has not been created yet, please refer to this section to either register a running host or automate the provisioning of new virtual infrastructure.
  • Push to Registry – push the newly created image to either a public or private repository on Docker Hub or Quay. To register a Docker Hub or Quay account, navigate to Manage > Cloud Providers & Repos, and click on the + to select Docker Registries
  • Repository – the name of the repository to which the image will be pushed. For example, our image was pushed to dchq/nginx-microservices:latest
  • Tag – the tag name to give the new image. The supported dynamic tags in DCHQ include:
    • {{date}} -- formatted date
    • {{timestamp}} -- the full time-stamp
  • Cron Expression – schedule the building of Docker images using standard cron expressions. This facilitates daily and nightly builds; example values are shown below.
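For example, the last three fields might look as follows; the repository name is illustrative, and the cron format is assumed to be Quartz-style:

Repository: dchq/accounts-command-side-service
Tag: {{timestamp}}
Cron Expression: 0 0 2 * * ?   (a hypothetical nightly build at 2:00 AM)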

Once the required fields are completed, click Save.

You can then click on the Play Button to build the Docker image on-demand.

Build the YAML-based application templates that can be reused on any Linux host running anywhere

Once logged in to DCHQ (either the hosted DCHQ.io or on-premise version), you can navigate to Manage > App/Machine and then click on the + button to create a new Docker Compose template. You can refer to the detailed documentation for creating Docker Compose application templates here.

We have created an application template using the Docker images we built in the previous step. The template includes the following components:

  • Nginx -- for hosting the consolidated front-end for this microservices application
  • Account Creation, Account Query and Balance Transfer Microservices -- these services were built from the original project. A patch was applied by copying CORSFilter.java into the "event-sourcing-examples/java-spring/common-web/src/main/java/net/chrisrichardson/eventstore/javaexamples/banking/web/util" directory.
  • Mongo -- for the databases

Plugins to Configure Web Server at Request Time and Post-Provision

In the application template, you will notice that the Nginx container is invoking a BASH script plugin at request time in order to configure the container. This plugin can be executed post-provision as well.

These plugins can be created by navigating to Manage > Plugins. Once the BASH script is provided, the DCHQ agent will execute this script inside the container. You can specify arguments that can be overridden at request time and post-provision. Anything preceded by the $ sign is considered an argument -- for example, $file_url can be an argument that allows developers to specify the download URL for a WAR file. This can be overridden at request time and post-provision when a user wants to refresh the Java WAR file on a running container.
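As a minimal sketch of such a plugin body, the following script downloads and deploys a WAR file; the $file_url argument name and the Tomcat deployment path are assumptions for illustration, not values mandated by DCHQ:

#!/bin/bash
# $file_url is a plugin argument that can be overridden at request time
# or post-provision (hypothetical name, used here for illustration).
# The target path assumes a Tomcat-based container.
wget "$file_url" -O /usr/local/tomcat/webapps/ROOT.war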

The plugin ID needs to be provided when defining the YAML-based application template. For example, to invoke a BASH script plugin for Nginx, we would reference the plugin ID as follows:

nginx:
  image: dchq/nginx-microservices:latest
  publish_all: true
  mem_min: 50m
  host: host1
  plugins:
    - !plugin
      id: Gl5Hi
      restart: true
      lifecycle: on_create
      arguments:
        - ACCOUNT_CMD_IP={{accountscommandside | ip}}
        - ACCOUNT_CMD_PORT={{accountscommandside | port_8080}}
        - ACCOUNT_TRANSFER_IP={{transactionscommandside | ip}}
        - ACCOUNT_TRANSFER_PORT={{transactionscommandside | port_8080}}
        - ACCOUNT_QUERY_IP={{accountsqueryside | ip}}
        - ACCOUNT_QUERY_PORT={{accountsqueryside | port_8080}}

In this example, Nginx is invoking a BASH script plugin that injects the microservices containers’ IPs and port numbers into the /usr/share/nginx/html/js/app.js file dynamically at request time. The plugin ID is Gl5Hi.
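The actual plugin body is not shown in this article, but a minimal sketch of what such a script could do looks like this; the placeholder tokens in app.js are assumptions for illustration:

#!/bin/bash
# DCHQ resolves the declared arguments (e.g. ACCOUNT_CMD_IP) before the
# script runs; here we substitute hypothetical placeholder tokens in app.js.
APP_JS=/usr/share/nginx/html/js/app.js
sed -i "s|ACCOUNT_CMD_ENDPOINT|http://${ACCOUNT_CMD_IP}:${ACCOUNT_CMD_PORT}|g" "$APP_JS"
sed -i "s|ACCOUNT_TRANSFER_ENDPOINT|http://${ACCOUNT_TRANSFER_IP}:${ACCOUNT_TRANSFER_PORT}|g" "$APP_JS"
sed -i "s|ACCOUNT_QUERY_ENDPOINT|http://${ACCOUNT_QUERY_IP}:${ACCOUNT_QUERY_PORT}|g" "$APP_JS"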

Service Discovery with plugin lifecycle stages

The lifecycle parameter in plugins allows you to specify the exact stage or event at which to execute the plugin. If lifecycle is not specified, the plugin is executed on_create by default. You can refer to the detailed documentation for setting up Docker service discovery here. Here are the supported lifecycle stages (a usage sketch follows the list):

  • on_create -- executes the plugin when creating the container
  • on_start -- executes the plugin after a container starts
  • on_stop -- executes the plugin before a container stops
  • on_destroy -- executes the plugin before destroying a container
  • post_create -- executes the plugin after the container is created and running
  • post_start[:Node] -- executes the plugin after another container starts
  • post_stop[:Node] -- executes the plugin after another container stops
  • post_destroy[:Node] -- executes the plugin after another container is destroyed
  • post_scale_out[:Node] -- executes the plugin after another cluster of containers is scaled out
  • post_scale_in[:Node] -- executes the plugin after another cluster of containers is scaled in
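For example, a sketch of re-running the Nginx configuration plugin after the accounts command-side cluster scales out, so the front-end picks up the new container addresses, might look like the following; it assumes the same plugin ID used earlier and that the Node suffix references the service by its template name:

nginx:
  image: dchq/nginx-microservices:latest
  plugins:
    - !plugin
      id: Gl5Hi
      lifecycle: post_scale_out:accountscommandside
      arguments:
        - ACCOUNT_CMD_IP={{accountscommandside | ip}}
        - ACCOUNT_CMD_PORT={{accountscommandside | port_8080}}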

cluster_size and host parameters for HA deployment across multiple hosts

You will notice that the cluster_size parameter allows you to specify the number of containers to launch (with the same application dependencies).

The host parameter allows you to specify the host to use for container deployments. This is possible if you selected Weave as the networking layer when creating your clusters; it ensures high availability for your application server clusters across different hosts (or regions) and lets you comply with affinity rules, for example ensuring that the database runs on a separate host. Here are the values supported for the host parameter:

  • host1, host2, host3, etc. – selects a host randomly within a data-center (or cluster) for container deployments
  • IP Address 1, IP Address 2, etc. -- allows a user to specify the actual IP addresses to use for container deployments
  • Hostname 1, Hostname 2, etc. -- allows a user to specify the actual hostnames to use for container deployments
  • Wildcards (e.g. “db-*”, or “app-srv-*”) – to specify the wildcards to use within a hostname

Environment Variable Bindings Across Images

Additionally, a user can create cross-image environment variable bindings by making a reference to another image’s environment variable. In this case, we have made several bindings – including ACCOUNT_CMD_IP={{accountscommandside | ip}} – in which the Account Creation microservice container IP is resolved dynamically at request time and is used to ensure that Nginx can establish a connection with this microservice.

Here is a list of the supported environment variable values:

  • {{alphanumeric | 8}} – creates a random 8-character alphanumeric string. This is most useful for creating random passwords.
  • {{Image Name | ip}} – allows you to enter the host IP address of a container as a value for an environment variable. This is most useful for allowing the middleware tier to establish a connection with the database.
  • {{Image Name | container_ip}} – allows you to enter the name of a container as a value for an environment variable. This is most useful for allowing the middleware tier to establish a secure connection with the database (without exposing the database port).
  • {{Image Name | container_private_ip}} – allows you to enter the internal IP of a container as a value for an environment variable. This is most useful for allowing the middleware tier to establish a secure connection with the database (without exposing the database port).
  • {{Image Name | port_Port Number}} – allows you to enter the Port number of a container as a value for an environment variable. This is most useful for allowing the middleware tier to establish a connection with the database. In this case, the port number specified needs to be the internal port number – i.e. not the external port that is allocated to the container. For example, {{PostgreSQL | port_5432}} will be translated to the actual external port that will allow the middleware tier to establish a connection with the database.
  • {{Image Name | Environment Variable Name}} – allows you to enter the value of an image’s environment variable into another image’s environment variable. The use cases here are endless – as most multi-tier applications will have cross-image dependencies.

Event Sourcing Docker Java Microservices

nginx:
  image: dchq/nginx-microservices:latest
  publish_all: true
  mem_min: 50m
  host: host1
  plugins:
    - !plugin
      id: Gl5Hi
      restart: true
      lifecycle: on_create
      arguments:
        - ACCOUNT_CMD_IP={{accountscommandside | ip}}
        - ACCOUNT_CMD_PORT={{accountscommandside | port_8080}}
        - ACCOUNT_TRANSFER_IP={{transactionscommandside | ip}}
        - ACCOUNT_TRANSFER_PORT={{transactionscommandside | port_8080}}
        - ACCOUNT_QUERY_IP={{accountsqueryside | ip}}
        - ACCOUNT_QUERY_PORT={{accountsqueryside | port_8080}}

accountscommandside:
  image: dchq/accounts-command-side-service
  mem_min: 300m
  cluster_size: 1
  host: host1
  publish_all: true
  environment:
    - EVENTUATE_API_KEY_ID=<paste-your-key-here>
    - EVENTUATE_API_KEY_SECRET=<paste-your-key-here>

transactionscommandside:
  image: dchq/transactions-command-side-service
  mem_min: 300m
  cluster_size: 1
  host: host1
  publish_all: true
  environment:
    - EVENTUATE_API_KEY_ID=<paste-your-key-here>
    - EVENTUATE_API_KEY_SECRET=<paste-your-key-here>

accountsqueryside:
  image: dchq/accounts-query-side-service
  mem_min: 300m
  cluster_size: 1
  host: host1
  publish_all: true
  environment:
    - EVENTUATE_API_KEY_ID=<paste-your-key-here>
    - EVENTUATE_API_KEY_SECRET=<paste-your-key-here>
    - SPRING_DATA_MONGODB_URI=mongodb://{{mongodb | container_private_ip}}/mydb

mongodb:
  image: mongo:3.0.4
  host: host1

Provisioning and Auto-Scaling the Underlying Infrastructure on Any Cloud

Once an application is saved, you can register a cloud provider to automate the provisioning and auto-scaling of clusters on 12 different cloud end-points, including VMware vSphere, OpenStack, CloudStack, Amazon Web Services, Rackspace, Microsoft Azure, DigitalOcean, IBM SoftLayer, Google Compute Engine, and others.

To register a cloud provider for Rackspace (for example), navigate to Manage > Cloud Providers and Repos and click on the + button to select Rackspace. The Rackspace API Key needs to be provided – which can be retrieved from the Account Settings section of the Rackspace Cloud Control Panel.

You can then create a cluster with an auto-scale policy to automatically spin up new Cloud Servers. This can be done by navigating to the Manage > Clusters page and then clicking on the + button. You can select a capacity-based placement policy and then Weave as the networking layer in order to facilitate secure, password-protected cross-container communication across multiple hosts within a cluster. The auto-scale policy, for example, may set the maximum number of VMs (or Cloud Servers) to 10.

You can now provision a number of Cloud Servers on the newly created cluster, either through the UI-based workflow or by defining a simple YAML-based Machine Compose template that can be requested from the Self-Service Library.

UI-based Workflow – You can request Rackspace Cloud Servers by navigating to Manage > Machines and then clicking on the + button to select Rackspace. Once the Cloud Provider is selected, select the region, size and image needed. Ports are opened by default on Rackspace Cloud Servers to accommodate some of the port requirements (e.g. 32000-59000 for Docker, 6783 for Weave, and 5672 for RabbitMQ). A Cluster is then selected and the number of Cloud Servers can be specified.

YAML-based Machine Compose Template – You can first create a Machine Compose template for Rackspace by navigating to Manage > App/Machine and then selecting Machine Compose.

Here’s the template for requesting a 4GB Cloud Server.

Medium:
  region: IAD
  description: Rackspace 4GB instance
  instanceType: general1-4
  image: IAD/5ed162cc-b4eb-4371-b24a-a0ae73376c73
  count: 1

The supported parameters for the Machine Compose template are summarized below, followed by a sketch that exercises some of the optional parameters:

  • description: Description of the blueprint/template
  • instanceType: Cloud provider specific value (e.g. general1-4)
  • region: Cloud provider specific value (e.g. IAD)
  • image: Mandatory - fully qualified image ID/name (e.g. IAD/5ed162cc-b4eb-4371-b24a-a0ae73376c73 or vSphere VM Template name)
  • username: Optional - only for vSphere VM Template username
  • password: Optional - only for vSphere VM Template encrypted password. You can encrypt the password using the endpoint
  • network: Optional – Cloud provider specific value (e.g. default)
  • securityGroup: Cloud provider specific value (e.g. dchq-security-group)
  • keyPair: Cloud provider specific value (e.g. private key)
  • openPorts: Optional - comma separated port values
  • count: Total number of VMs; defaults to 1
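As an illustration, here is a sketch of the same Rackspace template using some of the optional parameters; the network, security group and port values below are assumptions for the sketch, not values required by Rackspace or DCHQ:

Medium:
  region: IAD
  description: Rackspace 4GB instance with optional parameters
  instanceType: general1-4
  image: IAD/5ed162cc-b4eb-4371-b24a-a0ae73376c73
  network: default
  securityGroup: dchq-security-group
  openPorts: 8080,6783
  count: 2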

Once the Machine Compose template is saved you can request this machine from the Self-Service Library. You can click Customize and then select the Cloud Provider and Cluster for provisioning these Rackspace Cloud Servers.

Deploying the Multi-Tier Java Application on the Rackspace Cluster

Once the Cloud Servers are provisioned, you can deploy a multi-tier, Docker-based Java application on the new Cloud Servers. This can be done by navigating to the Self-Service Library and then clicking on Customize to request a multi-tier application.

Select an Environment Tag (like DEV or QE) and the Rackspace Cluster you created, then click Run.

Accessing the In-Browser Terminal for the Running Containers

A command prompt icon should be available next to the containers’ names on the Live Apps page. This allows users to enter the container using a secure communication protocol through the agent message queue. A white list of commands can be defined by the Tenant Admin to ensure that users do not make any harmful changes on the running containers.

For the Nginx container, for example, we used the command prompt to verify that the app.js file contains the proper IPs and ports for the Docker Java microservices.

Using the in-browser terminal to display the contents of /usr/share/nginx/html/js/app.js in the Nginx container, we can confirm that the IPs and ports for the Docker Java microservices were properly injected into this file by DCHQ’s plug-in framework.
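For example, assuming cat is on the Tenant Admin’s command white list, the following command run in the in-browser terminal displays the injected values:

cat /usr/share/nginx/html/js/app.js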

Monitoring the CPU, Memory and I/O Utilization of the Running Containers

Once the application is up and running, developers can monitor the CPU, memory, and I/O of the running containers and receive alerts when these metrics exceed a pre-defined threshold. This is especially useful when performing functional and load testing.

You can perform historical monitoring analysis and then correlate issues to container updates or build deployments. This can be done by clicking on Stats, and selecting a custom date range to view CPU, Memory and I/O historically.

Enabling Continuous Delivery By Replacing the Containers or Updating the JAR File of the Running Application when a Build is Triggered by Jenkins

The “immutable” container model is a common best practice: Docker images containing the application code are rebuilt, and new containers are spun up, with every application update. DCHQ provides an automated build feature that allows developers to automatically create Docker images from Dockerfiles or private GitHub projects containing Dockerfiles. These images can then be pushed to a registered private or public repository on a Docker Private Registry, Docker Hub or Quay.

You can automatically “replace” running containers with new containers launched from the latest image pushed to a Docker registry. This can be done on demand, or automatically when a new image is detected in a Docker registry. To replace a Docker Java microservice container with a new one containing the latest JAR file, a user can simply click on the Actions menu, select Replace, and enter the image name from which a new container will be launched to replace the running container with the same application dependencies. Alternatively, a user can specify a trigger for this container replacement, based either on a simple CRON expression (i.e. a pre-defined schedule) or on the latest image push to a Docker registry.

Many developers may wish to update the running containers with the latest Java JAR file instead. For that, DCHQ allows developers to enable a continuous delivery workflow with Jenkins. This can be done by clicking on the Actions menu of the running application and then selecting Continuous Delivery. You can select a Jenkins instance that has already been registered with DCHQ, the actual job on Jenkins that will produce the latest JAR file, and then a BASH script plugin to grab this build and deploy it on a running application server. Once this policy is saved, DCHQ will grab the latest JAR file from Jenkins any time a build is triggered and deploy it on the running application server.
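As a sketch only, such a BASH script plugin might fetch the artifact from the job’s lastSuccessfulBuild URL (a standard Jenkins URL convention); the $jenkins_job_url argument name, the artifact path and the target directory below are hypothetical:

#!/bin/bash
# $jenkins_job_url, the artifact path and /app are illustrative assumptions.
wget "$jenkins_job_url/lastSuccessfulBuild/artifact/build/libs/service.jar" \
     -O /app/service.jar
# A restart of the Java process would follow here, depending on how the
# container supervises the service.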

As a result, developers will always have the latest JAR deployed on their running containers in DEV/TEST environments.

Conclusion

Containerizing enterprise Java applications is a challenge mainly because existing application composition frameworks do not address complex dependencies, service discovery or auto-scaling workflows post-provision.

DCHQ, available in hosted and on-premise versions, addresses all of these challenges and simplifies the containerization of enterprise Java applications through an advanced application composition framework that facilitates cross-image environment variable bindings, extensible BASH script plugins that can be invoked at different life-cycle stages of the application deployment, and application clustering for high availability across multiple hosts or regions with support for auto scaling.

Sign up for free on http://DCHQ.io or download DCHQ On-Premise to get access to out-of-box multi-tier Java application templates along with application lifecycle management functionality like monitoring, container updates, scale in/out and continuous delivery.

About the Author

Amjad Afanah is the co-founder of DCHQ, a Docker management solution focusing on enterprise application modeling, deployment, service discovery and lifecycle management. Prior to DCHQ, Amjad held senior product management positions at Oracle and VMware where he drove strategic products in cloud systems management and application deployment & management.

 

 
