
RedHat Microservices Architecture Developer Day London

On Thursday, RedHat ran a Microservices Architecture Developer Day in London, aimed at demonstrating a set of open-source tools that can be combined to build a microservice architecture. With Kubernetes demonstrated orchestrating Docker containers running Fabric8 and Apache Camel routes, along with logging and metrics, the combined presentations were a great introduction to the tools and techniques useful in building a distributed application.

James Strachan (@jstrachan, creator of Groovy and Jelly; but don't hold that against him) kicked off the day with a presentation entitled "Kubernetes for Java Developers." This covered the two-pizza rule (a team that can't comfortably be fed with two pizzas is too large) as a useful size boundary for building and hosting microservices, along with the principle that the same team should be responsible for managing their service in the face of failure. It also highlighted the importance of treating servers as cattle, not pets; if you're logging onto your server to make a configuration or software change, you're doing it wrong. By dockerising applications (with e.g. the docker-maven-plugin, covered later) it is possible to bring up a fresh copy of the Docker container/image at any time – any change required on the server should result in a new build and a new container image rather than a manual modification to an existing one.

He talked about Kubernetes organising compute units into pods, which can be associated with labels to identify their units, and services, which provide a generic implementation of some kind of functionality. These can be co-ordinated with a replication controller that dictates how many instances should be available at any one time. It's possible to schedule multiple containers as part of a service to a pod; they share the same local disk, network and so on. If one container dies, the entire pod is taken down and the pod/containers are brought up elsewhere. Service discovery (between containers in the same pod) uses environment variables to specify the host/port location; network traffic between containers in the same pod is wired up automatically. Going outside the pod requires public IP addresses to be known and the use of an external service registry, which Strachan didn't demonstrate in the time available (and which Kubernetes expects to be provided by other systems in any case). Kubernetes scales inside a single data centre, but does not (yet) cope with cross-data-centre or cross-region scheduling; existing environments segregate their runtime by data centre into multiple separate Kubernetes clusters.
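The environment-variable discovery described above can be sketched in a few lines of Java. This is a minimal illustration assuming the `<NAME>_SERVICE_HOST`/`<NAME>_SERVICE_PORT` variables that Kubernetes injects for each service; the class, the service name "backend" and the fallback behaviour are our own invention, not from the talk:

```java
// Minimal sketch of resolving a Kubernetes service address from the
// environment variables injected into each container.
public class ServiceDiscovery {

    static String resolve(String serviceName, String fallbackHostPort) {
        // Kubernetes upper-cases the service name and replaces '-' with '_'
        String prefix = serviceName.toUpperCase().replace('-', '_');
        String host = System.getenv(prefix + "_SERVICE_HOST");
        String port = System.getenv(prefix + "_SERVICE_PORT");
        if (host == null || port == null) {
            return fallbackHostPort; // not running under Kubernetes
        }
        return host + ":" + port;
    }

    public static void main(String[] args) {
        // Outside a cluster the variables are unset, so this falls back
        System.out.println(resolve("backend", "localhost:8080"));
    }
}
```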

He then demonstrated Kubernetes orchestrating multiple containers over multiple hosts, including the ability to schedule on-demand scale-out of an existing application. By using a Kubernetes router (kube-proxy) on the individual hosts, it is possible to transparently forward requests round-robin against the set of existing servers. Provided that the application's requests are stateless, or state is shared between instances (such as via a message queue processing system), the proxy transparently forwards TCP/IP connections and data to a back-end service. Strachan used this to scale up an Apache Camel route, letting Kubernetes go from one instance to three, and demonstrated the effect of killing a single instance and having Kubernetes restart it automatically.
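The round-robin behaviour that kube-proxy applies can be illustrated with a simplified backend-selection loop. This is only a sketch of the selection policy – the real proxy forwards whole TCP connections, and the class and names here are illustrative:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of round-robin selection over the pods backing a service,
// analogous to (but much simpler than) what kube-proxy does.
public class RoundRobin {
    private final List<String> backends;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobin(List<String> backends) {
        this.backends = backends;
    }

    // Each call returns the next backend in rotation.
    public String pick() {
        int i = Math.floorMod(next.getAndIncrement(), backends.size());
        return backends.get(i);
    }
}
```

Scaling a service from one instance to three, as in the demo, amounts to growing the backend list; existing clients keep talking to the same service address while the rotation spreads their requests over the new pods.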

The conclusion – and something he has blogged about recently – is that application containers are being displaced by Docker. By having a containerised image that can boot with Just Enough Java to run the application, it's not necessary to have a single monolithic application server providing everything. (This would be a theme revisited later in the day.)

Claus Ibsen (@davsclaus, author of Camel in Action) went into more detail in presenting "Microservices with Apache Camel" (including a demo video part 1 and part 2). Using one of the camel-archetypes available in Maven, Claus created a simple "Hello World" project containing a Camel route (in effect, a message-processing servlet that is transport agnostic). Once it was built, a Docker image was created using the docker-maven-plugin and could then be launched in its own container. To connect it to the rest of the infrastructure, the example included a service port that was provided by Kubernetes through environment-variable service discovery, and which could be dynamically looked up from the container using the injected {{service::servicename}} reference.
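The "transport agnostic" character of a route can be conveyed with a rough pure-JDK analogy. This is not Camel's API – the class and step names below are invented for illustration – but it captures the core idea: a route is a composition of message-processing steps that is independent of how messages arrive or leave:

```java
import java.util.function.Function;

// A rough analogy of a Camel route, not Camel itself: a message body
// flows through a chain of transport-agnostic processing steps.
public class HelloRoute {

    public static Function<String, String> build() {
        Function<String, String> greet = body -> "Hello " + body;    // transform step
        Function<String, String> shout = String::toUpperCase;        // second step
        return greet.andThen(shout);                                 // compose the route
    }

    public static void main(String[] args) {
        // The same pipeline could be fed from HTTP, a queue, a timer...
        System.out.println(build().apply("World")); // prints "HELLO WORLD"
    }
}
```

In real Camel the endpoints at either end of the pipeline are URIs (file, JMS, HTTP and many more), so the processing logic stays unchanged when the transport does.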

Ibsen also showed example patterns from the Apache Camel Enterprise Integration Patterns page, a collection of design patterns for handling different message channels and processing engines.

Marc Savy and Kurt Stam (@marcsavy and @kurtstam) then talked about APIMan (complete with red tie, to go with the red hat), which is a framework for managing APIs. By providing a proxy/firewall in between clients and back-end services, APIMan can govern access based on request headers (e.g. API keys, credentials etc.) and route requests through to back-end services hosted by Kubernetes. By using a centralised data store (they demonstrated an ElasticSearch plugin, but others such as Redis should be possible as well) to store access rates, they could provide rate-limiting for certain API keys and offer different SLAs to different profiles. Provided that the traffic goes through APIMan (and that it provides the session encryption to the client, or the traffic is unencrypted) it is possible to apply traffic shaping for different services and clients, or even to implement a billing system. If this is injected between services then it can also be used to implement a fuse to prevent runaway services backing up traffic; however, the examples shown were of a gateway into the services ecosystem rather than a gateway between them.
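The rate-limiting idea can be sketched as a simple fixed-window counter per API key. This is not APIMan's API – the class and method names are invented for illustration – and a production gateway would keep the counters in a shared store (such as the ElasticSearch plugin demonstrated) and reset them each time window so every gateway instance sees the same totals:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative fixed-window rate limiter, sketching the kind of policy
// an API gateway can enforce per API key.
public class RateLimiter {
    private final int limitPerWindow;
    private final Map<String, Integer> counts = new HashMap<>();

    public RateLimiter(int limitPerWindow) {
        this.limitPerWindow = limitPerWindow;
    }

    // Returns true if the request for this API key is within its quota.
    public boolean allow(String apiKey) {
        int used = counts.merge(apiKey, 1, Integer::sum);
        return used <= limitPerWindow;
    }
}
```

Different SLAs then amount to configuring a different `limitPerWindow` per client profile; a billing system is the same bookkeeping with charges instead of rejections.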

Arun Gupta (@arungupta) stepped in for James Rawlings, talking about Microservice Design Patterns and demonstrating how to refactor an existing JavaEE application to microservices (with code on GitHub). It was a great way of seeing an existing JavaEE application transformed into a microservices-based architecture, including how to hook services together and how to perform in-container testing of them.

Mark Little (@nmcl) took the day to lunch, talking about WildFly Swarm. I couldn't find links to the presentation, but Rawlings published a silent video showing how to use container builds to spawn Jenkins builds and have published Docker containers at the end. Little introduced the idea of WildFly Swarm, which is Just Enough AppServer to run an application; bundling the required contents of JBoss (e.g. Servlet support) whilst ignoring the unrequired parts (e.g. CORBA, an acronym this author hasn't seen on a slide for over a decade) into a single executable allows a cut-down JEE application to be booted in a smaller amount of time. (When asked whether WildFly would support native JDK modules, Little suggested this might be possible when the JDK modules are finally released. It seems this is similar to WebSphere Liberty, which is aiming to reach Java EE 7 full platform certification next week.)

Jimmi Dyson (@jimmidyson) gave a passionate talk on "Logging and Metrics for Microservices" (including running the entire presentation in a Docker container, and having a docker shell midway through the presentation slides, which was a nice touch). His argument was that instead of logging to files on a local container filesystem (which may be transient and difficult to find), stdout and stderr should be used for logging purposes. The container can then route this output to a centralised logging location, which makes it more efficient to find problems and to cross-link requests as they flow through multiple services. Unfortunately he was so passionate in talking about logging that we didn't actually reach the metrics part of the deck – but the idea is the same; report metrics to a central location so you can manage your cattle farm, rather than inspecting the state of each pet individually.
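The stdout approach can be as simple as writing one structured line per event and leaving shipment to the container runtime. A minimal sketch – the field names and helper class are our own, not from the talk – where a correlation id lets a request be cross-linked as it flows through services:

```java
// Sketch: structured log events written to stdout, so the container
// runtime (not the application) ships them to a central store.
public class StdoutLogger {

    // Render one event as a single JSON line; the requestId correlates
    // log lines for the same request across multiple services.
    public static String event(String service, String requestId, String message) {
        return String.format(
            "{\"service\":\"%s\",\"requestId\":\"%s\",\"msg\":\"%s\"}",
            service, requestId, message);
    }

    public static void log(String service, String requestId, String message) {
        System.out.println(event(service, requestId, message));
    }
}
```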

Roland Huß (@ro14nd) gave the penultimate presentation on the docker-maven-plugin, a utility that can build docker images as well as stop and start containers. This has since been forked and amended by many different users; of the forks still maintained, those by Wouterd, Alexec, Spotify and Roland's own remain. He published a (biased) piece on the relative merits of each at the shootout-docker-maven page.

There's a sample project available at docker-maven-sample which shows how the plugin works, and how it can be used to provide integration testing. By hooking up to an existing Kubernetes cluster, it's possible to write integration tests that spin up a set of containers inside the cluster, wire them up, run the tests, and then tear them down again automatically.

There was a final presentation on FeedHenry by John Frizzelle, but it wasn't one that the author attended.

In summary, the RedHat developer day provided a great set of introductions to the world of containers, covering how they can be integrated at development and testing time through to being orchestrated and managed over a fleet of machines. It's likely that RedHat will run this again in the future in different locations; look out for future events at RedHat Online.
