RedHat Microservices Architecture Developer Day London

| by Alex Blewitt on Jun 15, 2015. Estimated reading time: 7 minutes |

On Thursday, RedHat ran a Microservices Architecture Development Day in London, aimed at demonstrating a set of open-source tools that can be combined to provide a microservice architecture. With Kubernetes being demonstrated to orchestrate Docker containers that are running Fabric8 and Apache Camel routes, along with logging and metrics, the combined presentation was a great introduction to the tools and techniques that are useful in building a distributed application.

James Strachan (@jstrachan, creator of Groovy and Jelly; but don't hold that against him) kicked off the day with a presentation entitled "Kubernetes for Java Developers." This covered the 2-pizza rule (a team that can't comfortably be fed with two pizzas is too large) as a useful size boundary for building and hosting microservices; additionally, that the same team be responsible for managing their service in the face of failure. It also highlighted the importance of treating servers as cattle, not pets; if you're logging onto your server to do some configuration/software change, you're doing it wrong. By dockerising applications (with e.g. the docker-maven-plugin, covered later) it is possible to bring up a fresh copy of the docker container/image – any change required on the server should result in a new build and a new container image rather than a manual modification to an existing one.

He talked about Kubernetes organising compute units into pods, which can be associated with labels to identify their units, and services, which provide a generic implementation of some kind of functionality. These can be co-ordinated with a replication controller to dictate how many instances should be available at one time. It's possible to schedule multiple containers as part of a service to a pod; they share the same local disk, network and so on. If one container dies, the entire pod is taken down and the pod/containers are brought up elsewhere. Service discovery (between containers in the same pod) uses environment variables to specify the host/port location; networking traffic between containers in the same pod is wired automatically. Going outside the pod requires public IP addresses to be known and the use of an external service registry, which Strachan didn't demonstrate in the time available (and which Kubernetes expects to be provided by other systems in any case). Kubernetes scales inside a single data centre, but does not (yet) cope with cross-data-centre or cross-region scheduling; existing environments segregate their runtime by data centre into multiple separate Kubernetes clusters.
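The environment-variable convention can be seen from a container's point of view: for a service named `hello-service`, Kubernetes injects `HELLO_SERVICE_SERVICE_HOST` and `HELLO_SERVICE_SERVICE_PORT` into every container in the cluster. A minimal sketch of looking those up (the service name and fallback values here are invented for illustration):

```java
import java.util.Map;

public class ServiceDiscovery {

    // Kubernetes exposes each service as FOO_SERVICE_HOST / FOO_SERVICE_PORT
    // environment variables inside every container in the cluster.
    static String serviceUrl(String serviceName, Map<String, String> env) {
        String prefix = serviceName.toUpperCase().replace('-', '_');
        String host = env.getOrDefault(prefix + "_SERVICE_HOST", "localhost");
        String port = env.getOrDefault(prefix + "_SERVICE_PORT", "8080");
        return "http://" + host + ":" + port;
    }

    public static void main(String[] args) {
        // Inside a real pod this resolves to the cluster-internal address.
        System.out.println(serviceUrl("hello-service", System.getenv()));
    }
}
```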

He then demonstrated Kubernetes running to orchestrate multiple containers over multiple hosts, including the ability to schedule on-demand scale-out of an existing application. By using a Kubernetes router (kube-proxy) on the individual hosts, it is possible to transparently forward a round-robin request against the set of existing servers. Provided that the application has stateless requests and/or state sharing between them (such as a message queue processing system) the proxy transparently forwards TCP/IP connections and data to a back-end service. Strachan used this to demonstrate scaling up an Apache Camel route, allowing Kubernetes to go from 1 instance to 3 instances, and demonstrated the effect of killing a single instance and having Kubernetes restart it automatically.
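The round-robin behaviour of kube-proxy can be sketched as a rotating choice over a service's backends. The pod addresses below are hypothetical, and a real proxy would of course forward the TCP connection rather than just pick a name:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of the round-robin selection kube-proxy performs when
// forwarding a connection to one of a service's backend pods.
public class RoundRobinProxy {
    private final List<String> backends;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobinProxy(List<String> backends) {
        this.backends = backends;
    }

    // Each incoming connection is handed to the next backend in turn.
    String pickBackend() {
        int i = Math.floorMod(next.getAndIncrement(), backends.size());
        return backends.get(i);
    }

    public static void main(String[] args) {
        RoundRobinProxy proxy =
            new RoundRobinProxy(List.of("pod-a:8080", "pod-b:8080", "pod-c:8080"));
        for (int n = 0; n < 4; n++) {
            System.out.println(proxy.pickBackend());
        }
    }
}
```

Scaling from 1 to 3 instances simply widens the backend list; killing a pod narrows it until the replication controller brings a replacement up.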

The conclusion – and something he has blogged about recently – is that application containers are being displaced by Docker. By having a containerised image that can boot with Just Enough Java to run the application, it's not necessary to have a single monolithic application server providing everything. (This would be a theme revisited later in the day.)

Claus Ibsen (@davsclaus, author of Camel in Action) went into more detail in presenting "Microservices with Apache Camel" (including a demo video part 1 and part 2). Using one of the camel-archetypes available in Maven, Ibsen created a simple "Hello World" project containing a Camel route (in effect, a message-processing servlet that is transport agnostic). Once it was built, a docker image was created using the docker-maven-plugin and could then be launched in its own container. To connect it to the rest of the infrastructure, the example included a service port provided by Kubernetes through environment-variable service discovery, which could be looked up dynamically with the {{service::servicename}} placeholder.
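The demo relied on resolving that placeholder at runtime. The snippet below is a hypothetical sketch of how a {{service::name}} placeholder could be expanded against the Kubernetes-injected environment variables; it is not the actual Fabric8 implementation, and the variable names follow the convention described earlier:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: rewrite {{service::name}} placeholders in an endpoint
// URI using the FOO_SERVICE_HOST / FOO_SERVICE_PORT variables that Kubernetes
// injects into each container.
public class ServicePlaceholders {
    private static final Pattern PLACEHOLDER =
        Pattern.compile("\\{\\{service::?([\\w-]+)\\}\\}");

    static String resolve(String uri, Map<String, String> env) {
        Matcher m = PLACEHOLDER.matcher(uri);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String prefix = m.group(1).toUpperCase().replace('-', '_');
            String host = env.getOrDefault(prefix + "_SERVICE_HOST", "localhost");
            String port = env.getOrDefault(prefix + "_SERVICE_PORT", "8080");
            m.appendReplacement(out, host + ":" + port);
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

A route endpoint such as `http://{{service::hello}}/greet` would then resolve to the cluster-internal host and port of the `hello` service.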

Ibsen also showed example patterns from the Apache Camel Enterprise Integration Patterns page, a collection of design patterns for handling different message channels and processing engines.

Marc Savy and Kurt Stam (@marcsavy and @kurtstam) then talked about APIMan (complete with red tie, to go with the red hat), which is a framework for managing APIs. By providing a proxy/firewall in between clients and back-end services, APIMan can govern access based on request headers (e.g. API keys, credentials etc.) and route them through to back-end services hosted by Kubernetes. By using a centralised data store (they demonstrated an ElasticSearch plugin, but others such as Redis should be possible as well) to store access rates, they could provide rate-limiting for certain API keys and offer different SLAs to different profiles. Provided that the traffic goes through APIMan (and that it provides the session encryption to the client, or the traffic is unencrypted) it is possible to apply traffic shaping for different services and clients, or even to implement a billing system. If this is injected between services then it can also be used to implement a fuse to prevent runaway services backing up traffic; however, the examples shown were of a gateway into the services ecosystem rather than a gateway between them.
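Per-key rate-limiting of this kind boils down to counting requests per API key within a time window. A minimal fixed-window sketch (the limit and window values are invented, and a real gateway such as APIMan would keep the counters in a centralised store like ElasticSearch rather than in memory):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative fixed-window rate limiter of the kind an API gateway applies
// per API key. w[0] holds the window start time, w[1] the request count.
public class RateLimiter {
    private final int limit;
    private final long windowMillis;
    private final Map<String, long[]> windows = new ConcurrentHashMap<>();

    RateLimiter(int limit, long windowMillis) {
        this.limit = limit;
        this.windowMillis = windowMillis;
    }

    // Returns true if the request for this API key is within its quota.
    synchronized boolean allow(String apiKey, long nowMillis) {
        long[] w = windows.computeIfAbsent(apiKey, k -> new long[]{nowMillis, 0});
        if (nowMillis - w[0] >= windowMillis) {
            // Window expired: start a fresh one.
            w[0] = nowMillis;
            w[1] = 0;
        }
        return ++w[1] <= limit;
    }
}
```

Different SLAs then amount to different `limit`/`windowMillis` pairs per client profile; a billing system is the same counter read for charging instead of blocking.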

Arun Gupta (@arungupta) stepped in for James Rawlings talking about Microservice Design Patterns and demonstrated how to refactor an existing JavaEE application to microservices (with code on GitHub). It was a great way of seeing an existing JavaEE application be transformed into a microservices-based architecture, including how to hook services together and how to perform in-container testing of them.

Mark Little (@nmcl) took the day to lunch, talking about WildFly Swarm. I couldn't find links to the presentation, but Rawlings published a silent video showing how to use container builds to spawn Jenkins builds and publish Docker containers at the end. Little introduced the idea of WildFly Swarm as Just Enough AppServer to run an application; by bundling the required parts of JBoss (e.g. Servlet support) into a single executable whilst ignoring the unrequired parts (e.g. CORBA, an acronym this author hasn't seen on a slide for over a decade), a cut-down JEE application can be booted in a smaller amount of time. (When asked whether WildFly would support native JDK modules, Little suggested this might be possible when the JDK modules are finally released. It seems this is similar to WebSphere Liberty, which is aiming to reach Java EE 7 full platform certification next week.)

Jimmi Dyson (@jimmidyson) gave a passionate talk on "Logging and Metrics for Microservices" (including running the entire presentation in a Docker container, and having a docker shell midway through the presentation slides, which was a nice touch). His argument was that instead of logging files to a local container filesystem (which may be transient and difficult to find), stdout and stderr should be used for logging purposes. The container can then route this output to a centralised logging location, which makes it more efficient to find problems and to cross-link requests as they flow through multiple services. Unfortunately he was so passionate in talking about logging that we didn't actually reach the metrics part of the deck – but the idea is the same: report metrics to a central location so you can manage your cattle farm, rather than inspecting the state of each pet individually.
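In practice the stdout approach works best when each event is a single structured line carrying a correlation id, so a centralised collector can cross-link one request's hops across services. A minimal sketch (the field names, service name and id format are invented, not any particular standard):

```java
// Sketch of the "log to stdout, let the container route it" approach:
// one structured line per event, with a correlation id so a centralised
// collector can stitch a request back together across services.
public class StdoutLogger {

    // Builds a single-line JSON-ish record; a real system would escape
    // the message and add a timestamp.
    static String logLine(String service, String correlationId,
                          String level, String message) {
        return String.format(
            "{\"service\":\"%s\",\"correlationId\":\"%s\",\"level\":\"%s\",\"message\":\"%s\"}",
            service, correlationId, level, message);
    }

    public static void main(String[] args) {
        // Docker captures stdout; a collector ships it to central storage.
        System.out.println(logLine("orders", "req-42", "INFO", "order received"));
    }
}
```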

Roland Huß (@ro14nd) gave the penultimate presentation on the docker-maven-plugin, a utility that can build docker images as well as stop and start containers. The plugin has been forked and amended by many different users; of the forks still maintained, those by Wouterd, Alexec, Spotify and Roland's own are the remaining ones. He published a (biased) piece on the relative merits of each at the shootout-docker-maven page.

There's a sample project available at docker-maven-sample which shows how the plugin works, and how it can be used to provide integration testing. By hooking up to an existing Kubernetes cluster, it's possible to write integration tests that spin up a set of containers inside a cluster, wire them up, run the tests, and then tear them down again automatically.
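One practical detail in such tests is waiting for a freshly started container to become ready before exercising it. A stdlib-only sketch of that readiness check (the host, port and timeout are whatever the test needs; this helper is an illustration, not part of the plugin itself):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Sketch of the readiness check an integration test needs before exercising
// a freshly started container: poll until its service port accepts connections.
public class WaitForPort {

    static boolean waitForPort(String host, int port, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, port), 250);
                return true; // the container is accepting connections
            } catch (IOException e) {
                // Not up yet; back off briefly and retry.
                try {
                    Thread.sleep(100);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false; // gave up within the timeout
    }
}
```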

There was a final presentation on FeedHenry by John Frizzelle, but it wasn't one that the author attended.

In summary, the RedHat developer day provided a great set of introductions to the world of containers, and how they can be integrated at development and testing time through to being orchestrated and managed across a fleet of machines. It's likely that RedHat will run this again in the future in different locations; look out for future events at RedHat Online.



Nothing new here :-/ by richard nicholson

Kubernetes / Fabric8 / the kitchen sink seem to be a very operationally complex way of copying what the Paremus Service Fabric has been doing for years?! Yes - the Service Fabric is OSGi / Java (and for good reasons) - but this is implementation detail. In addition to Java / OSGi - it also elegantly supports Docker, native processes, whatever.

Re: Nothing new here :-/ by James Strachan

I disagree with your subject for sure; Docker and Kubernetes are very new and significantly change how to provision and manage all software across compute nodes - whether using a linux distro like RHEL, Atomic, CoreOS or using an IaaS like OpenStack, a PaaS like OpenShift or a public cloud like Google. Companies like Docker, Google, Red Hat and CoreOS are betting massively on both of these technologies from OS -> IaaS -> PaaS -> cloud. They are both certainly something new and very real.

I grok the OSGi-centric Service Fabric of Paremus; it's kinda similar to the open source Fabric8 v1.x (which is a service fabric using a Java-centric backbone with ZooKeeper etc).

Though for fabric8 2.x we realised Docker is increasingly the future of packaging and deploying all software (whether Java, nodejs, golang, ruby or whatever). Then increasingly most Linux, IaaS, PaaS and clouds will have Kubernetes baked in (or something equivalent - e.g. Docker Swarm on EC2), so we figured it was better to reuse that stuff than reinvent it in Java in an OSGi-centric way.

Re: Nothing new here :-/ by richard nicholson

James - it is for good reason that Paremus remains 100% committed to OSGi/Java in 2015. That said, the Paremus Service Fabric can support any software artefact including Docker - but we do it without compromising the Fabric's architectural principles.

I see Kubernetes / Docker as just the latest in a long line of IT fashions. Kubernetes does ape the Paremus Service Fabric concepts of Replication Handler and Resource Contracts (Replication Controllers and Labels) - but there is a long way to go.

I guess no surprise really as Paremus started in 2005 - so we have a head start ;)



Re: Nothing new here :-/ by James Strachan

Kubernetes is based on the Borg paper, which draws on Google's last decade of provisioning billions of linux containers in production with a modest operations staff. I think they've learnt a thing or two about how to automatically provision and manage linux containers by now.

Though its good to see there's at least some similarities to Paremus. Maybe you should give Google a call to offer your services? :)

Re: Nothing new here :-/ by richard nicholson

Linux Containers, sure. They don't understand the role of Modularity, though, in building adaptive, maintainable distributed systems. Check out the DARPA BRASS challenge for a nice statement of the problem. The problems of IT complexity & evolvability are NOT fixed by Linux Containers. Just like Virtual Machines were a dead end.

I would call them - but we're busy ;)
