
Building a CI System with Java 9 Modules and Vert.x Microservices


Key Takeaways

  • Java 9 modules and Vert.x microservices work well together for building modular applications
  • Many Java libraries are still not available as modules
  • Some care is needed with “automatic modules” (libraries not yet packaged as modules)
  • Java’s built-in Nashorn JavaScript engine is useful for scripting Vert.x applications

This article walks you through the design and development of a modern, responsive, resilient, and message-driven continuous integration (CI) system using the Eclipse Vert.x toolkit. We will leverage the Java Platform Module System (JPMS), prototyped under Project Jigsaw, to construct the application from a host of loosely coupled modules that communicate over well-defined interfaces.

The intention is that JPMS should provide Java architects and developers with confidence that they can use modules to handle large legacy codebases as well as to create new applications. However, it’s not always easy to use existing Java libraries with the module system. So, along the way, we will talk about the various challenges encountered while working with the Java module system and the workarounds employed to get the system running.

Let’s begin by defining the minimum viable product (MVP) of this new CI system, which we are going to build as a Docker-native application. The product should provide the following core features as REST APIs:

  1. Support CRUD on repositories. A repository represents a project under version control. It specifies the connection details to either a Git repository or a Perforce depot.
  2. Support the “pipeline as code” philosophy. A pipeline defines the workflow to build code artifacts. Pipelines are defined in JavaScript. The script file can be stored along with your codebase in the source code repository.
  3. Provide an API to start and stop pipelines. An instance of a given pipeline is what constitutes a build.

Now that we’ve defined our MVP, we can go ahead and start building the system. The first step is to create the project skeleton. We can use IntelliJ’s multi-module Gradle project template for this purpose. Since we are going to use JDK 9, it benefits us to use the latest and greatest Gradle version (4.4.1 at the time of writing). Seeing that we will be creating modular jars, we need to add the experimental Jigsaw plugin and ensure that the source compatibility is set to Java 9. The project’s main “build.gradle” file should be similar to this snippet:
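A minimal sketch, assuming the experimental “org.gradle.java.experimental-jigsaw” plugin and an illustrative project layout:

    // root build.gradle (sketch): plugin and version choices reflect the
    // Gradle 4.4.x era; subproject and module names are illustrative
    plugins {
        id 'org.gradle.java.experimental-jigsaw' version '0.1.1' apply false
    }

    subprojects {
        apply plugin: 'java'
        apply plugin: 'org.gradle.java.experimental-jigsaw'

        sourceCompatibility = JavaVersion.VERSION_1_9
        targetCompatibility = JavaVersion.VERSION_1_9

        repositories {
            mavenCentral()
        }
    }

    // each subproject's own build.gradle then names its module, e.g.:
    // javaModule.name = 'com.ci.core'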

Like most systems, ours will have a common core library that hosts entity objects, utility classes, shared constants, query parsers and so forth. Let’s define this core library as a Java 9 module.

As noted earlier, a Java 9 module is a self-describing, named collection of interfaces, classes and resources. A new construct, “module-info.java”, has been added as part of JPMS to enable developers to define a module’s public contracts and its dependencies on other modules. We will use the aforementioned file to name our module, specify its dependencies on other modules and the packages that it will export for consumption by other modules in our application.

The following snippet shows how the core module is described using the module-info file:
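A minimal sketch, with illustrative module and package names:

    // core/src/main/java/module-info.java (names are illustrative)
    module com.ci.core {
        // automatic module derived from vertx-core-3.5.0.jar
        requires vertx.core;

        // the public API of the core module
        exports com.ci.core.entities;
        exports com.ci.core.util;
    }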

Let’s go through the definition. Every “module-info.java” file starts with the keyword “module” followed by the module’s name. Reverse domain name notation, familiar as a naming convention for packages, can be employed to name modules as well.

We see two new keywords used within the code block: “exports” and “requires”. The “exports” keyword declares the public packages exposed by our module, in other words, its public API. The “requires” keyword declares the module’s dependencies.

At this point, an obvious question is how to create Java 9 modules while depending on the various third-party non-modular libraries available in the wild. A concept called automatic modules was introduced for this purpose.

As the name suggests, non-modular jars are automatically converted into named modules based on the names of the jar files. The automatic name is derived using the following algorithm: start with the jar file name, drop the extension, drop the version number if one exists, and replace any remaining hyphens with dots. That is why the non-modular “vertx-core-3.5.0.jar” library is specified as a dependency using the “vertx.core” name. However, automatic modules derived this way might not always work, as we will see later in an instance where we depend on Netty’s native transport libraries.

Similar to the core module, we need to define several other modules to handle database calls, user authentication, running the engine and communicating with various system plugins within the CI system.

Now that we’ve introduced some basic concepts of modular Java applications, let’s take a step back and talk about vert.x. Vert.x is a toolkit for building asynchronous applications: its APIs never block the calling thread. This non-blocking nature means that vert.x applications can handle a great deal of concurrency using a small pool of threads. Vert.x achieves this using a multi-reactor pattern.

Developers familiar with JavaScript will recall the single-threaded event loop that delivers events to registered handlers as they arrive. The multi-reactor pattern is a related approach, but to overcome the inherent limitation of a single-threaded event loop, vert.x employs multiple event loops based on the number of cores available on a given server.

Vert.x also comes with an opinionated concurrency model loosely based on the actor model, in which “actors” receive, respond to, and communicate through messages. In the vert.x world, actors are called “verticles”, and they typically communicate with each other using JSON messages sent over an event bus. We can even specify the number of instances of each verticle that vert.x should deploy.
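For instance, with an illustrative verticle class name:

    // ask vert.x to deploy four instances of the server verticle
    vertx.deployVerticle("com.ci.server.ServerVerticle",
            new DeploymentOptions().setInstances(4));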

The event bus is designed such that it can form a cluster using a variety of plug-and-play cluster managers, such as Hazelcast and ZooKeeper. Each verticle, or a combination of verticles running on vert.x instances, can be considered a microservice. The combination of the multi-reactor model, actor-like verticles, and the distributed nature of the event bus makes vert.x applications highly responsive, resilient and elastic, thereby upholding the Reactive Manifesto.

With that in mind, let’s look at the overall flow of our CI system.

A host of verticles talk to each other over the vert.x event bus; the plugins are themselves verticles too. The entry point to the CI system is the server verticle. It is a public verticle in that it exposes a REST API. These API endpoints can be used by CLI and GUI clients to specify connection details to repositories, create pipelines and run those pipelines.

The following code excerpt will give us an idea on how we can define APIs and routes in vert.x:
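A sketch of such a server verticle, with class names, route paths and event-bus addresses assumed for illustration:

    import io.vertx.core.AbstractVerticle;
    import io.vertx.core.json.JsonObject;
    import io.vertx.ext.web.Router;
    import io.vertx.ext.web.RoutingContext;
    import io.vertx.ext.web.handler.BodyHandler;

    public class ServerVerticle extends AbstractVerticle {

        @Override
        public void start() {
            Router api = Router.router(vertx);
            api.route().handler(BodyHandler.create());    // parse request bodies
            api.post("/repositories").handler(this::createRepository);
            api.get("/repositories/:id").handler(this::getRepository);
            api.post("/pipelines/:id/start").handler(this::startPipeline);

            Router root = Router.router(vertx);
            root.mountSubRouter("/api/v1", api);          // common base path

            vertx.createHttpServer()
                 .requestHandler(root::accept)            // vert.x 3.x idiom
                 .listen(8080);
        }

        private void createRepository(RoutingContext ctx) {
            // persist the repository definition, then acknowledge
            ctx.response().setStatusCode(201).end(ctx.getBodyAsString());
        }

        private void getRepository(RoutingContext ctx) {
            ctx.response().end(new JsonObject().put("id", ctx.pathParam("id")).encode());
        }

        private void startPipeline(RoutingContext ctx) {
            // hand off to the engine verticle over the event bus
            vertx.eventBus().send("pipeline.start",
                    new JsonObject().put("pipelineId", ctx.pathParam("id")));
            ctx.response().setStatusCode(202).end();
        }
    }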

We use vert.x’s web library to define the REST API and mount all the routes under the “/api/v1” base path. Vert.x provides a plethora of other libraries and features that enable us to rapidly develop reactive applications.

For instance, one can use the web API contract library to design an application’s API with the OpenAPI 3 specification and let the library automatically handle request and security validation. Vert.x’s OAuth library can be used to secure your application and API using the OAuth providers of your choice, such as Google or Facebook, or your own custom provider.

Returning to the system’s overall flow, the engine verticle is responsible for coordinating the execution of a pipeline instance, or build. When a client invokes the REST API provided by the server verticle to start a pipeline, the server verticle sends a message to the engine verticle. Upon receiving this message, the engine instantiates a new flow object.

The flow object is a simple state machine used to track the progress of a pipeline instance. At any given point, the flow object is in one of three states: setup, run and teardown. It transitions to a new state based on incoming messages. In each of these states, the flow object fires events and sends them as messages over the event bus.

Registered plugins process these messages and asynchronously send the results back over the event bus. Here’s a code excerpt that showcases how we can register message handlers to start a pipeline, create the flow object and process incoming messages from plugins:
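A sketch, with assumed event-bus addresses and a deliberately simplified Flow implementation:

    import io.vertx.core.AbstractVerticle;
    import io.vertx.core.json.JsonObject;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.UUID;

    public class EngineVerticle extends AbstractVerticle {

        private final Map<String, Flow> flows = new HashMap<>();

        @Override
        public void start() {
            // the server verticle asks us to start a pipeline
            vertx.eventBus().<JsonObject>consumer("pipeline.start", msg -> {
                String buildId = UUID.randomUUID().toString();
                flows.put(buildId, new Flow());           // begins in SETUP
                // fire the setup event; interested plugins react and reply later
                vertx.eventBus().publish("flow.setup",
                        new JsonObject().put("buildId", buildId));
                msg.reply(new JsonObject().put("buildId", buildId));
            });

            // asynchronous results coming back from plugins
            vertx.eventBus().<JsonObject>consumer("flow.event", msg -> {
                Flow flow = flows.get(msg.body().getString("buildId"));
                if (flow != null) {
                    // SETUP -> RUN -> TEARDOWN, driven by incoming messages
                    String next = flow.transition();
                    vertx.eventBus().publish("flow." + next, msg.body());
                }
            });
        }

        // minimal state machine for illustration only
        static class Flow {
            enum State { SETUP, RUN, TEARDOWN }
            private State state = State.SETUP;
            String transition() {
                state = (state == State.SETUP) ? State.RUN : State.TEARDOWN;
                return state.name().toLowerCase();
            }
        }
    }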

The engine verticle is also responsible for locating and deploying plugins, which are themselves verticles. We use Java’s service loader mechanism, introduced in Java 6 and revised in Java 9, to locate and deploy the plugins during server startup. To understand service loading, we need to talk about services and service providers.

A service is a well-known interface or class (usually abstract). A service provider is a concrete implementation of a service. The ServiceLoader class is a facility for loading service providers that implement a given service. A module can declare that it uses a specific service, and can then employ the ServiceLoader to locate and load the service providers deployed in the runtime environment.

For example, the server module can declare that it uses the Plugin interface, and the workspace module can declare that it provides two services that implement the Plugin contract as described in their corresponding “module-info.java” files:
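The declarations might look like this, with module, package and provider names assumed:

    // server module's module-info.java: consumes Plugin implementations
    module com.ci.server {
        requires com.ci.core;
        uses com.ci.core.plugin.Plugin;
    }

    // workspace module's module-info.java: contributes two Plugin providers
    module com.ci.workspace {
        requires com.ci.core;
        provides com.ci.core.plugin.Plugin
            with com.ci.workspace.GitWorkspacePlugin,
                 com.ci.workspace.PerforceWorkspacePlugin;
    }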

Thus, when the server module starts, it invokes the ServiceLoader and receives these two plugins which it then deploys as verticles:
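In sketch form, assuming the Plugin interface extends io.vertx.core.Verticle:

    // inside the server verticle's startup code
    private void deployPlugins() {
        for (Plugin plugin : java.util.ServiceLoader.load(Plugin.class)) {
            vertx.deployVerticle(plugin, ar -> {
                if (ar.succeeded()) {
                    System.out.println("Deployed plugin verticle: " + ar.result());
                } else {
                    ar.cause().printStackTrace();
                }
            });
        }
    }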

The plugins perform the bulk of the work. A plugin registers message handlers to process the specific events of a pipeline flow in which it is interested. For example, the workspace plugin is responsible for synchronizing code from Git. The script parser plugin is responsible for scanning the workspace to find the pipeline script file, written in JavaScript, and executing the code that describes how to run the pipeline. The outcome is a set of shell commands, which the script runner plugin executes inside a Docker container. Running JavaScript is possible because vert.x leverages Nashorn, Java’s built-in JavaScript engine. Note that vert.x is a polyglot toolkit that also supports languages like JavaScript, Kotlin and Groovy.

Here’s a snippet from a pipeline script:
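The exact DSL is defined by the script parser plugin; a hypothetical script might look like this:

    // pipeline.js, stored alongside the code in the repository (shape assumed)
    pipeline.stage('build', function () {
        pipeline.image('maven:3.5-jdk-9');     // container image to run in
        pipeline.sh('mvn -B clean package');   // shell command for the runner
    });

    pipeline.stage('test', function () {
        pipeline.image('maven:3.5-jdk-9');
        pipeline.sh('mvn -B verify');
    });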


Based on the messages it receives, the script runner plugin downloads Docker images, creates containers and executes shell commands inside those containers. Evidently, this plugin has to interact with a Docker engine.


Docker’s REST API is typically exposed as HTTP over Unix domain sockets, as opposed to traditional HTTP(S) over TCP sockets. This is where vert.x shines. Instead of using an off-the-shelf jar that executes blocking code to talk to the Docker engine, we can use vert.x’s asynchronous clients to interact with Docker.

Vert.x leverages native transports when it detects the presence of the native transport libraries provided by Netty. Native transport is enabled by adding dependencies like “netty-transport-native-kqueue” to both the “build.gradle” file and the “module-info.java” file.

One caveat, which will hopefully be addressed in the next release, is vert.x’s lack of support for HTTP over Unix domain sockets. As a temporary workaround, we can make some minor code modifications to the vert.x core library and build it ourselves. The plugin code that communicates with the Docker engine would look something like this:
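In sketch form; the exact signatures depend on the patched build (later vert.x releases added comparable SocketAddress overloads to the HTTP client):

    import io.vertx.core.http.HttpClient;
    import io.vertx.core.http.HttpClientOptions;
    import io.vertx.core.http.HttpMethod;
    import io.vertx.core.net.SocketAddress;

    // connect to the Docker engine's default Unix domain socket
    SocketAddress dockerSocket =
            SocketAddress.domainSocketAddress("/var/run/docker.sock");
    HttpClient client = vertx.createHttpClient(new HttpClientOptions());

    // GET /containers/json lists running containers (Docker Engine REST API)
    client.request(HttpMethod.GET, dockerSocket, 80, "localhost", "/containers/json")
          .handler(resp -> resp.bodyHandler(body ->
                  System.out.println("containers: " + body.toJsonArray())))
          .end();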

Adding a non-modular jar like “netty-transport-native-kqueue-4.1.15.Final-osx-x86_64.jar” as a dependency will result in the creation of an automatic module name. However, since the derived name, “netty.transport.native.kqueue”, contains the Java reserved keyword “native”, our modular application will fail to compile.

While Netty’s creators are addressing this in their next release, we can bypass the issue by adding an “Automatic-Module-Name” entry to the jar’s manifest file. To do that, we first unpack the jar into a folder and cd into it. Then, we modify the “MANIFEST.MF” file to add the entry “Automatic-Module-Name: io.netty.transport.kqueue”. Next, we recreate the jar by executing the following command:
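Run from inside the unpacked folder, something like:

    jar cfm ../netty-transport-native-kqueue-4.1.15.Final-osx-x86_64.jar META-INF/MANIFEST.MF .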

We can verify that the automatic module name we specified in the manifest file is now recognized by Java by running the following command:
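For example (output abbreviated):

    jar --describe-module \
        --file ../netty-transport-native-kqueue-4.1.15.Final-osx-x86_64.jar
    # expected to report the module as automatic, along the lines of:
    # io.netty.transport.kqueue automatic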


We have to use similar commands to fix other non-modular jars whose automatic module names collide with Java’s reserved keywords.


We have now reached the point where we are ready to build and run our CI system. Here’s the command to run our application:
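The module path entries and module/main class names below are placeholders rather than the project’s actual ones:

    # -p adds the folders that hold our modular and automatic jars;
    # -m names the module and the main class to launch
    java -p libs:server/build/libs -m com.ci.server/com.ci.server.Main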

To support JPMS, new options have been added to existing command-line tools like “javac” and “java”. These options tell the Java compiler and runtime to use the module path and modular jars as opposed to the age-old classpath. A few noteworthy options:

"-p" or "--module-path" is used to tell java to look into specific folders that contain java modules.

"-m" or "--module" is used to specify the module and the main class used to start the application.

In this article we’ve designed a modular, microservices-based application using the vert.x toolkit. From that design, we’ve built a Docker-native CI system using JPMS and JDK 9. Head over to GitHub to grab the code and see in more detail how vert.x and modules fit together to build a small, self-contained modular Java application.

About the Author

Uday Tatiraju is a principal engineer at Oracle with nearly a decade of experience in ecommerce platforms, search engines, backend systems, and web and mobile programming.
