Running Axon Server - CQRS and Event Sourcing in Java


Key Takeaways

  • CQRS and Event Sourcing require specific infrastructure support for storage (the Event Store) and transmission (the Messaging Hub) of Commands, Queries, and Events.
  • The variety in messaging patterns used for supporting CQRS and Event Sourcing can be supported with a combination of existing middleware tools, such as Kafka and AMQP, but all-in-one solutions such as Axon Server are an attractive alternative.
  • Getting Axon Server up and running is best done in a way that makes the installation itself "stateless", in the sense that we split the configuration into fixed and environment-specific settings.
  • Axon Server Enterprise Edition adds the ability to run a cluster, but individual nodes can have pretty specific identities with large differences in the provided services. This impacts deployment strategies.
  • Adding Access Control (tokens and user accounts) and TLS to Axon Server is pretty easy to accomplish.

CQRS and Event Sourcing in Java

Modern message-passing and event-driven applications have requirements that differ significantly from traditional enterprise applications. A definite sign is the shift in focus from guaranteed delivery of individual messages, where the responsibility for delivery is mostly on the middleware, to “smart endpoints and dumb pipes,” where it is up to the application to monitor delivery and punctuality. A result is that the Quality of Service requirements on the messaging infrastructure are more about throughput than delivery guarantees; a message not reaching its intended target is something both sender and receiver must solve, as they are fundamentally responsible for the business requirements impacted by such a failure, and are best capable of determining a proper response.

Another change happening is a direct consequence of the steadily decreasing price of storage and processing power, as well as the increased flexibility in assigning those resources to application components: Command Query Responsibility Segregation, shortened to CQRS, and Event Sourcing. Rather than having a single component in your application landscape that manages the “Golden Record” for updates as well as queries, those two responsibilities are pulled apart, and multiple sources for querying are provided. The Command components, generally the low-frequency side of the equation, can optimize for validation and storage. Validated changes are then announced to the rest of the enterprise using Events, which (multiple) Query components use to build optimized models. The increased use of forward caches and batched copies was an early warning sign that this architectural pattern was needed, and query models built on replayable Event Stores formalize many of the solutions improvised there. Event Sourcing advances on this by defining the current state of an entity through the sequence of events that led to it. This means that, rather than keeping an updatable store of records, we use an append-only event store, allowing us to use Write-Once semantics and gain an incorruptible audit trail at the same time.
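The core idea can be illustrated with a dependency-free sketch (plain Java, no Axon APIs; the account example is hypothetical): state is never updated in place, but derived by replaying an append-only event list.

```java
import java.util.ArrayList;
import java.util.List;

public class EventSourcedAccount {
    // Append-only list standing in for the Event Store (Write-Once semantics)
    private final List<Integer> events = new ArrayList<>();

    // Command side: validate first, then record an event; never update a record
    public void deposit(int amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        events.add(amount);
    }

    // Current state is derived by replaying the full event history
    public int balance() {
        int balance = 0;
        for (int amount : events) {
            balance += amount;
        }
        return balance;
    }
}
```

The append-only list doubles as the audit trail: every state the account ever had can be reconstructed by replaying a prefix of it.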

To support these changes we see both traditional components, such as databases and message-oriented middleware, extended with the required functionality, and new, purpose-built infrastructure components. In the world of Java and Kotlin software development the Open Source Axon Framework provides the leading implementation of both the CQRS and Event Sourcing paradigms, but it only provides a solution for the individual modules of the application. If you keep the application together in a monolithic setup, which admittedly provides the quickest way to get up and running for a greenfield development effort, it feels like a waste not to take advantage of its support for a more distributed architecture. In itself the architecture of an Axon-based application lends itself to being split up quite easily, or “strangled”, as has become the more popular term. The question is then how we can support the messaging and event store implementations.

The architecture of a CQRS-based application

Typical CQRS applications have components exchanging commands and events, with persistence of the aggregates handled via explicit commands, and query models optimized for their usage and built from the events that report on the aggregate’s state. The aggregate persistence layer in this setup can be built on RDBMS storage layers or NoSQL document components, and standard JPA/JDBC based repositories are included in the framework core. The same holds for storage of the query models.

The communication for exchanging the messages can be solved with most standard messaging components, but the usage patterns do favour specific implementations for the different scenarios. We can use pretty much any modern messaging solution for the publish-subscribe pattern, as long as we can ensure no messages are lost, because we want the query models to faithfully represent the aggregate’s state. For commands we need to extend the basic one-way messaging into a request-reply pattern, if only to ensure we can detect the unavailability of command handlers. Other replies might be the resulting state of the aggregate, or a detailed validation failure report if the update was vetoed. On the query side, simple request-reply patterns are not sufficient for a distributed microservices architecture; we also want to look at scatter-gather and first-in patterns, as well as streamed results with continuing updates.
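To make the difference between these patterns concrete, here is a minimal, dependency-free sketch (plain Java; the handler functions are hypothetical stand-ins for query models): request-reply routes a query to a single handler, while scatter-gather collects an answer from every handler.

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class QueryPatterns {
    // Hypothetical query handlers, each answering from its own optimized model
    static final List<Function<String, String>> HANDLERS = List.of(
            query -> "model-A: " + query,
            query -> "model-B: " + query);

    // Request-reply: route the query to a single handler and return its answer
    static String requestReply(String query) {
        return HANDLERS.get(0).apply(query);
    }

    // Scatter-gather: send the query to all handlers and collect every answer
    static List<String> scatterGather(String query) {
        return HANDLERS.stream()
                .map(handler -> handler.apply(query))
                .collect(Collectors.toList());
    }
}
```

A first-in variant would simply return whichever handler answers first; streamed results with continuing updates add a subscription on top of the initial answer.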

For Event Sourcing the aggregate persistence layer can be replaced with a Write-Once-Read-Many layer that captures all events resulting from the commands, and provides replay support for specific aggregates. Those same replays can be used for the query models, allowing us to either re-implement them using memory stores, or provide resynchronization if we suspect data inconsistency. A useful improvement is the use of snapshots, which prevents the need for replaying a possibly long history of changes; the Axon Framework provides a standard snapshotting implementation for aggregates.
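The snapshot optimization can be sketched without any Axon APIs (plain Java; integer events stand in for real event payloads): a replay starts from the snapshotted state and only applies the events recorded after it.

```java
import java.util.List;

public class SnapshotReplay {
    // Rebuild state from a snapshot plus only the events that came after it,
    // instead of replaying the entire (possibly long) history
    static int replay(int snapshotState, int snapshotEventCount, List<Integer> allEvents) {
        int state = snapshotState;
        for (int i = snapshotEventCount; i < allEvents.size(); i++) {
            state += allEvents.get(i); // apply only the tail of the history
        }
        return state;
    }
}
```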

If we now take a look at what we have collected in infrastructure components for our application, we need the following:

  • A “standard” persistence implementation for state-stored aggregates and query models.
  • A Write-Once-Read-Many persistence solution for Event-Sourced aggregates.
  • A messaging solution for:
    • Request-reply
    • Publish-subscribe with at-least-once delivery
    • Publish-subscribe with replay
    • Scatter-gather
    • Request-streaming-reply

The Axon Framework provides additional modules to integrate a range of Open Source products, such as Kafka and AMQP based solutions for event distribution. However, unsurprisingly, AxonIQ’s own Axon Server can also be used as an all-in-one solution. This article series is about what you need to do to get it installed and running, starting with a simple local installation, and progressing to Docker based installations (including docker-compose and Kubernetes) and VMs “in the Cloud”.

Setting up the test

To start, let’s consider a small program to demonstrate the components we’re adding to our architecture. We’ll use Spring Boot for the ease of configuration, and Axon has a Spring Boot starter that will scan for the annotations we use. As a first iteration, we’ll keep it to just a simple application that sends a command that causes an event. For this to work we need to handle the command:

    @CommandHandler
    public void processCommand(TestCommand cmd) {
        log.info("handleCommand(): src = '{}', msg = '{}'.",
                 cmd.getSrc(), cmd.getMsg());

        final String eventMsg = cmd.getSrc() + " says: " + cmd.getMsg();
        eventGateway.publish(new TestEvent(eventMsg));
    }

The command and event here are simple value objects, the first specifying the source and a message, the other only a message. The same class also defines the event handler that will receive the event published above:

    @EventHandler
    public void processEvent(TestEvent evt) {
        log.info("handleEvent(): msg = '{}'.", evt.getMsg());
    }

To complete this app we need to add a “starter” that sends the command:

    @Bean
    public CommandLineRunner getRunner(CommandGateway gwy) {
        return (args) -> {
            gwy.send(new TestCommand("getRunner", "Hi there!"));
        };
    }

For this first version we also need a bit of supporting code to compensate for the lack of an actual aggregate as CommandHandler, because the Axon Framework wants every command to have some kind of identification that allows subsequent commands for the same receiver to be correlated. The full code is available for download on GitHub, and apart from the above it contains the TestCommand and TestEvent classes, and configures a routing strategy based on random keys, effectively telling Axon not to bother. This needs to be configured on the CommandBus implementation, and here we have to start looking at implementations for our architectural components.

If we run the application without any specific commandbus and eventbus implementations, the Axon runtime will assume a distributed setup based on Axon Server, and attempt to connect to it. Axon Server Standard Edition is freely available under the AxonIQ Open Source license, and a precompiled package of the Axon Framework and Server can be obtained from the AxonIQ website. If you put the executable JAR file in its own directory and run it using Java 11, it will start using sensible defaults. Please note that the runs below use version “4.3”; your situation may differ depending on when you download it.

$ unzip
$ mkdir server-se
$ cp axonquickstart-4.3/AxonServer/axonserver-4.3.jar server-se/axonserver.jar
$ cp axonquickstart-4.3/AxonServer/axonserver-cli-4.3.jar server-se/axonserver-cli.jar
$ cd server-se
$ chmod 755 *.jar
$ java -version
openjdk version "11.0.6" 2020-01-14
OpenJDK Runtime Environment (build 11.0.6+10-post-Ubuntu-1ubuntu118.04.1)
OpenJDK 64-Bit Server VM (build 11.0.6+10-post-Ubuntu-1ubuntu118.04.1, mixed mode, sharing)
$ ./axonserver.jar
     _                     ____
    / \   __  _____  _ __ / ___|  ___ _ ____   _____ _ __
   / _ \  \ \/ / _ \| '_ \\___ \ / _ \ '__\ \ / / _ \ '__|
  / ___ \  >  < (_) | | | |___) |  __/ |   \ V /  __/ |
 /_/   \_\/_/\_\___/|_| |_|____/ \___|_|    \_/ \___|_|
 Standard Edition                        Powered by AxonIQ

version: 4.3
2020-02-20 11:56:33.761  INFO 1687 --- [           main] io.axoniq.axonserver.AxonServer          : Starting AxonServer on arrakis with PID 1687 (/mnt/d/dev/AxonIQ/running-axon-server/server/axonserver.jar started by bertl in /mnt/d/dev/AxonIQ/running-axon-server/server)
2020-02-20 11:56:33.770  INFO 1687 --- [           main] io.axoniq.axonserver.AxonServer          : No active profile set, falling back to default profiles: default
2020-02-20 11:56:40.618  INFO 1687 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8024 (http)
2020-02-20 11:56:40.912  INFO 1687 --- [           main] A.i.a.a.c.MessagingPlatformConfiguration : Configuration initialized with SSL DISABLED and access control DISABLED.
2020-02-20 11:56:49.212  INFO 1687 --- [           main] io.axoniq.axonserver.AxonServer          : Axon Server version 4.3
2020-02-20 11:56:53.306  INFO 1687 --- [           main] io.axoniq.axonserver.grpc.Gateway        : Axon Server Gateway started on port: 8124 - no SSL
2020-02-20 11:56:53.946  INFO 1687 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8024 (http) with context path ''
2020-02-20 11:56:53.948  INFO 1687 --- [           main] io.axoniq.axonserver.AxonServer          : Started AxonServer in 21.35 seconds (JVM running for 22.317)

With that in place, our test application starts up, connects to Axon Server, and runs the test.

...handleCommand(): src = "getRunner", msg = "Hi there!".
...handleEvent(): msg = "getRunner says: Hi there!".

For good measure, run it a few times, and if you’re lucky, you may actually see more than one event handled. If you don’t, add a “Thread.sleep(10000)” between the sending of the command and the call to “SpringApplication.exit()” and try again. This small test shows our client app (let’s call it that, since it is a client of Axon Server, and that is where we’re going) connecting to Axon Server, which routed the command to the handler registered by that same client. The handler sent an event, which went the same route, albeit over the EventBus rather than the CommandBus. This event was stored in Axon Server’s Event Store, and the Event Handler will get all events replayed when it initially connects. In fact, if you include the current date and time, say by simply appending a “new Date()” to the message, you’ll see that the events are nicely ordered as they came in.

From an Axon Framework perspective there are two kinds of Event Processors: Subscribing and Tracking. A Subscribing Event Processor will subscribe itself to a stream of events, starting at the moment of subscribing. A Tracking Event Processor instead tracks its own progress in the stream and will by default start by requesting a replay of all events in the store. You can also see this as getting the events pushed (for Subscribing Event Processors) versus pulling events yourself (for Tracking Event Processors), and the implementation in the Framework actually works this way. The two most important places where this difference matters are in building query models and in Event Sourced aggregates, because it is there that you want to be sure to have the complete history of events. We’ll not go into the details here and now, but you can read about them in the Axon Reference Guide.
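The difference can be sketched with a dependency-free model (plain Java; a list index stands in for Axon’s tracking token): a subscribing processor only sees what is published after it joins, while a tracking processor pulls from wherever its token points, by default the start of the stream.

```java
import java.util.ArrayList;
import java.util.List;

public class ProcessorModes {
    private final List<String> store = new ArrayList<>(); // the append-only event stream

    void publish(String event) {
        store.add(event);
    }

    // A subscribing processor starts at the moment of subscribing:
    // it will only be pushed events published after this position
    int subscribeHere() {
        return store.size();
    }

    // A tracking processor keeps its own token and pulls events itself,
    // by default starting with a full replay (token 0)
    List<String> pullFrom(int trackingToken) {
        return new ArrayList<>(store.subList(trackingToken, store.size()));
    }
}
```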

In our test program, we can add a configuration to select the Event processor type:

@Configuration
@Profile("subscribing")
public class SubscribingEventProcessorConfiguration {

    private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

    @Autowired
    public void configure(EventProcessingConfigurer config) {
        log.info("Using Subscribing event processors.");
        config.usingSubscribingEventProcessors();
    }
}

With this, if you start the application with Spring profile “subscribing” active, you’ll see only one event processed, which will be the one sent in that run. Start the program without this profile and you get the default mode, which is tracking, and all prior events (including those sent while in subscribing mode) will be there again.

Running and Configuring Axon Server

Now that we have a client ready to use, let’s take a more detailed look at the Axon Server side of things, as this is where both the message handling and the event storage happen. In the previous section we created a directory named “server-se” and put the two JAR files in there. While it is running, you’ll see it has generated a “PID” file containing the process ID, and a “data” directory with a database file and the event store in a directory named “default”. For the moment, all the event store contains is one file for event storage and another for snapshots. These files will appear pretty large, but they are “sparse” in the sense that they have been pre-allocated to ensure availability of room, while as yet containing only a few events. What we might have expected, but not seen, is a log file with a copy of Axon Server’s output, and that is the first thing we’ll remedy.

Axon Server is a Spring Boot based application, and that allows us to easily add logging settings. The name of the default properties file is “axonserver.properties”, so if we create a file with that name and place it in the directory where Axon Server runs, the settings will be picked up. Spring Boot also looks at a directory named “config” in the current working directory, so if we want to create a scripted setup, we can put a file with common settings in the intended working directory, while leaving the “config/axonserver.properties” file for customizations. The simplest properties needed for logging are those provided by all Spring Boot applications:
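A sketch of these settings, using the standard Spring Boot logging properties (exact names vary a bit between Spring Boot versions, so verify against the version your Axon Server release embeds):

```properties
logging.file=axonserver.log
logging.file.max-size=10MB
logging.file.max-history=10
```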


Using these, after the initial banner, logging will be sent to “axonserver.log”, and it will keep at most 10 files of at most 10MiB in size, which cleans up nicely. Next let’s identify some other properties that we might want to define as “common”:

  • axoniq.axonserver.event.storage=./events
    This will give the event store its own directory, because we can expect it to be the one growing continuously, and we don’t want other applications to be impacted by a possible “disk full” situation. We can use disk mounts or symbolic links to let this location point to the volume where we actually want to use disk space, but this gives us the possibility to at least make the configuration common.
  • axoniq.axonserver.snapshot.storage=./events
    To reduce the number of events in a replay, we can create snapshots. By default, they will be stored under the same location as the events themselves, but we can separate them if we want. Since they are tightly linked, however, we’ll keep them together.
  • axoniq.axonserver.controldb-path=./data
    We’ll leave the ControlDB in its default location, and can use mounts or symlinks to put it on a separate volume. The ControlDB generally won’t take much room, so we can give it its own location without worrying about disk usage too much.
  • spring.pid.file=./data/axonserver.pid
    As we saw, the PID file is by default generated in the current working directory of Axon Server. By changing it to the same location as the ControlDB, we have a single spot for relatively small files, while making the current working directory itself essentially Read-Only.
  • logging.file=./data/axonserver.log
    This one is highly dependent on how strict you want log files to be separated from the rest of the files, as you could also opt to give Axon Server a directory under /var/log, and add settings for log rotation, or even use something like “logging.config=logback-spring.xml” and use that for more detailed settings.
  • axoniq.axonserver.replication.log-storage-folder=./log
    This is an Axon Server Enterprise Edition setting for the replication log, which stores the changelog for data distributed to the other nodes in a cluster. The amount of data kept here is configurable: you can set the interval at which the log is cleaned, removing all committed changes.

With these settings, we have structured the way Axon Server will use disk space and set it up so we can use network or cloud storage, in such a way that we’re prepared for deploying it in a CI/CD pipeline. In the repo, I will also add startup and shutdown scripts that will run Axon Server in the background.

Protecting our setup

Since we definitely “need protection”, we’ll set up access control and TLS on our server. Access control will make a token required for requests to the REST and gRPC endpoints, and the UI will require an account. Additionally, some features require specific roles, where Enterprise Edition has a more elaborate set of roles and additionally allows roles to be specified per context. To start with Standard Edition, we can enable access control by setting a flag in the properties file and providing the token:
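A sketch of the settings involved (property names as used by Axon Server 4.x; “my-token” is the placeholder value used in the examples below):

```properties
axoniq.axonserver.accesscontrol.enabled=true
axoniq.axonserver.accesscontrol.token=my-token
```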


You can use command line tools such as uuidgen to generate random tokens, which will be used for authentication. Now if you start Axon Server with these, not only will you need to specify the token to the CLI tool, but the UI will also suddenly require you to log in, even though we haven’t created any users yet. We can solve that one easily using the CLI tool:

$ ./axonserver-cli.jar register-user -t my-token -u admin -p test -r ADMIN
$ ./axonserver-cli.jar users -t my-token

With this in place you can log in again. Additionally, if you want to make life a bit easier for the CLI, you can create a directory named “security” and copy the token to a file named “.token” in there. The CLI will check for such a directory and file relative to the current working directory:

$ mkdir security
$ echo my-token > security/.token
$ chmod 400 security/.token && chmod 500 security
$ ./axonserver-cli.jar users

On the client side we need to specify the token as well:

$ axonserver-quicktest-4.3-SNAPSHOT-exec.jar
2020-04-23 09:46:10.914  WARN 1438 --- [           main] o.a.a.c.AxonServerConnectionManager      : Connecting to AxonServer node [localhost]:[8124] failed: PERMISSION_DENIED: No token for io.axoniq.axonserver.grpc.control.PlatformService/GetPlatformServer
*                                            *
*                                            *
* Are you sure it's running?                 *
* Don't have Axon Server yet?                *
* Go to:          *
*                                            *

To suppress this message, you can
 - explicitly configure an AxonServer location,
 - start with -Daxon.axonserver.suppressDownloadMessage=true
2020-04-23 09:46:10.943  WARN 1438 --- [.quicktester]-0] o.a.e.TrackingEventProcessor             : Fetch Segments for Processor 'io.axoniq.testing.quicktester' failed: No connection to AxonServer available. Preparing for retry in 1s
2020-04-23 09:46:10.999  WARN 1438 --- [           main] o.a.c.gateway.DefaultCommandGateway      : Command 'io.axoniq.testing.quicktester.msg.TestCommand' resulted in org.axonframework.axonserver.connector.command.AxonServerCommandDispatchException(No connection to AxonServer available)
$ AXON_AXONSERVER_TOKEN=my-token axonserver-quicktest-4.3-SNAPSHOT-exec.jar
2020-04-23 09:46:48.287  INFO 1524 --- [mandProcessor-0] i.a.testing.quicktester.TestHandler      : handleCommand(): src = "QuickTesterApplication.getRunner", msg = "Hi there!".
2020-04-23 09:46:48.352  INFO 1524 --- [.quicktester]-0] i.a.testing.quicktester.TestHandler      : handleEvent(): msg = "QuickTesterApplication.getRunner says: Hi there!".


Given this, the next step is to add TLS, and we can do this with a self-signed certificate as long as we’re running locally. We can use the “openssl” toolset to generate an X509 certificate in PEM format to protect the gRPC connection, and then package key and certificate in a PKCS12 format keystore for the HTTP port. The following will:

  1. Generate a Certificate Signing Request using an INI style configuration file, which allows it to work without user interaction. This will also generate an unprotected private key of 2048 bits length, using the RSA algorithm.
  2. Use this request to generate and sign the certificate, which will be valid for 365 days.
  3. Read both key and certificate, and store them in a PKCS12 keystore, under alias “axonserver”. Because this cannot be an unprotected store, we give it password “axonserver”.
$ cat > csr.cfg <<EOF
[ req ]
prompt = no
distinguished_name = req_distinguished_name

[ req_distinguished_name ]
C=NL
ST=Province
L=City
O="My Company"
EOF
$ openssl req -config csr.cfg -new -newkey rsa:2048 -nodes -keyout tls.key -out tls.csr
Generating a 2048 bit RSA private key
writing new private key to 'tls.key'
$ openssl x509 -req -days 365 -in tls.csr -signkey tls.key -out tls.crt
Signature ok
subject=/C=NL/ST=Province/L=City/O=My Company/
Getting Private key
$ openssl pkcs12 -export -out tls.p12 -inkey tls.key -in tls.crt  -name axonserver -passout pass:axonserver

We now have:

  • tls.csr: The Certificate Signing Request, which we no longer need.
  • tls.key: The private key in PEM format.
  • tls.crt: The certificate in PEM format.
  • tls.p12: The keystore in PKCS12 format.

To configure these in Axon Server, we use:

# SSL for the HTTP port

# SSL enabled for gRPC

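A sketch of these two groups of settings, using the files generated above (property names per Axon Server 4.x; verify against the Reference Guide for your version):

```properties
# HTTP port (Spring Boot, PKCS12 keystore)
server.ssl.key-store-type=PKCS12
server.ssl.key-store=tls.p12
server.ssl.key-store-password=axonserver
server.ssl.key-alias=axonserver
security.require-ssl=true

# gRPC port (PEM certificate and key)
axoniq.axonserver.ssl.enabled=true
axoniq.axonserver.ssl.cert-chain-file=tls.crt
axoniq.axonserver.ssl.private-key-file=tls.key
```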
The difference between the two approaches stems from the runtime support used: the HTTP port is provided by Spring Boot using its own “server”-prefixed properties, and it requires a PKCS12 keystore. The gRPC port instead is set up using Google’s libraries, which want PEM-encoded certificates. With these added to “axonserver.properties” we can restart Axon Server, and it should now announce “Configuration initialized with SSL ENABLED and access control ENABLED”. On the client side we need to tell it to use SSL, and, because we’re using a self-signed certificate, we have to pass that too:
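A sketch of the client-side settings (Spring Boot property names as used by the Axon Framework’s Axon Server connector; verify against the Reference Guide for your framework version):

```properties
axon.axonserver.ssl-enabled=true
axon.axonserver.cert-file=tls.crt
```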


Please note that I have added the certificate’s hostname to the system’s “hosts” file, so other applications can resolve it and the name matches the one in the certificate. With this, our quick tester can connect using TLS (removing timestamps and such):

...Connecting using TLS...
...Requesting connection details from
...Reusing existing channel
...Re-subscribing commands and queries
...Creating new command stream subscriber
...Worker assigned to segment Segment[0/0] for processing
...Using current Thread for last segment worker: TrackingSegmentWorker{processor=io.axoniq.testing.quicktester, segment=Segment[0/0]}
...Fetched token: null for segment: Segment[0/0] stream: 0
...Shutdown state set for Processor 'io.axoniq.testing.quicktester'.
...Processor 'io.axoniq.testing.quicktester' awaiting termination...
...handleCommand(): src = "QuickTesterApplication.getRunner", msg = "Hi there!".
...handleEvent(): msg = "QuickTesterApplication.getRunner says: Hi there!".
...Released claim
...Worker for segment Segment[0/0] stopped.
...Closed instruction stream to [axonserver]
...Received completed from server.

So how about Axon Server EE

From an operations perspective, running Axon Server Enterprise Edition is not that different from Standard Edition, with the most prominent differences being:

  • You can have multiple instances working as a cluster,
  • The cluster supports more than one context (in SE you only have “default”),
  • Access control has a more detailed set of roles,
  • And applications get their own tokens and authorizations.

On the connectivity side, we get an extra gRPC port used for communication between the nodes in the cluster, which defaults to port 8224.

A cluster of Axon Server nodes will provide multiple connection points for (Axon Framework-based) client applications, and thus share the load of managing message delivery and event storage. All nodes serving a particular context maintain a complete copy, with a “context leader” in control of the distributed transaction. The leader is determined by elections, following the Raft protocol. In this article we will not dive into the details of Raft and how it works, but an important consequence has to do with those elections: nodes need to be able to win them, or at least feel the support of a clear majority. So while an Axon Server cluster does not need to have an odd number of nodes, every individual context does, to preclude the possibility of a tie in an election. This also holds for the internal context named “_admin”, which is used by the admin nodes and stores the cluster structure data. As a consequence most clusters will have an odd number of nodes, and will keep functioning as long as a majority (for a particular context) is responding and storing events.

Axon Server Clustering

A node in an Axon Server cluster can have different roles in a context:

  • A “PRIMARY” node is a fully functional (and voting) member of that context. A majority of primary nodes is needed for a context to be available to client applications.
  • A “MESSAGING_ONLY” member will not provide event storage, and (as it is not involved with the transactions) is a non-voting member of the context.
  • An “ACTIVE_BACKUP” node is a voting member which provides an event store, but it does not provide the messaging services, so clients will not connect to it. Note that at least one active backup node needs to be up if you want a guarantee of up-to-date backups.
  • Lastly, a “PASSIVE_BACKUP” will provide an Event Store, but not participate in transactions or even elections, nor provide messaging services. It being up or down will never influence the availability of the context, and the leader will send any events accumulated during maintenance, as soon as it comes back online.

From the perspective of a backup strategy, the active backup can be used to keep an offsite copy, which is always up-to-date. If you have two active backup nodes you can stop Axon Server on one of them to make a backup of the event store files, while the other will continue receiving updates. The passive backup node provides an alternative strategy, where the context leader will send updates asynchronously. While this does not give you the guarantee that you are always up-to-date, the events will eventually show up, and even with a single backup instance you can bring Axon Server down and make file backups without affecting the cluster availability. When it comes back online the leader will immediately start sending the new data.

A consequence of the support for multiple contexts and different roles, each settable per node, is that those individual nodes can have pretty big differences in the services they provide to the client applications. In that case increasing the number of nodes does not have the same effect on all contexts: although the messaging load will be shared by all nodes supporting a context, the Event Store has to distribute the data to an additional node, and a majority needs to acknowledge storage before the client can continue. Another thing to remember is that the “ACTIVE_BACKUP” and “PASSIVE_BACKUP” roles have specific, Raft-related meanings, even though the names may suggest different interpretations from the world of High Availability. In general, an Axon Server node’s role does not change just to solve an availability problem. The cluster can keep functioning as long as a majority of the nodes is available for a context, but if this majority is lost for the “_admin” context, cluster configuration changes cannot be committed either.

For a local running cluster, we need to make a few additions to our “common” set of properties, the most important of which concern cluster initialization: When a node starts, it does not yet know if it will become the nucleus of a new cluster, or will be added to an existing cluster with a specific role. So if you start Axon Server EE and immediately start connecting client applications, you will receive an error message indicating that there is no initialized cluster available. If you just want a cluster with all nodes registered as “PRIMARY”, you can add the autocluster properties:
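A sketch of these properties (names per Axon Server EE 4.x; “axonserver-1” is a hypothetical hostname for the first node):

```properties
axoniq.axonserver.autocluster.first=axonserver-1
axoniq.axonserver.autocluster.contexts=_admin,default
```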

With these added, the node whose hostname and cluster-internal port match the “first” setting (with no port specified, the default of 8224 is assumed) will initialize the “default” and “_admin” contexts if needed. The other nodes will use the specified hostname and port to register themselves with the cluster, and request to be added to the given contexts. A typical solution for starting a multi-node cluster on a single host is to use the port properties to have the nodes expose themselves next to each other. The second node would then use:
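Assuming the first node keeps the defaults of 8024, 8124, and 8224, the second node’s settings could look like this (property names per Axon Server EE):

```properties
server.port=8025
axoniq.axonserver.port=8125
axoniq.axonserver.internal-port=8225
```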


The third can use 8026, 8126, and 8226. In the next installment we’ll be looking at Docker deployments, and we’ll also customize the hostname used for the cluster-internal communication.

Access control for the UI and Client Applications

Maybe a little explanation is needed around enabling access control, especially from the perspective of the client. As mentioned above, the effect is that client applications must provide a token when connecting to Axon Server. This token is used for both HTTP and gRPC connections, and Axon Server uses a custom HTTP header named “AxonIQ-Access-Token” for this. For Standard Edition there is a single token for both connection types, while Enterprise Edition maintains a list of applications and generates a UUID as a token for each. The cluster-internal port uses yet another token, which needs to be configured in the properties file using “axoniq.axonserver.internal-token”.

A separate kind of authentication, possible only for the HTTP port, uses a username and password. This is generally used for the UI, which shows a login screen if access control is enabled, but it can also be used for REST calls using BASIC authentication:

$ curl -u admin:test http://localhost:8024/v1/public/users

The CLI is also a kind of client application, but only through the REST API. As we saw earlier you can use the token to connect when access control is enabled, but if you try this with Axon Server EE, you will notice that this road is closed. The reason is the replacement of the single, system-wide token with the application specific tokens. Actually, there still is a token for the CLI, but it is now local per node and generated by Axon Server, and it is stored in a file called “security/.token”, relative to the node’s working directory. We also encountered this file when we looked at providing the token to the CLI. We will get back to this in part two, when we look at Docker and Kubernetes, and introduce a secret for it.

End of part one

This ends the first installment of this series on running Axon Server. In part two we will be moving to Docker, docker-compose, and Kubernetes, and have some fun with the differences they bring us concerning volume management. See you next time!

About the Author

Bert Laverman is a Senior Software Architect and Developer at AxonIQ, with over 25 years of experience, the last years mainly with Java. He was co-founder of the Internet Access Foundation, a nonprofit organization which was instrumental in unlocking the north and east of the Netherlands to the Internet. Starting as developer in the nineties he moved to Software Architecture, with a short stint as strategic consultant and Enterprise Architect at an insurer. Now at AxonIQ, the startup behind the Axon Framework and Axon Server, he works on the development of its products, with a focus on Software Architecture and DevOps, as well as helping customers.
