Dynamically Reconfigurable Event Driven Systems at QCon NY

Danny Gooverts, CTO at The Glue, presented at QCon New York a solution architecture that enables banks to evolve and keep pace with market trends and needs. The solution combines event-based service modeling, an in-memory data processing grid, and Docker-based deployments to achieve scalability and exactly-once processing.

Gooverts starts by describing the motivating factors for change in banks. Regulators keep introducing new policies. New players, such as mobile payment providers, want to eat the banks' lunch. Customers expect better services. At the same time, the complexity and constraints of banking services make them difficult to change significantly and rapidly.

The time to market is currently measured in years, which is unacceptable by most current standards. Another challenge is performance and scalability: bank services were built a long time ago with branch access in mind, not internet scale.

The concept at the center of the design is called a journey, a term commonly used in the financial sector. A journey is a stateful representation of a business process. A journey refers to both the data and the business logic associated with that data. For example, a payment is a short-term journey whereas a mortgage is a long-term journey.
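
The talk did not show the journey abstraction in code, but a minimal sketch of the idea could look as follows in Java; the interface and method names (Journey, onEvent, isComplete) are illustrative assumptions, not part of The Glue framework.

import java.util.UUID;

// Illustrative sketch only: a journey as a stateful unit that combines the
// data of a business process with the logic that acts on that data.
interface Journey<S> {
    UUID id();                  // identity of the business process instance
    S state();                  // current data of the journey
    void onEvent(Object event); // business logic applied to incoming events
    boolean isComplete();       // short-lived (payment) vs long-lived (mortgage)
}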

An in-memory data grid is used to store data. This choice is motivated by the need for scalability, availability and performance. Gooverts explains his team chose Apache Ignite. The technology is considered mature, which is an important characteristic for adoption in banks.
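
As a rough idea of what storing journeys in the grid involves, here is a minimal Apache Ignite example; the cache name, key type and backup count are assumptions for illustration, not details from the talk.

import java.util.UUID;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class GridExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Partition journey data across the grid and keep one backup copy
            // per entry so a single node failure does not lose data.
            CacheConfiguration<UUID, String> cfg = new CacheConfiguration<>("journeys");
            cfg.setCacheMode(CacheMode.PARTITIONED);
            cfg.setBackups(1);

            IgniteCache<UUID, String> journeys = ignite.getOrCreateCache(cfg);

            UUID id = UUID.randomUUID();
            journeys.put(id, "{\"type\":\"payment\",\"status\":\"initiated\"}");
            System.out.println(journeys.get(id));
        }
    }
}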

The in-memory data grid, despite being highly available, may lose data in some cases, such as a complete shutdown of the whole grid. Since losing data is a no-go for banks, MariaDB is used as a persistent cache store. The datastore is used in a simple way, storing objects directly in JSON format.
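
Ignite supports read-through and write-through cache stores, which is one plausible way such a persistent JSON store could be wired to MariaDB. The sketch below is an assumption about the approach; the table layout, connection details and class name are invented for illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import javax.cache.Cache;

import org.apache.ignite.cache.store.CacheStoreAdapter;

// Write-through store persisting each journey's JSON value into MariaDB.
public class JsonJdbcStore extends CacheStoreAdapter<String, String> {
    private static final String URL = "jdbc:mariadb://localhost:3306/journeys"; // assumed

    @Override public String load(String key) {
        try (Connection c = DriverManager.getConnection(URL, "user", "password");
             PreparedStatement ps = c.prepareStatement("SELECT json FROM journey WHERE id = ?")) {
            ps.setString(1, key);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        } catch (SQLException e) { throw new RuntimeException(e); }
    }

    @Override public void write(Cache.Entry<? extends String, ? extends String> entry) {
        try (Connection c = DriverManager.getConnection(URL, "user", "password");
             PreparedStatement ps = c.prepareStatement("REPLACE INTO journey (id, json) VALUES (?, ?)")) {
            ps.setString(1, entry.getKey());
            ps.setString(2, entry.getValue());
            ps.executeUpdate();
        } catch (SQLException e) { throw new RuntimeException(e); }
    }

    @Override public void delete(Object key) {
        try (Connection c = DriverManager.getConnection(URL, "user", "password");
             PreparedStatement ps = c.prepareStatement("DELETE FROM journey WHERE id = ?")) {
            ps.setString(1, key.toString());
            ps.executeUpdate();
        } catch (SQLException e) { throw new RuntimeException(e); }
    }
}

Such a store would be attached to the cache configuration with setCacheStoreFactory, together with setWriteThrough(true) and setReadThrough(true), so the grid transparently persists and reloads entries.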

For deployment, a baseline Docker image is established that contains the software required to build a new journey. The base image contains the JVM, Apache Ignite and The Glue framework for event handling. When a container boots, it automatically joins the grid through service discovery and is ready to receive events. The following image shows all the pieces together, with different journeys and different versions of the same journey.
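
On the code side, the boot sequence of such a container could plausibly look like the following in Java; multicast-style discovery is only an assumption here, and a Kubernetes or static-IP finder would work the same way.

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder;

// Container entry point: on boot the node joins the Ignite grid via
// discovery and is then ready to receive events.
public class JourneyNode {
    public static void main(String[] args) {
        TcpDiscoverySpi discovery = new TcpDiscoverySpi()
            .setIpFinder(new TcpDiscoveryMulticastIpFinder());

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setIgniteInstanceName("journey-node")
            .setDiscoverySpi(discovery);

        Ignition.start(cfg); // joins the grid; the node can now receive events
    }
}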

A new version of a journey is deployed in parallel while other versions keep running. This requires the data to be both forward and backward compatible. Differences in the data are handled through schema versioning.
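
One common way to make JSON data tolerate parallel journey versions is an explicit schema version field, with defaults applied for attributes introduced later. The sketch below uses Jackson and invented field names purely to illustrate the idea; it is not The Glue's actual versioning scheme.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

// Older and newer journey versions read the same JSON: unknown fields are
// ignored (forward compatibility) and missing fields get defaults
// (backward compatibility).
public class PaymentSchema {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    static JsonNode upgrade(String json) throws Exception {
        ObjectNode node = (ObjectNode) MAPPER.readTree(json);
        int version = node.path("schemaVersion").asInt(1); // treat missing version as v1

        if (version < 2 && !node.has("currency")) {
            node.put("currency", "EUR"); // default for a field introduced in v2
        }
        node.put("schemaVersion", 2);
        return node;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(upgrade("{\"amount\": 100}"));
        // -> {"amount":100,"currency":"EUR","schemaVersion":2}
    }
}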

Events are routed through the Apache Ignite grid. The architecture also enables exactly-once processing of events. This property is important in this context, as an event in a financial setting can neither be lost nor processed multiple times.
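
The talk did not detail how exactly-once processing is implemented, but a typical building block is deduplication keyed on the event identity, combined with a transactional state update. The Ignite-based sketch below shows that pattern under those assumptions; it is not The Glue's actual mechanism, and the cache and class names are invented.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;

public class ExactlyOnceHandler {
    private final Ignite ignite;
    private final IgniteCache<String, Boolean> processed; // event id -> processed marker
    private final IgniteCache<String, String> journeys;   // journey id -> JSON state

    public ExactlyOnceHandler(Ignite ignite) {
        this.ignite = ignite;
        this.processed = ignite.getOrCreateCache(
            new CacheConfiguration<String, Boolean>("processedEvents")
                .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL));
        this.journeys = ignite.getOrCreateCache(
            new CacheConfiguration<String, String>("journeyState")
                .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL));
    }

    public void handle(String eventId, String journeyId, String newState) {
        try (Transaction tx = ignite.transactions().txStart()) {
            // putIfAbsent returns false for a redelivered event, so the state
            // update is applied at most once; losing the event is prevented by
            // the sender retrying until the transaction commits.
            if (processed.putIfAbsent(eventId, Boolean.TRUE)) {
                journeys.put(journeyId, newState);
            }
            tx.commit();
        }
    }
}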

A journey node contains everything required to execute in a standalone setup. This makes it possible to measure the throughput of a single journey node directly and to determine whether more nodes or performance tuning are needed, without having to reproduce the whole production infrastructure.
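
A single-node throughput check can then be as simple as pushing synthetic events through the node's handler and timing the run. The event shape and handler wiring below are assumptions made for illustration.

import java.util.UUID;
import java.util.function.Consumer;

// Push N synthetic events through one journey node's handler and report
// events per second; only the measurement pattern is shown here.
public class ThroughputProbe {
    public static void measure(Consumer<String> journeyHandler, int events) {
        long start = System.nanoTime();
        for (int i = 0; i < events; i++) {
            journeyHandler.accept("{\"eventId\":\"" + UUID.randomUUID() + "\",\"amount\":100}");
        }
        double seconds = (System.nanoTime() - start) / 1_000_000_000.0;
        System.out.printf("%d events in %.2fs -> %.0f events/s%n", events, seconds, events / seconds);
    }
}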

One notable caveat is that moving data is an expensive operation. Routing and data co-location must be handled accurately for optimal performance. Adding and removing nodes triggers rebalancing, which means data is moved across nodes. Consequently, the impact of adding and removing nodes must be considered and may require more planning than with a stateless architecture.
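
In Ignite, co-location is typically expressed by marking the routing key with @AffinityKeyMapped, so that all entries sharing a journey id land on the same node and events are processed next to the journey state they belong to. The class and field names below are illustrative assumptions.

import java.util.Objects;

import org.apache.ignite.cache.affinity.AffinityKeyMapped;

// Cache key for events: the journeyId is the affinity key, so an event is
// stored on the same node as the journey it targets.
public class EventKey {
    private final String eventId;

    @AffinityKeyMapped
    private final String journeyId;

    public EventKey(String eventId, String journeyId) {
        this.eventId = eventId;
        this.journeyId = journeyId;
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof EventKey)) return false;
        EventKey other = (EventKey) o;
        return eventId.equals(other.eventId) && journeyId.equals(other.journeyId);
    }

    @Override public int hashCode() {
        return Objects.hash(eventId, journeyId);
    }
}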
