Banking on Thousands of Microservices

Key Takeaways

  • In the financial services industry, Monzo has been an early adopter of cloud technologies and Kubernetes, fighting teething problems and early outages. But the early investment in Kubernetes and microservices has since paid off immensely.
  • If an organization is not willing or able to provide the necessary investment and support for the latest innovative technologies, it's better to deploy more well-understood systems to maximize the chance of success.
  • Always pay attention to the bridge between infrastructure and apps: a well-defined and easy-to-use interface hides the complexity and burden of the infrastructure and improves the process for engineers.
  • Embrace uniformity as much as possible, building on a stack that everyone contributes to, leading to a higher level of abstraction and faster iteration time.
  • Systems are not built and operated in a vacuum: behaviors in your organization play a huge role in determining the success of the technological architecture, with incidents and outages providing a unique opportunity for analysis and introspection.

In this article, I aim to share some of the practical lessons we have learned while constructing our architecture at Monzo. We will delve into both our successful endeavors and our unfortunate mishaps.

We will discuss the intricacies involved in scaling our systems and developing appropriate tools, enabling engineers to concentrate on delivering the features that our customers crave.

Monzo's Goal

Our objective at Monzo is to democratize access to financial services. With a customer base of 7 million, streamlining our processes is essential, and we have several payment integrations to maintain.

Some of these integrations still rely on FTP file transfers, many with distinct standards, rules, and criteria.

We continuously iterate on these systems to ensure that we can roll out new features to our customers without exposing the underlying complexities and restricting our product offerings.

In September 2022, we became direct participants in the Bacs scheme, which facilitates direct debits and credits in the UK.

Monzo had been integrated with Bacs since 2017, but through a partner who handled the integration on our behalf.

Last year we built the integration directly over the SWIFT network, and we successfully rolled it out to our customers with no disruption.

This example of seamless integration will be relevant throughout this article.

Our Tech Stack

A pivotal decision was to build all our infrastructure and services on top of AWS, which was unprecedented in the financial services industry at the time. While the Financial Conduct Authority was still issuing initial guidance on cloud computing and outsourcing, we were among the first companies to deploy on the cloud. We have a few data centers for payment scheme integration, but our core platform runs on the services we build on top of AWS with minimal computing for message interfacing.

With AWS, we had the necessary infrastructure to run a bank, but we also needed modern software. While pre-built solutions exist, most rely on processing everything on-premise. Monzo aimed to be a modern bank, unburdened by legacy technology, designed to run in the cloud.

Adoption of Microservices

The decision to use microservices was made early on. To build reliable banking technology, the company needed a dependable system to store money. Initially, services were created to handle the banking ledger, signups, accounts, authentication, and authorization. These services are context-bound and manage their own data. The company used static code generation to marshal data between services, which makes it easier to establish a solid API and a semantic contract for entities and their behavior.

Separating entities between different database instances is also easier with this approach. For example, the transaction model holds only a reference to the account entity, while all the other account information lives within the account service. The account service is called via a Remote Procedure Call (RPC) to retrieve the full account information.
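As an illustration of that pattern, here is a minimal sketch in Go: the transaction model carries only an account identifier, and the full account record is fetched from the account service over RPC. The types and the AccountClient interface are hypothetical stand-ins for the code generated from a service's API definition, not Monzo's actual generated code.

package main

import (
	"context"
	"fmt"
)

// Transaction holds only a reference to the account, not the account data itself.
type Transaction struct {
	ID        string
	AccountID string
}

// Account is owned by the account service and lives in its keyspace.
type Account struct {
	ID     string
	Owner  string
	Status string
}

// AccountClient stands in for the client code generated from the account
// service's API definition (hypothetical, for illustration only).
type AccountClient interface {
	ReadAccount(ctx context.Context, accountID string) (*Account, error)
}

// stubAccounts simulates the RPC so the example runs without a real service.
type stubAccounts struct{}

func (stubAccounts) ReadAccount(ctx context.Context, id string) (*Account, error) {
	return &Account{ID: id, Owner: "A. Customer", Status: "open"}, nil
}

// describe enriches a transaction with data owned by the account service.
func describe(ctx context.Context, tx Transaction, accounts AccountClient) (string, error) {
	acc, err := accounts.ReadAccount(ctx, tx.AccountID) // RPC to the account service
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("tx %s on account %s (%s)", tx.ID, acc.ID, acc.Status), nil
}

func main() {
	out, err := describe(context.Background(), Transaction{ID: "tx_001", AccountID: "acc_001"}, stubAccounts{})
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}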

During the early days of Monzo, before the advent of service meshes, RPC was used over RabbitMQ, which was responsible for load balancing and deliverability of messages, with a request queue and a reply queue.

Figure 1: RabbitMQ in Monzo’s early days
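To make the request-queue and reply-queue pattern concrete, here is a minimal sketch of RPC over RabbitMQ in Go using the amqp091-go client. The queue names, correlation IDs, and payloads are assumptions for illustration, not Monzo's actual implementation.

package main

import (
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	// Connection details are placeholders; this assumes a local RabbitMQ broker.
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	defer ch.Close()

	// A request queue owned by the callee and an exclusive reply queue owned by the caller.
	reqQ, _ := ch.QueueDeclare("service.account.request", false, false, false, false, nil)
	replyQ, _ := ch.QueueDeclare("", false, true, true, false, nil)

	// Callee side: consume requests and publish replies to the queue named in ReplyTo.
	reqs, _ := ch.Consume(reqQ.Name, "", true, false, false, false, nil)
	go func() {
		for d := range reqs {
			ch.Publish("", d.ReplyTo, false, false, amqp.Publishing{
				CorrelationId: d.CorrelationId,
				Body:          []byte(`{"account":"acc_001","status":"open"}`),
			})
		}
	}()

	// Caller side: publish a request tagged with a correlation ID and wait for the reply.
	replies, _ := ch.Consume(replyQ.Name, "", true, false, false, false, nil)
	const corrID = "req-123"
	ch.Publish("", reqQ.Name, false, false, amqp.Publishing{
		CorrelationId: corrID,
		ReplyTo:       replyQ.Name,
		Body:          []byte(`{"account_id":"acc_001"}`),
	})
	for d := range replies {
		if d.CorrelationId == corrID {
			log.Printf("reply: %s", d.Body)
			return
		}
	}
}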

Today, Monzo uses HTTP requests: when a customer makes a payment with their card, multiple services get involved in real-time to decide whether the payment should be accepted or declined. These services come from different teams, such as the payments team, the financial crime domain team, and the ledger team.

Figure 2: A customer paying for a product with a card

Monzo doesn't want to build separate account and ledger abstractions for each payment scheme, so many of the services and abstractions need to be agnostic and able to scale independently to handle different payment integrations.

Cassandra as a Core Database

We made the decision early on to use Cassandra as our main database for services, with each service operating under its own keyspace. This strict isolation between keyspaces meant that a service could not directly read data from another service.

Figure 3: Cassandra at Monzo

Cassandra is an open-source NoSQL database that distributes data across multiple nodes based on partitioning and replication, allowing the cluster to grow and shrink dynamically. It is an eventually consistent system with last-write-wins semantics, and it relies on timestamps and quorum-based reads to provide stronger consistency where needed.

Monzo set a replication factor of 3 for the account keyspace and defined a query with a local quorum to reach out to the three nodes owning the data and return when the majority of nodes agreed on the data. This approach allowed for a more powerful and scalable database, with fewer issues and better consistency.
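As an illustration of what such a quorum read looks like, here is a minimal sketch in Go using the gocql driver with the LOCAL_QUORUM consistency level; the contact points, table, and column names are hypothetical.

package main

import (
	"log"

	"github.com/gocql/gocql"
)

func main() {
	cluster := gocql.NewCluster("cassandra-1", "cassandra-2", "cassandra-3")
	cluster.Keyspace = "account"
	// Require a majority of the three replicas in the local data centre to agree.
	cluster.Consistency = gocql.LocalQuorum

	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	var ownerID string
	if err := session.Query(
		`SELECT owner_id FROM accounts WHERE account_id = ?`, "acc_001",
	).Scan(&ownerID); err != nil {
		log.Fatal(err)
	}
	log.Printf("owner: %s", ownerID)
}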

In order to distribute data evenly across nodes and prevent hot partitions, it's important to choose a good partitioning key for your data. However, finding the right partitioning key can be challenging as you need to balance fast access with avoiding duplication of data across different tables. Cassandra is well-suited for this task, as it allows for efficient and inexpensive data writing.

Iterating over the entire dataset in Cassandra can be expensive and transactions are also lacking. To work around these limitations, engineers must be trained to model data differently and adopt patterns like canonical and index tables: data is written in reverse order to these tables, first to the index tables, and then to the canonical table, ensuring that the writes are fully complete.

For example, when adding a point of interest to a hotel, the data would first be written to the pois_by_hotel table, then to the hotels_by_poi table, and finally to the hotels table as the canonical table.

Figure 4: Hotel example, with the hard-to-read points of interest table
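A minimal sketch of that write order, reusing the hotel example, might look like the following; the table schemas are assumptions, and the *gocql.Session is the kind created in the previous snippet.

package hotels

import "github.com/gocql/gocql"

// addPointOfInterest writes the index tables first and the canonical table
// last, following the pattern described above.
func addPointOfInterest(session *gocql.Session, hotelID, poiID, name string) error {
	// 1. Index tables first, so lookups never point at data that is missing.
	if err := session.Query(
		`INSERT INTO pois_by_hotel (hotel_id, poi_id, name) VALUES (?, ?, ?)`,
		hotelID, poiID, name,
	).Exec(); err != nil {
		return err
	}
	if err := session.Query(
		`INSERT INTO hotels_by_poi (poi_id, hotel_id) VALUES (?, ?)`,
		poiID, hotelID,
	).Exec(); err != nil {
		return err
	}
	// 2. Canonical table last: a row visible here is guaranteed to be
	// reachable through its index tables.
	return session.Query(
		`UPDATE hotels SET poi_ids = poi_ids + ? WHERE hotel_id = ?`,
		[]string{poiID}, hotelID,
	).Exec()
}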

Migration to Kubernetes

Although scalability is beneficial, it also brings complexity and requires learning how to write data reliably. To mitigate this, we provide abstractions and autogenerated code for our engineers. To ensure highly available services and data storage, we have been using Kubernetes since 2016. Although it was still in its early releases, we saw its potential as an open-source orchestrator for application development and operations. We had to become proficient in operating Kubernetes, as managed offerings and comprehensive documentation were unavailable at the time, but our expertise in Kubernetes has since paid off immensely.

In mid-2016, the decision was made to switch to HTTP and use Linkerd for service discovery and routing. This improved load balancing and resiliency properties, especially in the event of a slow or unreliable service instance.

However, there were some problems, such as the outage experienced in 2017 when an interaction between Kubernetes and etcd caused service discovery to fail, leaving no healthy endpoints. This is an example of teething problems that arise with emerging and maturing technology. There are many stories of similar issues on k8s.af, a valuable resource for teams running Kubernetes at scale. Rather than seeing these outages as reasons to avoid Kubernetes, they should be viewed as learning opportunities.

We initially made tech choices for a small team, but later scaled to 300 engineers, 2500 microservices, and hundreds of daily deployments. To manage that, we have separate services and data boundaries and our platform team provides infrastructure and best practices embedded in core abstractions, letting engineers focus on business logic.

Figure 5: Shared Core Library Layer

We use uniform templates and shared libraries for data marshaling, HTTP servers, and metrics, providing logging and tracing by default.
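To illustrate the idea, here is a minimal sketch of a scaffold that gives every handler metrics and logging by default, using the Prometheus Go client; the helper names are hypothetical and do not reflect Monzo's actual shared libraries.

package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var requests = promauto.NewCounterVec(
	prometheus.CounterOpts{Name: "handler_requests_total", Help: "Requests by path."},
	[]string{"path"},
)

// instrument wraps any handler with the metrics and logging every service gets by default.
func instrument(path string, h http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		requests.WithLabelValues(path).Inc()
		log.Printf("handling %s %s", r.Method, path)
		h(w, r)
	}
}

func main() {
	http.HandleFunc("/account", instrument("/account", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`{"status":"ok"}`))
	}))
	// Metrics endpoint scraped by Prometheus.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}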

The Observability Stack

Monzo uses various open-source tools for its observability stack, such as Prometheus, Grafana, OpenTelemetry, and Elasticsearch. We heavily invest in collecting telemetry data from our services and infrastructure, with over 25 million metric samples and hundreds of thousands of spans being scraped at any one point. Every new service that comes online immediately generates thousands of metrics, which engineers can view on templated dashboards. These dashboards also feed into automated alerts, which are routed to the appropriate team.

For example, the company used telemetry data to optimize the performance of the new customer feature Get Paid Early. When the new option caused a spike in load, we had issues with service dependencies becoming part of the hot path and not being provisioned to handle the load. We couldn't statically encode this information because it continuously shifted, and autoscaling wasn't reliable. Instead, we used Prometheus and tracing data to dynamically analyze the services involved in the hot path and scale them appropriately. Thanks to the use of telemetry data, we reduced the human error rate and made the feature self-sufficient.

Abstracting Away Platform Infra

We aim to simplify how engineers interact with platform infrastructure by abstracting it away from them. We have two reasons for this: engineers should not need a deep understanding of Kubernetes, and we want to offer a set of opinionated features that we actively support and have a strong grasp of.

Since Kubernetes offers a vast range of functionality, it can be used in many different ways. Our goal is to provide a higher level of abstraction that eases the workload for application engineering teams and minimizes our personnel cost in running the platform. Engineers are not required to work with Kubernetes YAML.

If an engineer needs to implement a change, we provide tools that will check the accuracy of their modifications, construct all relevant Docker images in a clean environment, generate all Kubernetes manifests, and deploy everything.

Figure 6: How an engineer deploys a change

We are currently undertaking a major project to move our Kubernetes infrastructure from our self-hosted platform to Amazon EKS, and this transition has also been made seamless by our deployment pipeline.

If you're interested in learning more about our deployment approach, code generation, and our service catalog, I gave a talk at QCon London 2022 where I discussed the tools we have developed, as well as our philosophy towards the developer experience.

Embracing Failure Modes of Distributed Systems

We recognize that distributed systems are prone to failure and that it is important to acknowledge and accept this. In the case of a write operation, issues may occur, leaving uncertainty as to whether the data has been successfully written.

Figure 7: Handling failures on Cassandra

This can result in inconsistencies when reading the data from different nodes, which can be problematic for a banking service that requires consistency. To address this issue, the team has been using a separate service running continuously in the background that is responsible for detecting and resolving inconsistent data states. This service can either flag the issue for further investigation or even automate the correction process. Alternatively, validation checks can be run when there is a user-facing request, but we noticed that this can lead to delays.
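As a sketch of such a background check, the following Go function scans an index table and flags rows that have no canonical counterpart, reusing the hypothetical hotel tables from the earlier example; whether the flag raises an alert or triggers an automated repair is left to the caller.

package coherence

import (
	"context"

	"github.com/gocql/gocql"
)

// CheckPOIIndexes scans an index table and flags any row whose canonical
// hotel record is missing. It runs continuously in the background, so the
// cost of the full scan is spread over time.
func CheckPOIIndexes(ctx context.Context, session *gocql.Session, flag func(hotelID, poiID string)) error {
	iter := session.Query(`SELECT hotel_id, poi_id FROM pois_by_hotel`).WithContext(ctx).Iter()
	var hotelID, poiID string
	for iter.Scan(&hotelID, &poiID) {
		var count int
		if err := session.Query(
			`SELECT COUNT(*) FROM hotels WHERE hotel_id = ?`, hotelID,
		).WithContext(ctx).Scan(&count); err != nil {
			return err
		}
		if count == 0 {
			// Inconsistent state: an index row exists without a canonical row.
			flag(hotelID, poiID)
		}
	}
	return iter.Close()
}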

Figure 8: Kafka and the coherence service

Coherence services are beneficial for the communication between infrastructure and services: Monzo uses Kafka clusters and Sarama-based libraries to interact with Kafka. To ensure confidence in updates to these libraries and Sarama, coherence services are continuously run in both staging and production environments. These services utilize the libraries like any other microservice and can identify problems caused by accidental changes to the library or Kafka configuration before they affect production systems.
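A minimal sketch of such a check, assuming a dedicated canary topic and broker addresses chosen for illustration, could use the Sarama client to produce a message and verify that it can be consumed back within a deadline:

package main

import (
	"log"
	"time"

	"github.com/IBM/sarama"
)

func main() {
	brokers := []string{"kafka-1:9092"}
	topic := "coherence-check"

	cfg := sarama.NewConfig()
	cfg.Producer.Return.Successes = true // required by the SyncProducer

	producer, err := sarama.NewSyncProducer(brokers, cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer producer.Close()

	consumer, err := sarama.NewConsumer(brokers, cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer consumer.Close()

	pc, err := consumer.ConsumePartition(topic, 0, sarama.OffsetNewest)
	if err != nil {
		log.Fatal(err)
	}
	defer pc.Close()

	// Produce a canary message through the same library the services use.
	if _, _, err := producer.SendMessage(&sarama.ProducerMessage{
		Topic: topic,
		Value: sarama.StringEncoder("ping"),
	}); err != nil {
		log.Fatal(err)
	}

	// Fail loudly if the round trip does not complete in time.
	select {
	case msg := <-pc.Messages():
		log.Printf("coherence check ok: %s", msg.Value)
	case <-time.After(10 * time.Second):
		log.Fatal("coherence check failed: canary message not consumed")
	}
}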

The Organizational Aspect

Investment in systems and tooling is necessary for engineers to develop and run systems efficiently: the concepts of uniformity and "paved road" ensure consistency and familiarity, preventing the development of unmaintainable services with different designs.

From day one, Monzo has focused on getting new engineers onto the "paved road" by providing a documented process for writing and deploying code and a support structure for asking questions. The onboarding process is defined to establish long-lasting behaviors, ideas, and concepts, as it is difficult to change bad habits later on. Monzo continuously invests in onboarding, even having a "legacy patterns" section to highlight patterns to avoid in newer services.

While automated code modification tools are used for smaller changes, larger changes may require significant human refactoring to conform to new patterns, which takes time to implement across services. To prevent unwanted patterns or behaviors, Monzo uses static analysis checks to identify issues before they are shipped. Before making these checks mandatory, we ensure that the existing codebase is cleaned up to avoid engineers being tripped up by failing checks that are not related to their modifications. This approach ensures a high-quality signal, rather than engineers ignoring the checks. The high friction to bypass these checks is intentional to ensure that the correct behavior is the path of least resistance.
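As an illustration of this kind of check, here is a minimal sketch of a custom analyzer built on Go's go/analysis framework; the rule it enforces, flagging imports of a hypothetical legacy package, is an assumption and not one of Monzo's actual checks.

package main

import (
	"strings"

	"golang.org/x/tools/go/analysis"
	"golang.org/x/tools/go/analysis/singlechecker"
)

var Analyzer = &analysis.Analyzer{
	Name: "nolegacyrpc",
	Doc:  "flags imports of the legacy RPC package in new services",
	Run: func(pass *analysis.Pass) (interface{}, error) {
		for _, file := range pass.Files {
			for _, imp := range file.Imports {
				path := strings.Trim(imp.Path.Value, `"`)
				if path == "example.com/platform/legacyrpc" { // hypothetical package
					pass.Reportf(imp.Pos(), "use the current RPC library instead of %s", path)
				}
			}
		}
		return nil, nil
	},
}

func main() {
	singlechecker.Main(Analyzer)
}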

Handling Incidents

In April 2018, TSB, a high-street bank in the UK, underwent a problematic migration project to move customers to a new banking platform. This resulted in customers being unable to access their money for an extended period, which led to TSB receiving a multi-million-pound fine, paying nearly £33 million in compensation to customers, and suffering reputational damage. The FCA report on the incident examines both the technological and organizational aspects of the problem, including overly ambitious planning schedules, inadequate testing, and the challenge of balancing development speed with quality. While it may be tempting to solely blame technology for issues, the report emphasizes the importance of examining organizational factors that may have contributed to the outage.

Reflecting on past incidents and projects is highly beneficial in improving operations: Monzo experienced an incident in July 2019, when a configuration error in Cassandra during a scale-up operation forced a stop to all writes and reads to the cluster. This event set off a chain reaction of improvements spanning multiple years to enhance the operational capacity of the database systems. Since then, Monzo has invested in observability, deepening the understanding of Cassandra and other production systems, and we are more confident in all operational matters through runbooks and production practices.

Making the Right Choices

Earlier I mentioned the early technological decisions made by Monzo and the understanding that it wouldn't be an easy ride: over the last seven years, we have had to experiment, build, and troubleshoot through many challenges, and this process continues. If an organization is not willing or able to provide the necessary investment and support for complex systems, this must be taken into consideration when making architectural and technological choices: choosing the latest technology or buzzword without adequate investment is likely to lead to failure. Instead, it is better to choose simpler, more established technology that has a higher chance of success. While some may consider this approach to be boring, it is ultimately a safer and more reliable option.

Conclusions

Teams are always improving tools and raising the level of abstraction. By standardizing on a small set of technological choices and continuously improving these tools and abstractions, engineers can focus on the business problem rather than the underlying infrastructure. It is important to be conscious when systems deviate from the standardized road.

While there's a lot of focus on infrastructure in organizations, such as infrastructure as code, observability, automation, and Terraform, one theme often overlooked is the bridge between infrastructure and software engineers. Engineers don't need to be experts in everything and core patterns can be abstracted away behind a well-defined, tested, documented, and bespoke interface. This approach saves time, promotes uniformity, and embraces best practices for the organization.

Through different examples of incidents, we have highlighted the importance of introspection: while many may have a technical root cause, it's essential to dig deeper and identify any organizational issues that may have contributed. Unfortunately, most post-mortems tend to focus heavily on technical details, neglecting the organizational component.

It's essential to consider the impact of organizational behaviors and incentives on the success or failure of technical architecture. Systems don't exist in isolation, and monitoring and rewarding the operational stability, speed, security, and reliability of the software you build and operate is critical to success.
