
Hone Your Tools: Building Effective & Delightful Developer Experiences


Summary

Suhail Patel goes through Monzo’s early investment in Developer Tooling, showcasing Monzo’s deployment/release tooling which enables engineers to ship hundreds of times a day with confidence.

Bio

Suhail Patel is a Staff Engineer at Monzo focused on building the Core Platform. His role involves building and maintaining Monzo's infrastructure, which spans nearly two thousand microservices and leverages key infrastructure components like Kubernetes, Cassandra, etcd, and more. He focuses specifically on investigating deviant behaviour and ensuring services continue to work reliably.

About the conference

QCon Plus is a virtual conference for senior software engineers and architects that covers the trends, best practices, and solutions leveraged by the world's most innovative software organizations.

Transcript

Patel: I want to talk about curating the best set of tooling to build effective and delightful developer experiences. Here's a connected call graph of every one of our microservices. Each edge represents a service calling another service via the network. We rely heavily on standardization and automated tooling to help build, test, deploy, monitor, and iterate on each of these services. I want to showcase some of the tools that we've built to help us ship these services consistently and reliably to production.

Background Info

I'm Suhail. I'm one of the staff engineers from the platform group at Monzo. I work on the underlying platform powering the bank. We take care of all of the complexities of scaling our infrastructure and building the right set of tools, so that engineers in other teams can focus on building all the features that customers desire. For all those who haven't heard about Monzo, we are a fully licensed and regulated UK bank. We have no physical branches. You manage all of your money and finances via our app. At Monzo, our goal is to make money work for everyone. We deal with the complexity to make money management easy for all of you. As I'm sure many will attest, banking is a complex industry; we undertake the complexity in our systems to give you a better experience as a customer. We also have these really nice and striking coral debit cards, which you might have seen around. They glow under UV light.

Monzo Chat

When I joined Monzo in mid-2018, one of the initiatives that was kicking off internally was Monzo chat. We have quite a lot of support folks who act as a human point of contact for customer queries, for things like replacing a card, reporting a transaction, and much more. We provided functionality within the Monzo app to initiate a chat conversation, similar to what you have in any messaging app. Behind the scenes, the live chat functionality was previously powered by Intercom, which is a software-as-a-service support tool. It was really powerful and feature rich, and was integrated deeply into the Monzo experience. Each support person had access to a custom internal browser extension, which gave them extra Monzo-specific superpowers as a sidebar within Intercom. You can see it on the right there, showing the transaction history for this test user, directly and seamlessly integrated into the Intercom interface. With the Monzo chat project, we were writing our own in-house chat system and custom support tooling. Originally, I was skeptical. There are plenty of vendors out there that provide this functionality really well, with a ton of customizability. It felt like a form of undifferentiated heavy lifting, a side quest that didn't really advance our core mission of providing the best set of banking functionality to our customers.

Initially, the Monzo chat project invested heavily in providing a base level experience wrapped around a custom UI and backend. Messages could be sent back and forth. The support folks could do all of the base level actions they were able to do with Intercom, and the all-useful sidebar was integrated right into the experience. A couple of years in, and on the same foundations, Monzo chat and the UI you see above, which we now call BizOps, have allowed us to do some really innovative things, like experimenting with and integrating machine learning to understand the customer conversation and suggest a few actions to our customer operations staff, which they can pick and choose from. Having control of each component in the viewport allows us to provide contextual actions for our customer operations folks. What was previously just a chat product has become much more interactive. If a customer writes in to say that they've lost their card, we can provide a one-click action button to order a new card instantly and reassure the customer. We had this really nice UI within the BizOps Hub, and the amazing engineers that worked on it spent a bunch of time early on writing modularity into the system.

Each of the modules is called a weblet, and a weblet forms a task to be conducted within BizOps. The benefit of this modular architecture is that weblets can be developed and deployed independently. Teams aren't blocked on each other, and a team can be responsible for the software lifecycle of their own weblets. This means that UI and logic components can be customized, stitched together, and hooked up to a backend system. We've adopted the BizOps Hub for all sorts of internal back office tasks, and even things like peer reviewing certain engineering actions and monitoring security vulnerabilities. What was a strategic bet on a more efficient customer operations tool has naturally become a centralized company hub for task-oriented automation. In my mind, it's one of the coolest products that I've worked with, and a key force multiplier for us as a company.

Monzo Tooling

You're going to see this as a theme. We spent a lot of time building and operating our own tools from scratch, and leveraging open source tools with deep integrations to help them fit into the Monzo ecosystem. Many of these tools are built with modularity in mind. We have a wide range of tools that we have built and provide to our engineers, things like service query, which analyzes a backend service and extracts information on the various usages of that particular service, or the droid command that Android engineers have built to help with easier testing and debugging of our Android app during development.

Monzo Command Line Interface

One of the most ubiquitous tools across Monzo is our Monzo command line interface, or the PAN CLI as it's known internally. The Monzo CLI allows engineers to call our backend services, manage service configuration, schedule operations, and interact with some of our infrastructure, and much more. For example, I can type a find command with a user ID, and get all the information relating to that particular user from our various internal APIs. I don't need to go look up what those APIs are, or how they're called, or what parameters are needed. Here, I've used the same find command, but with a merchant ID, and I automatically get information about the merchant. The CLI has all of that knowledge baked in on what IDs are handled by which internal API sources. Engineers add new internal API sources all of the time, and they are integrated automatically with the PAN CLI.
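To make that concrete, here is a rough, hypothetical sketch of the kind of ID-to-source dispatch the find command implies. The prefixes, endpoint names, and lookup logic below are invented for illustration; Monzo's real CLI registers these sources from the services themselves and makes authenticated, audited calls.

```python
# Hypothetical sketch: route a "find <id>" command to the right internal
# API based on the ID's prefix. Prefixes and endpoints are invented.
ID_SOURCES = {
    "user_": "service.user/lookup",
    "merchant_": "service.merchant/lookup",
    "tx_": "service.transaction/lookup",
}


def find(entity_id: str) -> str:
    for prefix, endpoint in ID_SOURCES.items():
        if entity_id.startswith(prefix):
            # The real tool would make an authenticated, audited request here.
            return f"calling {endpoint} with {entity_id}"
    raise ValueError(f"no known API source for ID {entity_id!r}")


print(find("user_123"))
print(find("merchant_456"))
```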

These tools don't just function in isolation; behind the scenes, a lot of machinery kicks in. On our backend, we explicitly authenticate and authorize all of our requests to make sure that only data you are allowed to access for the scope of your work is accessible. We log all of these actions for auditing purposes. Sensitive actions don't just automatically run, they will create a task for review in our BizOps Hub. If you were so inclined, you could construct the requests made by the CLI tool by hand: find the correct endpoint for the service, get an authentication token, construct the right curl request (to this day, I still need to look up the right curl syntax), parse the output, and rinse and repeat. Imagine doing that in the heat of the moment when you're debugging a complex incident.

Using the CLI tooling, we have various modules to expose bits of functionality that might constitute a chain of requests. For example, configuration management for our backend microservices is all handled via the PAN CLI. Engineers can set configuration keys and request peer review for sensitive or critical bits of configuration. I see many power users proud of their command line history of adaptable shell commands; however, write a small tool and everyone can contribute and everyone benefits. We have many engineering-adjacent folks using the PAN CLI internally because of its ease of use.

Writing Interactive Command Line

Writing one of these interactive command line tools doesn't need to be complicated. Here's a little mini QCon London 2022 CLI tool that I've written. I wanted to see all of the amazing speaker tracks at QCon on offer. I'm using the built-in cmd module in Python 3, which provides a framework for line-oriented command interpreters. This gives a fully functioning interactive interpreter in under 10 lines of code. The framework is doing all of the heavy lifting; adding more commands is a matter of adding another do_ function. It's really neat. Let's add two more commands to get the list of speakers and the entire schedule for the conference. I've hidden away some of the code to deal with HTML parsing of the QCon web page, but we can have a fully functional interactive command line interpreter in tens of lines of code. We have the entire speaker list and schedule accessible easily right from the command line. A bit of competition for the QCon mobile app. If you're interested in the full code for this, you can find it at this link, https://bit.ly/37LLybz. Frameworks like these exist for most programming languages. Some might be built in, like the cmd library for Python 3.
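As a minimal sketch of what such a tool can look like (the track and speaker data here are placeholders; the real version linked above scrapes the QCon web page), a cmd-based interpreter needs little more than a class with do_ methods:

```python
# A mini interactive CLI built on Python 3's built-in cmd module.
# The track and speaker data below are placeholders.
import cmd


class QConCLI(cmd.Cmd):
    intro = "Welcome to the mini QCon CLI. Type help or ? to list commands."
    prompt = "(qcon) "

    def do_tracks(self, arg):
        """List the speaker tracks on offer."""
        for track in ["Track A", "Track B", "Track C"]:
            print(track)

    def do_speakers(self, arg):
        """List the speakers."""
        for speaker in ["Speaker One", "Speaker Two"]:
            print(speaker)

    def do_quit(self, arg):
        """Exit the interpreter."""
        return True  # returning a truthy value stops cmdloop()


if __name__ == "__main__":
    QConCLI().cmdloop()
```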

The Shipper CLI

Let's move on to another one of our CLI tools. This is a tool that we call shipper. We deploy hundreds of times per day, every day. Shipper helps orchestrate the build and deployment step, providing a CLI to get code from the engineer's fingertips into production. Engineers will typically develop their change and get it peer reviewed by the owning team for that particular service or system. Once that code is approved and all the various checks pass, they can merge that code into the mainline and use shipper to get it safely into production. Behind the scenes, shipper is orchestrating quite a lot of things. It runs a bunch of security pre-checks, making sure that the engineer has followed all of the process that they need to and that all the CI checks have passed. It then brings the service code from GitHub into a clean build environment. It builds the relevant container images, pushes them to a private container registry, and sets up all the relevant Kubernetes manifests. It then kicks off a rolling deployment and monitors that deployment to completion. All of this gives confidence to the engineers that the system is guiding them through a rollout of their change. We abstract away all of the infrastructure complexity of coordinating deployments, dealing with things like Docker, and writing Kubernetes YAML behind a nice-looking CLI tool. We can in the future change how we do things behind the scenes, as long as we maintain the same user experience.
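As an illustration only, the orchestration described above could be sketched as a linear pipeline of steps. Every step below is a stub that just logs; the real shipper integrates with GitHub, a container registry, and Kubernetes.

```python
# Illustrative only: the build-and-deploy steps described above, as a
# linear pipeline. Each step is a stub that logs what the real tool
# would do against GitHub, a container registry, and Kubernetes.
from typing import Callable, List


def step(name: str) -> Callable[[], None]:
    def run() -> None:
        print(f"[shipper] {name}")
    return run


PIPELINE: List[Callable[[], None]] = [
    step("run security pre-checks (process followed, CI green)"),
    step("fetch service code from GitHub into a clean build environment"),
    step("build container images"),
    step("push images to the private container registry"),
    step("render Kubernetes manifests"),
    step("kick off a rolling deployment"),
    step("monitor the rollout to completion"),
]


def ship(service: str, revision: str) -> None:
    print(f"[shipper] deploying {service} at {revision}")
    for run in PIPELINE:
        run()


ship("service.example", "abc1234")
```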

We see the abstraction of infrastructure as a marker for success. Engineers can focus on building the product, knowing that the tooling is taking care of the rest. If you're building CLI tools, consider writing them in a language like Go or Rust, which gives you a binary artifact. Being able to ship a binary and not have to worry about Python or Ruby versioning and dependencies, especially for non-engineering folks, means there's one less barrier to entry for adoption. There's a large ecosystem for CLI tools in both languages. We use Go for all of our services, so naturally, we write our tools in Go too.

Monzo's Big Bet on Microservices

Monzo has bet heavily on microservices; we have over 2000 microservices running in production. Many of these microservices are small and context bound to a specific function. This allows us to be flexible in scaling our services within our platform, but also within our organization as we grow teams and add more engineers. These services are responsible for the entire operation of the bank. Everything from connecting to payment networks, moving money, maintaining a ledger, fighting fraud and financial crime, and providing world class customer support. We provide all of the APIs to make money management easier, and much more. We integrate with loads of payment providers and facilitators to provide our banking experience, everything from MasterCard and the Faster Payments scheme to Apple Pay and Google Pay. The list keeps growing as we expand. We've been at the forefront of initiatives like open banking. We're expanding to the U.S., which means integrations with U.S. payment networks. Each of these integrations is unique and comes with its own set of complexities and scaling challenges. Each one needs to be reliable and scalable based on the usage that we see.

With such a wide variety of services, you need a way to centralize information about what services exist, what functionality they implement, which team owns them, how critical they are, and their dependencies, and even to have the ability to cluster services into business-specific systems. We're quite fortunate. Early on, we standardized on having a single repository for all of our services. Even so, we were missing a layer of structured metadata encoding all of this information. We had CODEOWNERS defined within GitHub, system and criticality information as part of a service level README, and dependencies tracked via our metrics system.

The Backstage Platform

Eighteen months ago, we started looking into Backstage. Backstage is a platform for building developer portals, open sourced by the folks at Spotify. In a nutshell, think of it as building a catalog of all the software you have, and having an interface to surface that catalog. This can include things like libraries, scripts, ML models, and more. For us to build this catalog, each of our microservices and libraries was seeded with a descriptor file. This is a YAML file that lives alongside the service code, which outlines the type of service, the service tier, system and owner information, and much more. This gave us an opportunity to define a canonical source of information for all this metadata that was previously spread across various files and systems. To stop this data from getting out of sync, we have a CI check that checks whether all data sources agree, failing if corrections are needed. This means we can rely on this data being accurate.
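As a hypothetical example of such a consistency check, a CI script might compare the owner recorded in a service's descriptor file against the owner in CODEOWNERS and fail if they disagree. The file layout, field names, and CODEOWNERS parsing below are invented for illustration.

```python
# Hypothetical CI check: fail if the descriptor file and CODEOWNERS
# disagree about a service's owner. File layout and fields are invented.
import sys

import yaml  # PyYAML


def owner_from_descriptor(path: str) -> str:
    with open(path) as f:
        return yaml.safe_load(f)["spec"]["owner"]


def owner_from_codeowners(codeowners_path: str, service_dir: str) -> str:
    with open(codeowners_path) as f:
        for line in f:
            parts = line.split()
            if parts and parts[0].strip("/") == service_dir.strip("/"):
                return parts[1].lstrip("@")
    raise SystemExit(f"{service_dir} has no CODEOWNERS entry")


if __name__ == "__main__":
    service_dir, descriptor_path, codeowners_path = sys.argv[1:4]
    a = owner_from_descriptor(descriptor_path)
    b = owner_from_codeowners(codeowners_path, service_dir)
    if a != b:
        print(f"owner mismatch for {service_dir}: descriptor says {a}, CODEOWNERS says {b}")
        sys.exit(1)
```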

We have a component in our platform that slurps up all the descriptor files and populates the Backstage portal with our software catalog. From there, we know all the components that exist. It's like a form of service discovery, but for humans. We've spent quite a lot of time customizing Backstage to provide key information that's relevant for our engineers. For example, we showcase the deployment history, service dependencies, documentation, and provide useful links to dashboards and escalation points. We use popular Backstage plugins like TechDocs to get our service level documentation into Backstage. This means all the README files are automatically available and rendered from markdown in a centralized location, which is super useful as an engineer.

One of the features I find the coolest is the excellent score. This is a custom feature that we've developed to help grade each of our services against some baseline criteria. We want to nudge engineers toward setting up alerts and dashboards where appropriate. We provide nudges and useful information on how to achieve that. It's really satisfying to be able to take a piece of software from a needs improvement score to excellent with clear and actionable steps. With these excellent scores, we want to encourage engineers to have great observability of their services. Within a microservice itself at Monzo, engineers focus on filling in the business logic for their service. Engineers are not rewriting core abstractions like marshaling of data, HTTP servers, or metrics for every single new service that they add. They can rely on a well-defined and tested set of libraries and tooling. All of these shared core layers provide batteries-included metrics, logging, and tracing by default.

Observability

Every single Go service using our libraries gets a wealth of metrics and alarms for free. Engineers can go to a common fully templated dashboard from the minute their new service is deployed, and see information about how long requests are taking, how many database queries are being done, and much more. This also feeds into our alerts. We have automated alerts for all services based on common metrics. Alerts are automatically routed to the right team, which owns that service, thanks to our software catalog feeding into the alerting system. That means we have good visibility and accurate ownership across our entire service graph. Currently, we're rolling out a project to bring automated rollbacks to our deployment system. We can use these service level metrics being ingested into Prometheus to give us an indicator of a service potentially misbehaving at a new revision, and trigger an automated rollback if the error rate spikes. We do this by having gradual rollout tooling, deploying a single replica at the new version of a service, directing a portion of traffic to that new version, and comparing against our stable version. Then we continue to roll out the new version of the service gradually, constantly checking our metrics, until we're at 100% rollout. We're only using RPC based metrics right now, but we can potentially add other service specific indicators in the future.
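A rough sketch of that canary comparison might look like the following, assuming RPC metrics are labeled by revision in Prometheus. The metric name, labels, Prometheus address, and threshold are all invented for illustration; only the Prometheus HTTP query API shape is standard.

```python
# Sketch of a canary-vs-stable error rate comparison against Prometheus.
# Metric name, labels, address, and threshold are invented for illustration.
import requests

PROMETHEUS = "http://prometheus.example:9090/api/v1/query"


def error_rate(revision: str) -> float:
    query = (
        f'sum(rate(rpc_requests_total{{status="error", revision="{revision}"}}[5m]))'
        f' / sum(rate(rpc_requests_total{{revision="{revision}"}}[5m]))'
    )
    resp = requests.get(PROMETHEUS, params={"query": query}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0


def should_roll_back(canary_revision: str, stable_revision: str, tolerance: float = 0.01) -> bool:
    # Roll back if the canary's error rate is meaningfully worse than stable's.
    return error_rate(canary_revision) > error_rate(stable_revision) + tolerance
```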

Similarly, we've spent a lot of time on our backend to unify our RPC layer, which every service uses to communicate with each other. This means things like trace IDs are automatically propagated. From there, we can use technologies like OpenTracing and OpenTelemetry, and open source tools like Jaeger, to provide rich traces of each service level hop. Our logs are also automatically indexed by trace ID into our centralized logging system, allowing engineers to filter request specific logging, which is super useful across service boundaries. This is important insight for us because a lot of services get involved in critical flows. Take, for example, a customer using their card to pay for something. Quite a few distinct services get involved in real time whenever you make a transaction to contribute to the decision on whether a payment should be accepted. We can use tracing information to determine exactly what those services and RPCs were, how long they contributed to the overall decision time, how many database queries were involved, and much more. This tracing information is coming full circle, back into the development loop. We use tracing information to collect all of the services and code paths involved for important critical processes within the company.

When engineers propose a change to a service, we indicate via an automated comment on their pull request if their code is part of an important path. This indicator gives a useful nudge at development time to consider algorithm complexity and scalability of a change. It's one less bit of information for engineers to mentally retain, especially since this call graph is constantly evolving over time as we continue to add new features and capabilities.
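A hypothetical sketch of that nudge: given the files touched by a pull request, flag any services that sit on a known critical path. In reality the critical-path set would be derived from tracing data and the comment posted via the source-control API; the service names here are made up.

```python
# Hypothetical pull-request nudge: flag changed services that sit on a
# critical path. The service names and path convention are made up; the
# real critical-path set would come from tracing data.
CRITICAL_PATH_SERVICES = {
    "service.card-auth",
    "service.ledger",
}


def critical_services_touched(changed_files: list) -> set:
    touched = {path.split("/")[0] for path in changed_files if "/" in path}
    return touched & CRITICAL_PATH_SERVICES


if __name__ == "__main__":
    changes = ["service.card-auth/handler.go", "service.profile/README.md"]
    hits = critical_services_touched(changes)
    if hits:
        print("Heads up: this change touches critical-path services:", ", ".join(sorted(hits)))
```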

Code Generation

Code generation is another key area we're focusing on. RPCs can be specified in proto files with the protocol buffers language to define the request method, request path, and all the parameters associated with an RPC. We've written generators on top of the protocol buffers compiler to generate the RPC code that embeds within our service HTTP libraries. The definition can include validation checks, which are autogenerated beyond the standard data type inferences you get from protobuf. What this means in the end is that each service is usually 500 to 1000 lines of actual business logic, a size that is really understandable for a group of engineers during review.

Earlier on, I talked about the approval system for our Monzo CLI. Many RPCs are sensitive and shouldn't be run without oversight, especially in production; some should be completely forbidden. We encode this information in our service definitions, so these rules are explicit and also go through peer review. Our authorization systems on the backend check these service definitions each time before a command is executed, verifying and logging each authorized request, or generating a review if a request needs secondary approval. We use option fields to add an authorization specific option within the RPC definition. Engineers can specify whether an RPC should require a secondary person to approve. By default, we run with RPCs being completely blacklisted from being called by engineers. This option provides a nice and frictionless way to open up an RPC whilst retaining review, oversight, and security.
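A small, hypothetical sketch of that kind of authorization gate is shown below. The field names (such as requires_approval) are invented; in practice this information would come from the generated protobuf RPC definitions, and every decision would be logged.

```python
# Hypothetical authorization gate: check an RPC's declared options before
# running it. Field names like requires_approval are invented; in practice
# they would be read from the generated protobuf definitions.
from dataclasses import dataclass


@dataclass
class RPCDefinition:
    name: str
    allowed_for_engineers: bool = False  # blacklisted by default
    requires_approval: bool = False


def authorize(rpc: RPCDefinition, actor: str) -> str:
    # Every decision would be logged for audit purposes.
    if not rpc.allowed_for_engineers:
        return f"DENY: {rpc.name} is not callable by {actor}"
    if rpc.requires_approval:
        return f"REVIEW: created an approval task in BizOps for {rpc.name}"
    return f"ALLOW: {rpc.name} executed by {actor} (logged)"


print(authorize(RPCDefinition("service.card/ReplaceCard", allowed_for_engineers=True,
                              requires_approval=True), "an-engineer"))
```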

We're using the protocol buffers language to generate more than just RPC code. We also generate queue handling code for example. Within the service proto file, engineers can specify an event payload definition, as well as all of the validation criteria. Behind the scenes, we have a protobuf compiler plugin, which generates Kafka consumer and producer logic, handling the marshaling and unmarshaling of the payload, and interacting with our Kafka transport libraries to produce and consume from a topic. Engineers can work with strongly typed objects and autogenerated function signatures, and don't need to worry about the underlying transport. For infrastructure teams, any bug fixes, new features, and performance improvements can be applied with ease.

One of my colleagues taught me a really cool feature of Git a couple of weeks back. When working with generated code, sometimes rebasing your Git branch can cause a merge conflict if someone else has also modified the same generated files. Git has functionality to call a script in the event of a merge conflict to try and resolve the conflict automatically. This is called a git merge driver; it's assigned to paths via the merge attribute in the .gitattributes file, with the driver command defined in your Git config. Our script runs the protobuf generator on the already rebased proto file, which usually merges cleanly itself. The result is the correct generated code with all of the changes that you expect. It is small things like these that are paper cuts in the day-to-day development process. By eliminating this small source of toil, I never need to think about rebase conflicts for generated protobuf code. The behavior is well defined and understood and exactly what I expect. Most important of all, it's automatic.
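For anyone curious what such a driver could look like, here is a hypothetical sketch. Git passes the driver the paths it substitutes for %O, %A, %B, and %P, and expects the resolved content to be written to the %A path; the generator command and the proto file layout below are placeholders.

```python
#!/usr/bin/env python3
# Hypothetical merge driver for generated protobuf code: ignore the three
# conflicting versions and regenerate from the already-merged .proto,
# writing the result over the %A path that Git expects the resolution in.
# "protoc-generator" and the file layout are placeholders.
import subprocess
import sys

if __name__ == "__main__":
    # Git substitutes %O %A %B %P on the driver command line.
    ancestor, current, other, pathname = sys.argv[1:5]
    proto = pathname.replace(".pb.go", ".proto")  # placeholder layout
    subprocess.run(["protoc-generator", proto, "--out", current], check=True)
    sys.exit(0)
```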

Static Analysis

At Monzo, the vast majority of microservices are written in Go, and follow a consistent folder and file structure and pattern. We have a service generator that generates all of the boilerplate code in a single command. Being able to assume a consistent structure allows us to write tools that can detect logic bugs, and even rewrite code on the fly. One of my favorite tools is a tool called Semgrep. Semgrep is an open source tool for writing static analysis rules. It's like grep, but it understands a wide variety of programming languages. Say, for example, you want to find all of the print statements that an engineer might have left behind for debugging. You can write a simple rule without having to go anywhere near the abstract syntax tree format of your programming language.

Static analysis is a powerful method to bring a code base towards a particular convention. Introducing a new rule stops further additions of something you want to deprecate or move away from. You can then track and focus on removing existing usages, then make the rule 100% enforced. If you find that a particular code bug contributed to an outage, consider adding it as a static analysis rule. It's a great form of documentation and education, as well as automatically catching that same bug in other components on the fly. When writing these tools, make them as informative as possible to make fixing issues a delight. Taking some time to write a code modification tool significantly lowers the work needed and can be a real driver for large scale refactoring across the codebase. Making these checks fast will encourage engineers to pitch in too. Many engineers at Monzo run these checks as pre-push hooks, because they get really fast feedback within a couple of seconds. If a check isn't mandatory, but provides a warning anyway, engineers may take that extra few minutes fixing an issue as part of their normal day to day work.
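For example, a pre-push hook that runs Semgrep and blocks the push on findings can be a few lines. The rules directory below is a placeholder; --error makes the Semgrep CLI exit non-zero when there are findings.

```python
#!/usr/bin/env python3
# Sketch of a pre-push hook that runs Semgrep and blocks the push if any
# rule fires. The rules directory is a placeholder.
import subprocess
import sys

result = subprocess.run(["semgrep", "--config", "tools/semgrep-rules/", "--error", "--quiet"])
if result.returncode != 0:
    print("Semgrep found issues; see the rule messages above for how to fix them before pushing.")
sys.exit(result.returncode)
```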

Combining static analysis with continuous integration allows for automation to catch bugs, freeing up engineers to focus more energy on reviewing the core business logic during pull requests. Even though we have over 2000 microservices in a single repository, we can run all of our unit tests and static analysis checks in under 10 minutes. For many changes, it's significantly faster thanks to test caching in Go and some tooling that we've developed internally to analyze our services and determine which services should be tested. It's something that we actively measure and track. A slow build means you ship less often.
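A hypothetical sketch of that kind of change-based selection: map the files changed on a branch to the services they belong to, and only run those services' tests. The directory naming convention is invented; the real tooling also follows dependencies between services and shared libraries.

```python
# Hypothetical change-based test selection in a monorepo: map changed
# files to services and only test those. Directory naming is invented;
# real tooling would also follow dependencies between services/libraries.
import subprocess


def changed_services(base: str = "origin/master") -> set:
    diff = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return {path.split("/")[0] for path in diff if path.startswith("service.")}


if __name__ == "__main__":
    for service in sorted(changed_services()):
        print(f"go test ./{service}/...")
```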

Summary

I've spoken quite a bit about the wide variety of tooling that we have. Much of this didn't exist on day one. Some of the work that we've done, such as automated rollbacks and critical path analysis using traces, are improvements that we've rolled out really recently. This investment is ongoing, and it's proportional to the size and scale of our organization. A lot of the baseline infrastructure I've mentioned, like Semgrep, Prometheus, and Jaeger, is open source, so you can build on the shoulders of others. Tools like our CLI and shipper were internal pet projects before they got adopted widely and broadened to become more flexible.

The tools you decide to build should be focused on automating the practices of your organization. For us, internal tools like shipper allow us to fully automate what would be a complex change management process in a highly regulated banking environment. We don't have a 20 step release procedure that humans have to follow, because all of that is encoded into the tool, and that's what allows us to go fast. By standardizing on a small set of technology choices, and continuously improving these tools and abstractions, we enable engineers to focus on the business problem at hand, rather than on the underlying infrastructure. It's this consistency in how we write software that makes the investment in tooling compound. You may not be able to get onto a single programming language or even a single repository, but implementing concepts like developer portals, static analysis, metrics, and tracing can be universal improvements for your organization.

Organizational Buy-in

There's always something on fire or a product feature that needed to be released yesterday. Time spent on tooling may not be actively visible. One trick I found is to treat improvements in tooling as product releases, build excitement and energy, encourage people to try the new things and even contribute if they find bugs or issues. Do things that product managers do. Invest in developer research and data insights to help justify the investment. Keep the developer experience at the forefront and make your tools delightful.

Excel Spreadsheet as an Application Catalog

Someone did talk about having an Excel spreadsheet as an application catalog. When you're starting off pretty small, or if you've got a fixed set of services that you can count on two hands, that is a really easy way to get started. Some amount of documentation is better than no documentation. Finding out by surprise during an incident that a particular service or a service dependency exists is never nice. Even having that written down is always a good first step. We found Backstage to be really good when you get into the tens to hundreds stage of microservices. We're nearing the 1000-plus mark with the number of services that we have, and we've found Backstage to be really good at that scale as well. It's a fairly scalable and extensible platform. It's built for much more than cataloging services, so we know of people who are using it for machine learning models and things like that. That is the vision that Spotify, who originally open sourced it, had promoted, and we really want to explore that a little bit further.

Questions and Answers

Synodinos: Also, regarding the shared functionality, are you sharing it as a library, as a service? Who is the owner of it?

Patel: I think it's a mixture of both. There's a component that gets embedded within the service binary itself. For example, we might embed a client library, and that might be at different levels of functionality. That client library might just be a thin API stub, defined in protobuf, and going over an RPC boundary, or it could be a bit more complex. For example, if you're connecting to a Kafka cluster, or something like that, we might embed a full Kafka library connecting directly into that cluster. So it's a bit of a mix. What we have is that each of our libraries, and the services that are backing them, are owned by a particular team. It depends on the team managing that level of infrastructure. For example, our Cassandra libraries, or our database libraries, are handled by the team that manages that set of infrastructure, the group of folks who are managing the stateful layer, as we like to call it internally. Whereas we have a team that is focused on observability and deployment tooling that is managing the libraries associated with that. They also own the services that are backing that as well. In some cases, they own the infrastructure as well.

Synodinos: You guys are structured in the Spotify model: tribes, squads?

Patel: Yes. It's quite a similar model, where we have squads that are managing a centralized set of infrastructure, so that everyone can benefit and we don't have different teams reinventing the wheel.

Synodinos: How do you balance eager development of helpful access tools with privacy concerns in the domain of banking? How much of this is done by policy versus by tooling enforcement?

Patel: It's a really good concern. It depends on having a very strict layer when it comes to accessing data. A lot of data is being ferried internally and tokenized from the moment that it comes in. Within the systems that hold customer level data, banking level data, or transaction level data, we have a secure definition that access needs to come with particular authorization policies and things like that. We scope the number of people who have access to it. Where it becomes interesting is that there's a whole category of data that doesn't need to fall under such strict rules. For example, if you want to figure out how many transactions there were yesterday at McDonald's, at that point it's just aggregated data, and anyone can look at that within the company. There's a balance between being internally transparent and having access to data to do a better job, versus making sure that you respect customer privacy. The two don't need to be at odds. You can encode both things in tooling. For example, being able to tag a particular request or a particular set of fields as sensitive means the tooling can enforce the fact that not everyone should have access to it. Whereas being able to say, ok, this is aggregated data, or this is data that discloses nothing private, makes it easy for everyone to have access, gain visibility, and make better decisions. The two don't need to be at odds.

Synodinos: Did you have to customize Backstage at all to fit it in with your processes?

Patel: Yes, we did. There was an aspect where we didn't have a very strong definition ourselves, the whole idea of a service catalog, and Backstage was almost a forcing function for us. All of the metadata was spread across multiple different systems. We had dashboards in our monitoring system, we had the mapping between services and the teams that own them in CODEOWNERS, and we had a different system for looking up how a team could be contacted, and how they could be alerted if a particular thing was going wrong. We had a few integrations in the backend to map all of that together, but it wasn't in one central place. The fact that we wanted to make this all visible in Backstage meant it was a forcing function to create a metadata catalog, one system to rule them all, as they say, a way to be able to aggregate all of this data.

Also, the most important part was having a way to make sure that it's consistent across all these different systems. For example, sometimes the data is going to be stored in a variety of different systems. We've got configuration for our alerting system. You've got some configuration in PagerDuty, for example, for paging people. You've got configuration within GitHub CODEOWNERS for enforcement at a code level. You want to make sure that when you have like a catalog that is aggregating all of this data, all the different sources agree that the catalog is correct. All the different sources and sinks of data are also correct, because if there is a disagreement, you're going to end up with inconsistency. The moment you end up with inconsistency, rectifying that becomes infinitely harder.

It can actually lead to bad outcomes. For example, a page being routed to a team that doesn't exist, which might mean it falls through the cracks, or we might have a significant delay in addressing that page, because it's been escalated to a team that doesn't know how to handle that system right from the get-go. That delayed reaction time can lead to bad consequences. Having that enforced from the get-go was something that we strongly invested in. Once we had the software catalog, building the system that could feed it into Backstage was relatively easy, because we actually built the catalog itself to model the attributes that Backstage promotes, around having components and services. We added a few things like tiers, and also the excellent score. Those are the places where we added Backstage plugins and things like that. The customizability of Backstage is quite a nice selling point. You've got a very nice plugin system, and you can write plugins in Node.js. It's fairly easy to get up and running.

Synodinos: Staying on the topic of Backstage or probably more generally about tooling, you're making a big investment in various tools. How do you deal with deprecated tooling? What if Backstage were to stop active development, would you lock in the final version, maintain it internally, migrate your data to the next new tool?

Patel: I think it depends on the tool of choice. One of the conscious decisions that we make as part of our strategy is to invest in tools that are open source, so that if we do want to make further modifications, we can. Again, that has to be a conscious choice. We do look for active signs, for example tools being actively maintained by an organization or a rich community, and we actually get involved in that community. With tools like Backstage, it is part of the CNCF, which means a lot of different members getting involved. A lot of folks have gotten invested in the Backstage tooling, and there are now spinoff companies giving you a hosted version of Backstage. There is a rich ecosystem, and that's something that we actively look out for. When you make an investment in a tool, you do make a conscious decision to have to maintain it or lock it in. I don't even think it's a question of if, it's a matter of when. When the tool has served its purpose, or the folks move on, or there is a better way of going about it, you have to make a conscious choice to support it or move on.

One benefit for us is we don't lock in the data. The data within Backstage is quite ephemeral. It's not the destination for any concrete amount of data. Most of the sources of data come from systems that we built internally or, for example, things that we've demonstrated, like the software catalog, so reintegrating those into a different system for visibility becomes a little bit easier. Being locked into a particular scheme of a particular tool is where it gets a little bit complicated because now you got to have a way to mutate or transform your data into a different tool, which might not support all of the same functionality.

Synodinos: It sounds like your teams are structured so that product teams don't need to own the entire DevOps process, but instead rely on many separate teams to cover all capabilities. Is this correct? If so, have you found issues with team dependencies slowing you down?

Patel: The structure that Chris is assuming is correct. We have a centralized team that manages the set of infrastructure. There's a phrase here which might actually be quite poignant: slow down to speed up. Yes, there are cases where, for example, a team could have moved faster if they had full control of the infrastructure, or if they could make their own infrastructure choices. By building on centralized tooling, there's a lot that is abstracted away from them. Let me give a particular example. If every team rolled their own infrastructure, that means each team has to continue to maintain and take that infrastructure through audits, and make sure that it's fully security conscious and everything like that. It's a lot of additional overhead that they'd have to take on as part of their responsibility. They've been able to amortize this responsibility because we have a centralized team, and that means we have a centralized group of people who are doing the risk assessments and speaking with the regulator, and staying on top of all the various things that we need to do.

For example, when we have a CVE come in, we can have a team that does patching on one set of infrastructure that everyone is aware of. We have a defined contract on how we would roll that infrastructure. When, for example, Log4j happened, the Log4j vulnerability within Java, we were able to enumerate all of the Java applications that we have running within our platform, stuff that we run and stuff that is hosted by third parties, and have a centralized team dedicated to incident response and patching of those systems, because there was a deep understanding and a deep contract within one centralized platform. That expedited our patching process. That applies to many other types of vulnerabilities as well. There are aspects where teams could potentially move faster, but having a centralized platform means that the whole organization moves faster as a result, which is something that we are prioritizing for. We're not prioritizing local team efficiency, rather, organizational efficiency.

Synodinos: How do you choose the prebuilt tooling you add to the Monzo ecosystem? Do you have a test protocol of the tool choice before it is generalized to all teams?

Patel: Yes and no. We're not too rigid about what tools we choose. We encourage folks to experiment, because the tooling ecosystem is forever changing; there are always new tools coming on board and new systems being developed. You've got to have some amount of freedom to be able to experiment. I think where we strike the balance is, we provide a rich staging environment, which is, apart from the data, representative of production. For example, we have the same infrastructure components. We have the same tools being used. We have the same software being deployed. It's just at a smaller scale and not with production data, which provides a rich sandbox to experiment in.

Once the experimentation has yielded results, folks are happy with the results, and they want to spread it out further, to the point where it's actually becoming production ready, or we want to make a conscious, significant investment in that particular set of tooling, or even in an infrastructure choice or a particular design, that's the point where we step back and take something through either an architectural review, if it's going to be a large sweeping change, or write a proposal. There's a rich history of proposals written down, which are almost like decision logs or architectural decision records, which document why we've chosen a particular tool and what alternatives we may have looked at. There could be conscious reasons why we've chosen a particular tool over another. For example, we may pick a tool from one of the vendors that we have already onboarded, like a particular tool that is provided by AWS, because it fits nicely within our hosted ecosystem, rather than a potentially more optimal tool provided by Microsoft, which we may not choose because we're not on Azure at the moment. Everything is a conscious, balanced decision.

All the choices we make, folks need to maintain. Something that we actively look out for when we are reviewing these decision logs is, is this something that we can feasibly maintain for the long run? What is the reversibility cost? For example, if we wanted to change the tool, or change the architecture in the future, is it going to require significant investment, both from a technological point of view but also from a human point of view? If we're encoding, for example, a particular practice with all of our engineers, getting engineers to move on to a new thing might be quite a substantial leap. If we define good interfaces and abstract away the thing that is backing them, we can change the backing implementation really easily, potentially under the covers, and engineers have a much easier experience.

Synodinos: Christopher has shared with us that you have been open sourcing some of your libraries, and you demoed a bunch of tools, the PAN CLI, the shipper CLI. What type of open source contribution does Monzo have?

Patel: We have open sourced a lot of our RPC based tooling, and also the things that we use for logging and some of our libraries around Cassandra, and stuff like that as well. Definitely check out our open source repos at github.com/monzo if you're interested. We also have quite a lot of blog posts about how we do things internally. There's some stuff that we can't open source, unfortunately, because it's embedded quite deeply into how we write our services and systems. Those things can't be disentangled from the rest of our platform, but we document them really heavily and we put them out in the open. We do have a blog if you want to see some of our architectural designs.

 


Recorded at: Sep 14, 2022
