Monzo recently redesigned its data warehouse to support more than 100 teams working on over 12,000 dbt models. Introducing a so-called "meshy" approach, Monzo cut warehouse costs by about 40% and improved data landing times by about 25% in some domains.
In the last year, Monzo, a UK-based digital bank, rebuilt its data platform around defined modeling layers, explicitly declared interface models for cross-team data dependencies, and CI-enforced validation of structure, naming, and access patterns. The migration covered thousands of dbt models and introduced hundreds of governed interfaces, reducing redundant queries and recomputation, improving data landing times, and reversing warehouse cost growth.
Each team owns and maintains its own data models, and Monzo supports this distributed ownership through automated guardrails and shared tooling. Antonia Badarau, Irina Mugford, and Massimo Frangiamore, analytics engineers at Monzo, explain the challenge:
At Monzo, over 100 independent, empowered teams contribute to our data warehouse of 12,000+ dbt models. The health of data is owned across all these teams. That kind of distributed ownership is powerful, but it's also hard to get right at scale. Additionally, as AI-assisted coding becomes the norm and everyone can contribute to production dbt projects, the question becomes: how do we make sure the outputs are still performant, consistent, and high quality?
dbt models are SQL queries that transform raw data into structured datasets, designed as modular, reusable components for building and maintaining data pipelines. Monzo defined three principles for its data architecture: enforce clear standards, formalize data sharing through explicit interfaces, and rely on automation and CI checks, rather than manual review, to ensure quality.
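For readers unfamiliar with dbt, the sketch below shows what a minimal model looks like; the table and column names are hypothetical and are not taken from Monzo's warehouse. dbt compiles the ref() call to the upstream model's physical location and uses it to build the dependency graph between models.

```sql
-- models/account_transactions.sql
-- A minimal illustrative dbt model (hypothetical names, not Monzo's schema).
-- The ref() macro points at another model and records the dependency.
select
    transaction_id,
    account_id,
    amount,
    created_at
from {{ ref('raw_transactions') }}
where amount is not null
```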
The bank structures its data models into four layers: automated landing models that flatten raw events, generated normalized models that represent entities with full history, logical models where business logic combines entities, and presentation models tailored for specific downstream uses.

Source: Monzo blog
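A rough sketch of how the layers might reference one another in dbt is shown below. The layer prefixes and model names are illustrative assumptions; the article does not specify Monzo's naming convention.

```sql
-- models/presentation/presentation_daily_card_spend.sql
-- Hypothetical presentation-layer model built on top of a logical-layer model.
-- Prefixes (landing_, normalized_, logical_, presentation_) are made up for
-- illustration of the four-layer structure described above.
select
    spend_date,
    card_type,
    sum(amount_gbp) as total_spend_gbp
from {{ ref('logical_card_transactions') }}  -- logical layer: business logic over entities
group by spend_date, card_type
```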
The team enforces consistency through Modelgen, a command-line tool that generates SQL and YAML models from an object definition, and through CI-backed data standards that validate structure, conventions, and best practices. Luke Briscoe, Engineering Director at Monzo Bank, writes:
Scaling data in any fast-growing organisation isn't easy, never mind a bank (...) I'm not aware of many companies that run tooling like this (or at least that publicly talk about it!)
Mateusz Ulas, founder of Expeditious Software, comments:
Treating data interfaces as first-class code is still weirdly rare. Most places I see rely on docs and hope for the best. Wiring standards into CI is what actually lands the improvement.
According to the team, clear data layers, stable interfaces between datasets, and automated checks in CI keep the system consistent, allowing teams to work independently while reducing warehouse costs and processing time.
Monzo enforces data quality and consistency by requiring each model to define a unique key, include freshness tests, run incrementally by default, declare an owning team, provide documentation, and follow strict naming and metadata conventions validated in CI.

Source: Monzo blog
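The sketch below illustrates, in standard dbt syntax, the kinds of requirements such checks can enforce on a model: an incremental materialization, a declared unique key, and an owning team recorded in metadata. The model name, column names, and owner value are assumptions for illustration, not Monzo's actual code; freshness tests and documentation would typically live in the model's accompanying YAML file, which is not shown.

```sql
-- models/normalized/normalized_account_transactions.sql (hypothetical)
{{
    config(
        materialized='incremental',           -- incremental by default
        unique_key='transaction_id',          -- unique key declared per model
        meta={'owner': 'payments-data-team'}  -- owning team recorded in metadata
    )
}}

select
    transaction_id,
    account_id,
    amount,
    created_at
from {{ ref('landing_transactions') }}

{% if is_incremental() %}
  -- only process rows newer than what is already in the target table
  where created_at > (select max(created_at) from {{ this }})
{% endif %}
```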
Badarau, Mugford, and Frangiamore add:
We are currently about 30% through a company-wide migration to using these approaches and systems, with a long road ahead of us. Initial results have been encouraging. We’ve seen ~40% cost reduction and ~25% faster landing times in some domains - but it’s early days still.
In a separate article, the engineering team at Monzo describes how it uses multi-task neural networks to learn shared representations of fraud patterns, improving detection of rare and previously unseen behaviors beyond what traditional models can detect. At this year's QCon London, Suhail Patel showed how Monzo has built a developer platform capable of shipping hundreds of changes to production every day.