Habito: The Purely Functional Mortgage Broker


Summary

Will Jones talks about how Habito, the leading digital mortgage broker, benefited from using Haskell, some of the wins and trade-offs that have brought it to where it is today and where it's going next. He also talks about why functional programming is beneficial for large projects, and how it helps especially with migrating the data store.

Bio

Will Jones is a polyglot software engineer and passionate teacher with over eight years' experience building applications, creating products and educating other developers and computer scientists. His passions are deeply embedded in the technologies he is using in his current role as VP Engineering at Habito, such as Haskell, PureScript and event sourcing/CQRS.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Jones: I'm going to talk a bit about Habito: who we are, what we do, and, the reason we're all here, how we do it. So, obligatory background.

What is Habito? We work in the mortgage space in the UK. Currently in the UK, if you want to take out a mortgage, you've got a couple of options. One option is to go directly to your High Street bank, or your digital online bank these days, and see if they will lend you the money to buy a house. The other option, which about 70% of the market take at this point, is to go through an intermediary or a broker, who will look at the whole market, advise you on what might be appropriate for your circumstances and help you secure a mortgage. That's what Habito is: a digital, online mortgage brokerage.

Facts and Figures

Getting a mortgage in the UK is, for most people, incredibly stressful. It can be a really long, lengthy process: best case, you might be looking at a couple of weeks, and it's not uncommon for it to stretch into a couple of months. It's compounded by the fact that this is part of an already very stressful process, which is getting a home. Especially if it's your first home, you're having to enter into a massive financial transaction as well as uproot all your possessions and find a place to live. It's really stressful, so we're trying to make that better.

We're completely free; we get paid a commission by the lenders when we help people broker applications. To date, we've brokered over a billion pounds of applications, and we'll have been live for three years this March. We've grown quite fast: we have about 140 people in total now, of which 40 to 45 are engineers. We're at the scale now where we're starting to split out into cross-functional teams, and that's how we operate. So that's a bit about us.

Old Wounds

I've been at Habito since the beginning. When I came into Habito, I brought with me a load of biases and prejudices. In previous roles, there were things that I hadn't been entirely happy with, working with technology, and I really didn't want to run up against those barriers again, so there are a couple of things I'm going to talk about today. I've really been frustrated in the past with not having a clear universal language. You might enter into a discussion across a few functions, like product and design, and everyone's using the same words, but they're not necessarily talking about the same things. Does an application mean something has been submitted? Does it mean the user has signed up, or that they've started filling out data? From the beginning, I really wanted to avoid that, because it's really painful: subtle logic errors, and people thinking something's been released when it actually hasn't, really screw with your definition of done. So I was really keen to get rid of that.

Another thing that bothered me is coupled inheritance hierarchies. In a previous life, I worked at a company that was heavily internationalized. The way they tried to solve the problem of being able to deploy in different geographies quickly was to have a general notion of what the business looked like at a high level, and then inherit from it at lower levels. A product in one country is just a special version of the abstract product, and if you want to go into a new country, you just derive or inherit, add the new special features, and you're off. The problem we encountered was that often there tend to be more differences than similarities, and so before you know it, what you've ended up with is a number of disparate systems that happen to be coupled together by inheritance, and it was quite painful.

That was a challenge to deploying new software, but what about when it's actually running? Complex runtime state is definitely something I've been bitten by in the past. How do you debug software and ensure that it lives a long and healthy life? The old adage is that if you write software at your maximum cleverness, then you can't possibly debug it, because debugging something requires twice as much understanding as writing it. That was something I was definitely keen to get rid of.

The bane of every aspiring software developer's life is boilerplate code: code that doesn't necessarily add value but has to be written. Early on in Habito's life, I was keen to spend as much time as possible getting the product to MVP and beyond, making sure as much as possible that we were spending company time, money and people on solving customer problems, and not on writing JSON deserializers or boring text templating engines. Those are some of the things that I was keen not to cut myself on this time.

New Beginnings

In the face of all that, what did I try to bring in to counter those things? In terms of getting a universal language, I think you really have to build it into your code base and your architecture from day one. I was really keen to build a really data-driven domain model into Habito, both in its code base and its engineering practice, but also in the culture of the company as a whole. I'm a big believer in composition over inheritance. Obviously, there are good use cases for both, but I was quite keen to explore what happens when I lean a bit more heavily on the composition side: rather than using inheritance to build up larger chunks from big hierarchies, just taking smaller building blocks and gluing them together.

Immutability by default I'll touch on quite a bit later on. I think a lot of the complexity in debugging software and managing runtime state comes from the fact that state changes. It's true that we're building more and more complicated and, frankly, amazing tools to help manage those problems, but immutability is potentially one way you can just sidestep them altogether. And of course, boilerplate code generation: specifically, what power can I get from code generation if I'm willing to put a bit more effort in upfront in specifying the problem I want to solve?

Haskell

If you're not familiar with Haskell, it's a general-purpose programming language. It's what we call a purely functional language, which means that instead of writing down the steps of what you want done, as in Java or C#, you're more describing how things work, like you would in mathematics: the square of x is x times x, that kind of thing. It's strongly and statically typed. As in languages like Java and C#, you can't assign an int to a string, but it goes quite a way beyond that kind of static typing. I'll touch on some of that later, but it's super powerful.

One peculiarity that really separates it from a lot of other languages is that it's non-strict. In a strict language, which is most of the conventional ones out there, like Java or C#, if you pass arguments to a function, those arguments will be evaluated before the function is called. In Haskell, this is not necessarily the case: if those arguments are used in the branches of an if statement, and the if statement evaluates to true, the false branch might never be evaluated. So it's non-strict; things are only evaluated when they're needed. This can be really valuable and also really painful, so I'll talk a bit about the pain later on. It's an [inaudible 00:07:09], but it's not all crazy. Haskell is a compiled language: you can create binaries like you can in Rust, or like your fat JARs, and we deploy them in Docker containers into AWS. There's plenty of vanilla stuff underneath, and it works with all the classic up-and-coming infrastructure of the age, so it's definitely production grade in that regard.

Domain Modelling

Let's start with domain modelling. Here's a statement that I think is pretty common fare at a mortgage brokerage; it might come from a product owner, or it might just be a conversation that's happening. Let's say a transaction is either a purchase or a remortgage. If it's a purchase, then there'll be a deposit and some property value, and if it's a remortgage, then there'll be a remaining balance that needs to be paid off, a monthly repayment that's being used to pay it off, and also some property value.

How might this look in Haskell? I'll start with the first part, a transaction. I've shortened the names to fit on the slide here, but that's just aesthetic. For instance, here, I'm saying a transaction is either a purchase or a remortgage, so I can quite literally do that in Haskell, I can use the data keyword to introduce a new data type and this data type has two possible constructors, either purchase or remortgage. The pipe works like the logical OR pipe you're familiar with, I assume, so it's either this or this. If it's a purchase, then it has some sub-parts, which is a purchase transaction, and if it's a remortgage, it has a remortgage transaction.

Those sub-parts I can define similarly: a purchase transaction has a deposit and a property value. This is just like a struct or a record, and the fields have types of their own; you can read the double colon as "has the type". Deposit has the type GBP, Great British Pounds, and prop value has the same type. Similarly for remortgage; nothing super interesting there. It has a balance, some current monthly payment, and a property value. This is Data Types 101, but it's something that, to this day, I just find really attractive about Haskell: I don't really have to say much more than I want to. I could model this in an OO language, but I'd probably have to set up either a case class or a shallow inheritance hierarchy. Here, I can just say what I mean. Creating values of these types is quite straightforward as well: if I want to create txn1 as a purchase transaction, I say, "Well, it's a purchase transaction, and here are the sub-fields." I believe that gives me a pretty accurate way of saying what I mean and what I want. But like I said, this is just a data type, so let's make it a bit richer.
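
To make that concrete, here is a minimal sketch of what the slide code probably looked like; the exact field names and the GBP representation are my guesses, not Habito's actual code.

    {-# LANGUAGE DuplicateRecordFields #-}

    -- A money type; the real representation is a guess.
    newtype GBP = GBP Rational
      deriving (Eq, Show)

    -- "A transaction is either a purchase or a remortgage."
    data Transaction
      = Purchase PurchaseTransaction
      | Remortgage RemortgageTransaction

    -- "A purchase has a deposit and a property value."
    data PurchaseTransaction = PurchaseTransaction
      { deposit :: GBP
      , propVal :: GBP
      }

    -- "A remortgage has a balance, a monthly payment and a property value."
    data RemortgageTransaction = RemortgageTransaction
      { balance        :: GBP
      , currentMonthly :: GBP
      , propVal        :: GBP
      }

    -- Creating a value is just naming the constructor and its sub-fields.
    txn1 :: Transaction
    txn1 = Purchase PurchaseTransaction
      { deposit = GBP 30000
      , propVal = GBP 300000
      }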

Let's say I'm defining a credit policy, so I'm working out whether or not someone is creditworthy enough to lend to on a mortgage application. I might say, "Here's a rule that applies to every applicant in a mortgage application: if they're a buy-to-let customer and they're retired, or will have entered retirement before the mortgage is up, they're not eligible for a mortgage." This is a rule I'm planning to execute; it's going to be a function, and I'm going to define the type of this function. This is a bit crazy, but I can be quite flexible. It's not really important what this exactly does; the important thing is that I get quite a rich language for defining, in a type, that this applies to applicants, and that it also requires the current date, because I'm going to work out the applicant's age in order to work out whether they're retired. But that's not super important.

Where it gets cool is in the next part. I can say, "Buy-to-let customers are not eligible for a mortgage if..." I'm expressing this by saying, "Given some parameter of the transaction, the scenario is buy-to-let" - it might be residential; in this case, it's buy-to-let - then "reject if", and I just drop in my conditions. I'm going to say, "Reject them if their employment type is retired, or if their derived age at the end of the term is greater than their retirement age." I've taken a lot of liberties here to make this look like the domain, but my argument is that if you've got a language that's flexible enough to do that, you should, because there's an argument to be made that all your business logic problems are effectively solved by working out what the domain is and then expressing the solutions in that domain, rather than working with a blunter set of instruments.
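
Habito's actual combinators (given, reject if, derive) are internal and not shown in the transcript, so the following is a toy reconstruction of just the decision logic the rule encodes, with invented types and names.

    -- Toy sketch only: the real DSL builds declarative rule *values* that can
    -- be interpreted in different ways (run, logged, audited); this plain
    -- predicate just shows the logic. All names here are invented.
    data Scenario = Residential | BuyToLet deriving (Eq, Show)
    data EmploymentType = Employed | SelfEmployed | Retired deriving (Eq, Show)

    data Applicant = Applicant
      { employmentType :: EmploymentType
      , ageAtEndOfTerm :: Int
      , retirementAge  :: Int
      }

    -- Reject buy-to-let applicants who are retired, or who will have
    -- retired before the mortgage term ends.
    shouldReject :: Scenario -> Applicant -> Bool
    shouldReject scenario a =
      scenario == BuyToLet
        && (  employmentType a == Retired
           || ageAtEndOfTerm a > retirementAge a )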

Simpler Building Blocks

To be honest, this touches on my second point: we've actually built a domain-specific language, and then expressed the problem in that. Effectively, I've created a set of primitives, like given, reject if, and derive, that let me express programs that define credit policies. That's really cool, because credit policy is something you really want to get right. You can create large amounts of automated tests, and you're running the policy hundreds of times a day on dummy data, but a big part of getting things correct is also specification. You can find the language in which your product team can express the problem, and then implement it in a language that's very close to that.

Don't get me wrong, we don't have a [inaudible 00:12:07] where some product owner just bashes out this Haskell, but we do have an intermediate step, where they might use a spreadsheet with a rigid set of columns to define the things they want, and we can translate each verb or term in that spreadsheet into a primitive of this language. That gives us a very high degree of confidence that things are going to work. The beautiful thing about this, tying into my second point, is that this is just a big function composition: I haven't written a compiler or a type checker to do this, I've just expressed it using Haskell functions.

I've got some tricks up my sleeve to do this. I can define custom operators in Haskell, a bit like in C++; they're just normal functions that happen to look like symbols. I can also use some fancy types to get things to happen the way I want: these two strings are not actually strings that appear at runtime, they are strings that are visible at compile time to the compiler, so the compiler can use them for things like code generation. That means this DSL, this domain-specific language, can express a lot without me having to write much at all. The bonus feature is that this program is expressed declaratively. I haven't actually written down, "to evaluate this credit policy, load up the customer's age, find out the transaction scenario, and if the transaction scenario equals buy-to-let, reject." I've instead declaratively specified that this is the credit policy, and this is how it should be thought of.

There's a function that takes this and runs it on a customer, and says, "Yes, they're creditworthy," but I don't have to stop there; there are other things I can do. One thing I could do, when I run the code or analyze it, is generate a log of how it was run. I don't have to do that by instrumenting some imperative program with log statements; I can interpret the domain-specific language in a different way, so that when I run it, it prints out a log: "The scenario was buy-to-let, the primary applicant's date of birth was this, and the result was a reject." That's really handy for auditing purposes, but it's also really powerful for debugging, or working out why stuff isn't working the way you want.

Immutability Everywhere

On to my third point, immutability. With everything I'm saying in this talk, there's a trade-off. Mutability has lots of pros - the most oft-cited one, I think, is definitely performance, and it makes sense as well; in an [inaudible 00:14:46] setting, it's a really natural thing to be able to do - but hopefully I can convince you that there are reasons you might want to think about having immutability everywhere. In Haskell, immutability is the default: if you update a value, you're not ever mutating that value, you're actually yielding a new value. That's really powerful. In our system, people are entering vast amounts of data through, say, an API layer. That data is all user input, so it's not sanitized, and it's super important for us to validate it, both from the standpoint of, "Does it make sense? Is your salary actually a number?" but also, "Is it within an acceptable range?" So you might say, "The maximum salary we're willing to consider is £10 million."

Immutability gives you a further guarantee that once you've done that validation, you don't have to worry about it ever again. Once you've passed the API boundary, you can think of that value as being valid for the rest of the lifetime of the program. There's no way someone can sneakily say, "The value is now £10 million plus one," mid-execution, and violate a huge swathe of preconditions in the program that follows. With immutability, you just know: if X equals blah, then it'll be blah for the rest of the program.
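
A tiny illustration of that guarantee (names invented for this sketch): a record "update" in Haskell produces a new value and leaves the validated original untouched.

    -- A hypothetical post-validation value; "updating" it yields a new value.
    data Application = Application { salary :: Int } deriving Show

    main :: IO ()
    main = do
      let validated = Application { salary = 50000 }  -- checked at the API boundary
          amended   = validated { salary = 60000 }    -- a *new* value, not a mutation
      print validated  -- still Application {salary = 50000}
      print amended    -- Application {salary = 60000}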

Another oft-cited one is parallelism and concurrency. If you have multiple threads acting on shared memory, you can't have any write race conditions if all the values in that memory are immutable. You still have problems to solve, for sure, about how you coordinate between threads, but at least you're not going to fall into the lost update problem, and that's really handy. What I'm generally saying is that it's easier to reason about. If you're trying to step through a program, either in your head or at runtime with a debugger, where all the state is immutable by default, then you don't have to hold in your head, "Well, what happened to that variable on this line?" because the word "variable" simply doesn't exist. There are only values, and, "What is the value of blah?" is the only question you have to ask.

For those asking what the catch is: Haskell does allow mutability. If you want the raw performance, there are ways to do mutation, but Haskell provides them to you in a way that means you either have to acknowledge that it's unsafe, or it gives you safe ways to talk about mutable structures. In general, you can do a lot with immutability in Haskell. The point I'm going to make now is that immutability isn't something that's specific to a programming language. It's just a concept that, as the paper famously said, changes everything. So why stop there? Let's say you've got your database. Here's a standard relational model that you'd probably expect to find in, I imagine, over 50% of application data stores. Let's say I've got some accounts, and an account has some authorization credentials, like an email, a hashed password, when it was created and whether or not the email's been verified; and I've got some profiles which store customers' personal information. I'm going to play the classic trick of linking them together using a foreign key in the profile table to the account table. The key is [inaudible 00:17:49].

Let's say a customer wants to reset their password. They go to habito.com/resetpassword, they type in their email, they click a button, and we send them an email. They click the link in that email and they reset their password, and that does two things. It resets the password, but also, because we sent them that email, we now know they own that email address, so if it wasn't verified already, we can now be sure it is. The database after that password reset might look like this: there's a new hash in there, and the verified field has gone from false to true.

That's all well and good - for this use case, that's probably fine - but one thing has been lost here: I have no record of what was there before. I couldn't ask, "When did that verification occur?" or, "What was the user's old password?" if I wanted to track password reuse and things like that. There are natural ways to solve this problem: let's just add a verified-at column, and maybe keep a separate record of passwords that have been used. But what I'm saying is that, in this model, this is mutation. You're fundamentally accepting the fact that data is being overwritten, mutated, and that you're prepared to lose old versions. But it doesn't have to be that way.

Event Sourcing

As Greg's spoiler alerted, you could event source your data instead, which means that instead of recording the current state of the system, you record all the events that have happened since the conception of the system. If you want to work out what the state is now, you can just replay those events in sequence. The set of events I just talked about might be represented as: there's an account-created event that says it was created with this email and password, there's a password-changed event, and there's a verification event. Each of these events belongs to the same - we call it - aggregate, because you can aggregate the events to get some whole. They belong to my account in this case, so they all have my account ID, but they each correspond to a specific version of that aggregate. It's a bit like a commit log; you could probably implement this in Git if you wanted to, though I wouldn't necessarily recommend it. This log can be thousands, millions of events long, and have loads of different aggregates in it, but the key is, it just contains history. It's immutable and append-only: in the CRUD model, you just lose U and D, there is only creating and reading.

Yes, it's super, super powerful, and this is a really common pattern, especially in finance, because it gives you an audit trail of what happened, but also because there are plenty of applications where the history's more important than the now. Your bank account is the classic poster child for event sourcing: you don't log into your bank and expect them to tell you, "You have this much money, but I have no idea how it got to be that way." What you really expect is a transaction history, and yes, they give you a current balance, but that's an optimization. You could just work it out yourself, by replaying all those transactions since you opened your account. However, that would be really inefficient; I'll come on to that in a sec.

Event sourcing is taking immutability and stepping outside your language. I'm being a bit zealous up on this stage about Haskell, but you don't have to use Haskell to take advantage of event sourcing; I think it's definitely worth considering whatever language you're using. However, if you do pick Haskell or a functional language to use it with, they make a really nice combination. All that domain modelling capability I was talking about earlier, you can apply in the same setting here: an account is created, and then the password may be changed, and the email will be verified. I can define this quite literally in Haskell. I can say, "An account event, which happens to accounts, is either a created event, a password change or a verified event, and each of these has different fields," and so on. It maps quite naturally.
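
Sketched as a data type (the constructor and field choices here are mine, not the slide's):

    -- One event type per aggregate; each constructor is one kind of event.
    data AccountEvent
      = Created String String    -- email address, initial password hash
      | PasswordChanged String   -- new password hash
      | EmailVerified
      deriving Show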

The idea of aggregating events is something that I can also map quite naturally into Haskell. Here's a slightly more complicated type: this is the Maybe type, and it's parametrized. If you're familiar with Optional, or that concept in other languages, this is what Haskell calls Maybe. A value of type Maybe A is either Nothing or Just some A; if you're familiar with Optional, it's None or Some. I can define the functions that fold events - that aggregate them into these wholes - quite naturally using these concepts. If you want to update an account, in your hand you might have an account, like Just Will, or you might have Nothing, because the account hasn't been created yet. I'll give you one event, and you'll give me what the account looks like after applying that event. If the event is a password change, then you have some Just account in your hand, and you can update the account to have that new password in it. If you had Nothing in your hand and there was a created event, you would return the new account.

The nice thing is, I can express my logic simply. If you're familiar with front-end patterns like a React or Redux reducer, it's the same idea: I can express this function that just says, "Here's how you take an account and update it with one event." I don't then have to write a load of logic to work out how to take a stream of events and collapse them into an account, because, in a functional language, this is a fairly common paradigm called a reduce or a fold. There's a function, foldl - the "l" is for left - that does this for me. It basically says, "If you give me a list of things and a function F, I'll fold F into those things for you." In the case of update account over some list of events: update Nothing with the first event, then fold update account in with the second event, and then fold it in for the third event. You're just folding all these events in to get the new aggregate.
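
Using the AccountEvent type sketched above, the update function and the fold might look roughly like this; foldl' is the strict left fold from Data.List (the talk's foldl), and the Account fields are my guesses.

    import Data.List (foldl')

    data Account = Account
      { acctEmail    :: String
      , acctPassword :: String
      , acctVerified :: Bool
      } deriving Show

    -- Apply one event to an account that may not have been created yet.
    updateAccount :: Maybe Account -> AccountEvent -> Maybe Account
    updateAccount Nothing     (Created e p)       = Just (Account e p False)
    updateAccount (Just acct) (PasswordChanged p) = Just acct { acctPassword = p }
    updateAccount (Just acct) EmailVerified       = Just acct { acctVerified = True }
    updateAccount acct        _                   = acct  -- ignore events that don't apply

    -- Replay a whole stream of events to recover the current state.
    replay :: [AccountEvent] -> Maybe Account
    replay = foldl' updateAccount Nothing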

You don't have to use Haskell to use event sourcing; there are plenty of great libraries out there. Things that spring to mind: Axon, or [inaudible 00:23:48] Streams, have this vibe. If you're using a language that has these functional concepts built in, they tend to map together really nicely.

Asking Questions

As I said earlier, you don't want to log into your bank account and replay the four million transactions since you opened it in order to work out whether you have any money or not. This is the classic challenge of event sourcing: the immutable log is great from an auditor's point of view, but it's not great if you want to ask specific questions. When I come to the site and log in, I don't want to have to replay every password change I've ever made in order to work out what my password currently is, so I can check whether or not the one I've been given is correct. That's something the mutation model has going for it: that table is designed for answering those kinds of queries, because you just ask, "What's the password currently?", do the lookup, and we're off to the races.

The nice thing is that the event log is a very rich dataset; so rich, in fact, that I can actually derive that table from it. That's what I'm proposing with this arrow: let's have our cake and eat it. What you end up doing is having two models. We call the event log the write model, because it's the source of truth; it's where things get written to. But for any question we want to ask, like, "Can I log this user in with this password?" or, "What is the user's current bank balance?", I can project out a read model designed for answering that query. I've created a distributed system here, so there's an eventual consistency problem. If you can get over that, it's very much worth paying, because now you can very flexibly target certain queries with read models designed to address their performance needs or the kinds of data they need. And nothing in the read model has to be migrated, because you can just blow it away and recreate it from the write model. If I decide I need a new column, verified_at, I'll just project the timestamp of the verified event into it, rebuild the whole table, and no one needs to know.

You might be wondering where you put an event store. You can do it in Postgres, using the JSON type, and we do; it looks like a normal Postgres table. But that doesn't have to be true. The point of the read model is that it's just a process, spotting events and projecting them into some model designed for answering questions. That model doesn't have to be in the same database, it doesn't have to be on the same server, and it doesn't have to be relational. What if I want to do full-text indexing of users? I could project events into a document storage engine, or Elasticsearch, or something like that; it's totally fine.

There is an eventual consistency concern, because I might change my password here, and before this projection makes the update, I would be able to observe the old password here and the new one here. You can get around that. Most people make the argument that eventual consistency is already there in your system; a bit like the continuous deployment argument made in the keynote, you're just bringing the pain forward. If you adopt the separation of your write and read models, you're basically forcing yourself to deal with a problem that is already in your system, just a bit earlier.

It's worth it, though. This is a trick we actually played not so long ago: sourcing mortgage products is a very big search. Initially, it started as a really massive Postgres query. Over time, that query got really slow, and it became time to think, "Maybe we should use something like Elasticsearch for those kinds of query." All we had to do was reproject the product events we already had into a different store. We had to rewrite the query, sure, but there was no downtime, because the way we currently work is that on every deploy, the read models just get rebuilt. The next deploy just did a blue-green switch: the new site had an Elasticsearch box in it, we started querying that, and we just flipped them over. Deploys can get painful, but I'll talk about that later as well.

Tying it back to why Haskell: the act of projecting events is itself a domain-specific language. Haskell has a really lovely library called Conduit, which is effectively a domain-specific language for talking about streaming programs. By a streaming program here, I mean I'm literally funnelling all the events from the store and streaming them through some pipeline, which is what I think of the projection as, and out the end pops a read model - or anything really, but in most cases a read model. Here, I have a little combinator called allEvents, which says, "Give me all the account events," and it streams them through this pipeline. At the end of the pipeline, I have a sink that says, "Put them in this Postgres table." If I wanted to change that to Elasticsearch, I could just change the sink.
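
As a runnable toy of that pipeline shape: the event source and the Postgres sink below are stand-ins I've invented, while runConduit, yieldMany, mapM_C and the .| pipe operator are the real Conduit library API.

    import Conduit

    data AccountEvent = Created String String | PasswordChanged String | EmailVerified
      deriving Show

    -- Stand-in for "give me all the account events", reading from the store.
    allAccountEvents :: Monad m => ConduitT () AccountEvent m ()
    allAccountEvents = yieldMany
      [ Created "will@example.com" "hash1"
      , PasswordChanged "hash2"
      , EmailVerified
      ]

    -- Stand-in sink for "put them in this Postgres table"; swapping this
    -- out is how you would retarget the projection at Elasticsearch instead.
    sinkToPostgres :: ConduitT AccountEvent o IO ()
    sinkToPostgres = mapM_C (\e -> putStrLn ("UPSERT: " ++ show e))

    main :: IO ()
    main = runConduit (allAccountEvents .| sinkToPostgres)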

It's really lovely. It means that someone who's just joined Habito and needs to write a projection can probably just crib bits and pieces from elsewhere, because the main [inaudible 00:28:49] language is quite small. You've just got this pipe, which is supposed to look like a Unix pipe, which says events go through. It's not too hard to piece together. In fact, if you're wondering what the dot-dot-dots in the middle are, there are loads of things you can piece together to do powerful things with little effort. I might say, "Take all the events and pipe them through this logs-to-Grafana pipe," which just spits out metrics to Grafana or Prometheus, or whatever you want, "so I can get an idea of which projections are seeing the most event traffic." This concurrently combinator might exploit the fact that some projections can be done in parallel. If I'm creating that login table from account events, I don't have to wait until all my account's events have been projected before I do yours; they can be done in parallel because they're independent of one another. I could spawn a thread pool of 10 threads and project all the aggregates in parallel, in a round-robin way, and that's what this concurrently combinator can do.

Batching as well. In most cases, it's not super-efficient to read one event and then write one Elasticsearch document or Postgres row. What you probably want to do in practice is read 500 events, accumulate some stuff in memory, and then generate one big piece of SQL, or a bulk-upload HTTP call for Elasticsearch; we learned that the hard way. Batched is a combinator that does that, so if you've just joined Habito, on day one you don't have to dig into the art of batching events. Obviously, this is not all perfect, and there are some gotchas, but broadly speaking, if you want to write a projection, you can probably get away with gluing together some bits that already exist, and you'll get a load of stuff for free, because it's just a domain-specific language for expressing solutions to a very well-defined set of problems.

Boilerplate

This brings me now to my last point, which is boilerplate. Specifically, I'll relate this to some of the types and domain modelling stuff I talked about earlier on. Let me revisit the transaction example I started with, where a transaction is a purchase or a remortgage. Especially with event sourcing - but even without it - I'll still have a load of HTTP APIs, so there's a pretty large use case in my code base for serializing these to and from JSON, and that's code I really don't want to have to write, especially when your developer sense is tingling and you're thinking, "This looks so much like a JSON record." The fact that I'd have to spell that out to the stupid computer is really frustrating. So, can I get around that?

In Haskell, it turns out there are lots of ways you can get around that. One of the most powerful, I think, is Haskell's support for generic programming. In Haskell, this means that every type - modulo some special cases, but basically most practically useful types - can be represented in a generic way. This slide captures it: every type in Haskell is either a product, i.e. some collection of fields, or what we call a sum, effectively a disjunction of the things it could possibly be. You could think of it as a tree. If you want to build a transaction, there's a branch: you can either take the purchase branch or the remortgage branch, and then at each of those leaves, you have to provide a set of stuff. Every type in Haskell can be drawn as one of those trees.

What that means is that if you can write a program that serializes one of those tree representations to JSON, or reads JSON into one of those representations, you can serialize any type, because all you need is a function that converts the type into the general representation, and then you apply the serializer, or vice versa. The only problem then is that you have to write that function that converts into the general representation - but not if your compiler knows about it. Haskell's compiler, or its most prominent one, GHC, does know about this. You can say to GHC, the compiler, "I want you to derive some code that will turn a transaction into a generic representation," and it does. Based on that, I can then derive implementations of from JSON and to JSON, functions and methods, via the fact that transaction is generic. It will generate a load of code that turns a transaction into a generic representation, serializes that, and vice versa.
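
This part is standard open-source Haskell: with GHC generics and the aeson library, the deriving lines look roughly like this (the field names are again my guesses, not the slide's).

    {-# LANGUAGE DeriveAnyClass        #-}
    {-# LANGUAGE DeriveGeneric         #-}
    {-# LANGUAGE DuplicateRecordFields #-}

    import Data.Aeson (FromJSON, ToJSON, encode)
    import GHC.Generics (Generic)

    data Transaction
      = Purchase PurchaseTxn
      | Remortgage RemortgageTxn
      deriving (Generic, ToJSON, FromJSON)

    data PurchaseTxn = PurchaseTxn
      { deposit :: Double
      , propVal :: Double
      } deriving (Generic, ToJSON, FromJSON)

    data RemortgageTxn = RemortgageTxn
      { balance :: Double
      , monthly :: Double
      , propVal :: Double
      } deriving (Generic, ToJSON, FromJSON)

    -- No hand-written serializer anywhere; the compiler generated it all.
    main :: IO ()
    main = print (encode (Purchase (PurchaseTxn 30000 300000)))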

To be honest, this is super cool. It's not uncommon to see 10 or 12 deriving lines that could be saving hundreds or thousands of lines of generated code; it's super powerful. If you're familiar with reflection, it's not a million miles from that. In Java, you can use Jackson and annotations to talk about how your data types are serialized. The key difference is that reflection is something that happens at runtime: at runtime, you have this ability to introspect a type's structure and work out how to serialize it. This is all happening at compile time: the compiler can talk about the structure of the type at compile time, and generate the code to do this stuff.

The nice thing about that - granted, in Java you have the JIT - is that the generated code then goes through the optimizer. In a horrifyingly large number of cases, it ends up being the code you would have written by hand. It's not quite as strong as C++'s notion of zero-cost abstractions, where you have some very strong guarantees, but a lot of the time, it is a zero-cost abstraction. In fact, there are testing tools you can use where you give them the golden hand-written code and the generic derivation, they run both through the compiler for you, and they say, "Oh, this isn't the same." Some libraries that exploit this have that in their test suite, to make sure they don't lose on performance. It's really cool: you can generate all this code, you can save yourself hours and hours of time and bugs, and you still get the performance back for free as well.

It doesn't really stop there; JSON is just a really common example. There are a few buzzwords that follow Haskell around like a cloud of flies, and one you may have heard of is lens. Data access, in a language like Haskell, where you have immutability and deeply nested data structures, turns out to be a non-trivial problem to solve. Let's say you wanted to update a transaction's property value. You've observed that both purchase and remortgage transactions have a property value, so it doesn't matter whether you've got a purchase or a remortgage: they've both got one. If this were JavaScript and you had an object o, you could just do o.propVal = 5 and it would be fine, because there are no types. The runtime would just say, "Yes, if there's a propVal there, we'll change it," and if you add a third transaction type later on, a new transaction that doesn't have a propVal, who cares, we'll just write it, it'll be fine.

Haskell won't let you do that. It would get upset at you and say, "You can't do this, because you don't know there's a propVal there unless you work out whether it's a purchase or a remortgage." What you'd end up having to do is case-split, work out whether it's a purchase, and then write the code, blah, blah, blah. Really annoying. However, in the generic representations of these two types, they do have a similarity, because each is just a tree, and in the leaves of both trees there is a propVal key. It's like a hashmap of keys, so you can use the generic representation to do this: you can provide a combinator, say nestedField, that takes this type-level string and does that for you.
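
Habito's nestedField combinator is internal, but the open-source generic-lens library (with the lens package's operators) offers something very similar. Roughly, and this is my approximation rather than the talk's code, assuming generic-lens's field, which works when every constructor carries the named field:

    {-# LANGUAGE DataKinds        #-}
    {-# LANGUAGE DeriveGeneric    #-}
    {-# LANGUAGE TypeApplications #-}

    import Control.Lens ((&), (.~))
    import Data.Generics.Product (field)
    import GHC.Generics (Generic)

    data Transaction
      = Purchase   { deposit :: Double, propVal :: Double }
      | Remortgage { balance :: Double, propVal :: Double }
      deriving (Generic, Show)

    -- Works whichever constructor we're holding, because every constructor
    -- carries a propVal field; add a constructor without one, and this
    -- stops compiling instead of failing at runtime.
    revalue :: Transaction -> Transaction
    revalue t = t & field @"propVal" .~ 500000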

What's really cool about this is that if I do add a third type - say, a new transaction that doesn't have a propVal - I'll get a type error, and the compiler will say, "This no longer works; you're trying to write a program that is broken." I find that much more helpful than my program breaking at runtime and telling me that I wrote a broken program. There's some complexity behind it, but you can write all this stuff in libraries; there's not much magic.

The last thing I'll return to on that front is the credit policy stuff I showed you earlier, where, because the program was declarative, I could generate the log. The best thing is that I didn't have to write much code to generate that log; I could derive it all using the tricks I've just shown you. This massive blob of JSON from the credit policy, I get it for free. I mean, not free: I had to think carefully about how I built the domain-specific language. But I did spend a lot of time focusing on that language, and then I was able to hook into tools like generics and the deriving mechanisms, so that when I write programs in that language, I can effectively guarantee that I get this JSON out. You can see how the type-level strings match up, how the bits of the combinators match up, and all that stuff. It's super cool.

It Can’t All Be Good News

It can't all be good news; someone's selling something, so there's always a catch. There are some gotchas with using Haskell and event sourcing and some of the stuff I've talked about. The first one is that if you're shoving all this functionality and cleverness into the compiler, then compiling is going to take a bit longer than if you were just doing a quick type check and some code gen. That is true. In a small to medium-sized code base, you're probably not going to notice it too badly, but if you scale really quickly and build a massive Haskell monolith, then your compile times will gently creep up to the point where they're taking 15, 20 minutes, which is really nasty.

I'd like to believe that those compile times are an investment: they've caught hundreds, if not thousands, of bugs that never made it to production. The challenge is that it's really hard to measure all the bugs you're not seeing, so the compile times just become a pain; you think, "This is 20 minutes I could be getting back somehow." It is possible to fix. In the last couple of months, we have started rolling out Bazel, a build tool written by Google that's designed for minimal rebuilds and for working effectively in monorepos, and it has massively improved performance on that front. It's very much a solvable problem; it's just something you're probably going to have to invest in at some point.

Laziness, or non-strictness, I talked about in the intro. Reasoning about performance can be tricky. A lot of the time, it's not that bad - the compiler is unreasonably good at generating performant code - but you might find that you have some memory leaks here and there, because there's an unevaluated piece of data that's not been garbage collected. Sometimes it can be tricky to track those down. The tooling has improved a lot in that regard, but a strict language, where everything is just evaluated when it's used, is definitely a lot easier to hold in your head.

On that tooling note: the language ecosystem. Haskell is a much smaller language than Java or Scala, and it certainly does not have the massive corporate backing that a .NET-style entity like C# or F# might. So it's something to be wary about. I personally think this hasn't bitten us that much, because things like Conduit, and the libraries for doing principled things like streaming or JSON serialization, exist and are of as good a quality as you'd expect to find in any language. On top of that, you also have libraries for doing certain newer things which are a bit more on the edge, like property-based testing: you might have heard of QuickCheck, and Haskell actually has a class of libraries like that which I think other languages envy.
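
For a flavour of property-based testing (a generic example, nothing Habito-specific): QuickCheck invents random inputs and checks a stated property against all of them.

    import Test.QuickCheck

    -- A property: reversing a list twice gives back the original list.
    prop_doubleReverse :: [Int] -> Bool
    prop_doubleReverse xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_doubleReverse  -- checks 100 random lists by default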

I think where it falls down, though, is when you want to hook into the wider ecosystem. On day one, when you're trying to do buy-not-build, you're probably not going to have the SDKs for all the third-party components you might be using. You're going to have to write those yourself. We've written a few of them, and I don't think it's too painful, because you can just generate them; in practice, if they're JSON APIs, you can generate all that code. But it's not as easy as just going and downloading the… I was going to say Stripe, but I think Stripe actually has a Haskell SDK now. There's not as much feature-rich stuff on that front.

The question I always get is about recruitment and hiring, not usually phrased so politely: "Is it tricky to hire people? Have you found it tricky to grow? What does the engineer you hire look like?" In the early days, the first few engineers were all looking for the challenge, and we all had some experience; that was fine. Getting from 3 engineers to 10 was very tricky, because you're looking for people who can come in and hit the ground running, and that requires a skillset in Haskell, which, obviously, wasn't a massive talent pool back then. I think the language has grown massively since then, but also, the problems the company faces have changed. As you get to a certain size, your problems become more architectural and independent of the language you're using: splitting up a monolith, or thinking about performance and scaling teams. These days we just hire really great engineers as best we can; we have a sizable enough team, and a culture of things we do, like learning sessions and workshops, to try and make sure that everyone knows enough Haskell to get the job done. It's a massive driver for our hiring, because people come to Habito because they want to learn about functional languages, so striking that balance has hopefully proven really powerful for us.

Event sourcing, like I said: reprojection is great in the sense that you don't have to migrate any data, but if you're reprojecting your event store on every deploy, the store just gets longer forever, so your deploy times balloon. Like type checking and compilation times, it's something you have to invest in stopping. You can stop it: you can copy data that hasn't changed rather than reprojecting it needlessly, and you can take all kinds of shortcuts and optimizations to nip that problem in the bud. We have done, but that's cost us time, and we're constantly evaluating whether or not it's worth it.

We often joke that our BI, our business intelligence, at Habito is better than our product. That's true not because our product is bad, but because event sourcing has given us so much power. You can learn new things from old data, and you can change the way you look at your business without having to do complex database migrations. I think that cost has been worth it, and I would definitely go through it again, but it's not free in that sense.

Useful Underpinnings

A quick recap of what I got up to. I think Haskell is a great tool if you want to tackle some of the stuff I talked about, like rich, data-driven domain models. On composition: you can very much build a whole system without using more classical tricks like inheritance. Immutability by default changes everything; I don't even think about mutation anymore. It does take a bit of adjustment, but I just think it makes the whole reasoning process so much simpler. And code generation: one of the things we got very much wrong in the early days of Habito was that we were excited about writing Haskell, and we wrote too much of it. I think one of the skills of Haskell is writing as little as possible and getting the compiler to generate as much as possible. So, yes, I'm very much a believer in that.

Questions and Answers

Participant 1: Have you run into any problems with some of the GDPR requirements, like the right to be forgotten, with event sourcing? Because if you've got an immutable log and someone says, "Get rid of all my data," how do you get rid of data from an immutable log?

Jones: Yes. I should have phrased it as immutable-with-an-asterisk. GDPR is a bit of a stinker if you have an event log. I think there are two ways to solve it: there's the principled way, and there's the way that it will be done by May 28th. The principled way is that you encrypt your entire event stream, you decrypt it on demand, and if a user exercises their right to be forgotten, you just throw away the keys. Then you don't have to tamper with the structure of the log; that portion of it effectively remains scrambled, never to be read again. We didn't really have the time to implement that option, so what we do is break the rules: if a user exercises their right to be forgotten, we just delete parts of the stream.

However, we do write events logging the IDs of the things we deleted, because there are some things we can't delete due to regulatory requirements. If we advise you on a mortgage, we're required to hold that advice for at least a couple of years. If you draw down, i.e., funds are transferred on a mortgage, we're required to hold that for at least seven. So, as part of all deletions, we record the sequence IDs of those events, such that the ones that are left but orphaned, we can retrieve using your email, which we keep around for the purposes of that regulatory exercise. It's not ideal; given more time and a bit more focus from the engineering team, we'd do it the encryption way, which would be a lot cleaner, but, yes, it's a compromise.

 


 

Recorded at:

May 24, 2019
