Peter Bourgon on CRDTs, Go at SoundCloud


1. We’re here at QCon San Francisco 2014, I am sitting here with Peter Bourgon. So, Peter, who are you?

First, thanks for having me, my name is Peter and I am a software developer at SoundCloud working mostly on our infrastructure, distributed systems, large data systems, this kind of thing. Prior to that, I was working on our search products at SoundCloud, and prior to that I did a number of things in a couple of different industries, I was working on network and telephony protocol stuff, I did large scale search for legal and finance teams and I did some work in embedded and mesh networks coming just out of college. So, I guess I’ve been in the industry for about 12 years and I’ve been doing distributed work for most of those years.

   

2. Maybe you can give us a quick overview of what SoundCloud is and what are the big problems with SoundCloud, whether scaling it, or what are the big challenges?

Sure. So, SoundCloud is a consumer website that is the leading audio platform on the web, and that means anything from bedroom DJs uploading their mixes, to professional interviewers uploading interviews and podcasts and that kind of thing, all the way to major labels using SoundCloud as a promotion venue for major artists. So, we grew from very humble roots, a very artist- and creator-driven site, and luckily the market supported our vision and we grew quite rapidly, I guess over the past six or seven years now; now we are definitely at the hundreds-of-millions-of-users scale, and our architecture has grown, as I think a lot of companies’ architectures have grown, along a very similar path. We started off as a monolithic Rails application and used that to great advantage in the early days, allowing us to iterate very quickly and build new features and all the things everybody says. At some point it starts to break down, when the team size grows, when the scale grows, so we began the process (we were one of the first large companies, I think, to do so) of extracting a monolithic application into multiple services. When we first started it was called SOA, service-oriented architecture, and now it’s microservices; I think fundamentally it’s the same thing. So, now we are well on that journey, we are well down that path. The monolith still remains, but it’s getting smaller every day and now it’s a shell of what it used to be, and all new development is in the microservice style, with a very clear division of responsibility and a very clear service hierarchy and that sort of thing. So, that’s where we are today.

   

3. Here at QCon you gave a talk on CRDTs. Is that one of the hipster data structures that are in vogue today, or what are they?

CRDTs are this thing that I would argue came out of the CAP theorem, which was first published in 1999 or 2000. They are a distributed data structure, similar to a linked list, and the idea can also include maps or sets, and I guess I would argue they were discovered rather than invented. CRDTs are really a collection of properties, properties of operations, and if your data type or data structure abides by these properties then interesting things fall out of it. I think academically (I am no academic, but I have scanned the literature at least) the official term is that a CRDT is a type of join-semilattice, which is a way to visualize or structure a bunch of information, and a bunch of operations on that information, in a way that makes it a really nice fit for eventually consistent data systems, so in CAP terms, AP data systems. A lot of times, and I mentioned this in my talk, organizations and companies were building data systems that kind of danced around the idea of a CRDT, and it was only recently, I think in 2009, that CRDTs became reified, a concept that people could talk about concretely; I think the seminal paper was by Shapiro, a former Microsoft researcher, and that started a blossoming of conversations and concrete implementations. Now CRDTs are everywhere, which is great; I am a CRDT evangelist myself. You see them in use in data systems like Riak, you see people talking about them in all sorts of applications, not just in data systems but on the web in general; you can picture a network of connected devices, each with its own view or portal on the world, as indeed a large-scale CRDT.
And maybe another thing people are familiar with is Google Wave, the now-defunct service that allowed you to collaboratively edit documents. This was based on a type of CRDT, back also in 2009 or 2010, and I think the legacy of that continues on today in Google Docs, whose collaborative editing feature is based on this eventually consistent model.

   

4. You brought up a scary word in there, lattice, a term from algebra. For some of our audience who are less mathematically inclined, what is a lattice or what does it mean in practice?

So, a CRDT in practical terms is something whose operations abide by certain properties. If you think of two integers, 1 and 2, and you consider the operation 'add', this operation is for example commutative, meaning 1+2 is the same as 2+1. It’s associative, meaning you can group additional operations in different ways, but it’s not idempotent, meaning 1+1 is not equal to 1. So, this operation does not satisfy the core requirements of a CRDT, but other operations do; for example, a set union operating on two sets does. So, a CRDT is any data type that has operations satisfying these three core requirements, plus the abstractions you can build on top of that data type. There is probably a more heady and academic definition based on semilattices, with all sorts of notation that I am totally unqualified to talk about; I’ll leave that to our intrepid readers and viewers who want to look up the literature on that topic.

Werner: They can basically use it to build as you said data structures like a table or a list or things like that.

Exactly, and the key win for CRDTs is that you get to ignore all the problems that are inherent in distributed systems: network failures, message duplication, out-of-order arrival, this sort of thing. Other systems deal with these problems by building reliability layers on top, like Paxos and other CP systems; CRDTs are, I would argue, the first theoretically sound and robust formulation of an AP system, of an eventually consistent model, that is maybe clear enough for industry to be able to make reasonable implementations of and ship in products and all these good things, so that’s why I am excited about them.

   

5. You brought up the various properties. Why don’t you have to lock this data structure if multiple things access it and how do you get the eventual consistency, what is the secret sauce in there?

Each data type achieves it in a slightly different way, but the idea is to track metadata with each element such that the system as a whole only moves in one direction. So, you can imagine, if you couple an operation with a timestamp, only those operation-timestamp pairs which beat the operations that have already been recorded can effectively mutate the data structure or the data system. It’s kind of like building a house, where every brick that gets placed has to be placed on top of a foundation. So, it’s a combination of metadata, and a combination of structuring your application code at the layer above in a way that can be expressed in these terms, and that is often the hard part, I find: reconfiguring the way you think about your application domain so it can be expressed in these terms. As I go on a little bit in my talk, I think this is one of the harder things to do, but I also think it’s one of the more rewarding things to do. We have these really nice theoretical properties, this nice CRDT theory that is provably correct, and if you can leverage that, you should. A lot of the work that I did in my project, and I think a lot of the work that other people will need to do if they want to use CRDTs, is to rethink their application domain in terms of the properties that they can leverage.

   

6. Is it true that CRDTs are basically append-only data structures, is that true?

It’s one way to think about it; the simplest CRDTs might be considered that way. The common abstraction in CP systems is something like an append-only log, and a CRDT is not quite as simple as that, but in some abstract way that is true, and there are techniques in practical, real-world implementations of CRDTs to accommodate that unbounded growth: there are ideas of garbage collection of so-called tombstones, and there are other approaches you can use to deal with that kind of abstraction. But CRDTs, I think, are pretty conceptual, and there is not a single reference implementation, at least at this point in history, so implementers are free to explore these little areas, these minutiae, and figure out whatever solution works best for their domain. That’s what we did in our implementation at SoundCloud, and I think organizations like Basho, with products like Riak, are really, at least at this stage, opening the doors and allowing you to tune all these knobs and choose whatever implementation makes sense for your specific application.

   

7. How does your implementation at SoundCloud get around these problems?

We have implemented what is basically an LWW-element-set (LWW = last-writer-wins) with a form of minimized garbage collection built into it. So, it’s a bit technical, but the point is that every element that can be in the set, that can be in the data system, initially goes into the so-called visible set, the add set; but when a remove operation comes along, it’s not actually deleted, you don’t reclaim that space, it’s moved to a delete set. So, in the worst case, if all of your adds are accompanied by deletes, or all of your elements are at some point deleted and maybe re-added, the cost is around 2x storage requirements; but in practice for us, our deletes are much less frequent than our adds, so we don’t pay that penalty, at least not at that level.

   

8. So you don’t need a garbage collection step at that point?

We do garbage collection in the sense that each of those sets is bounded and we only have a limited view of the history in order to resolve these conflicts, so once the delete queue gets past a certain size, it’s truncated; that’s a type of garbage collection, I suppose. But, yes, other systems use other methods. They say, for example, you have a threshold latency whereby everything that is older than a certain time window from now, which is maybe a second if you are very aggressive or maybe a few minutes if you are not, gets cleared: there is a process which can go through and clear those tombstones away. That’s one option, though it carries an operational cost, but there are a lot of strategies for this kind of thing. Different sets actually carry different storage requirements; there are some sets which give you a lot of flexibility at the application layer, and the cost is they carry additional metadata in each of the elements. So again, a lot of dials to tune; it’s still very early days for practical implementations of CRDTs and there are a lot of things to consider.

   

9. In which systems do you use these specific CRDTs?

The one that served as kind of the trial balloon, so to speak, was a system that I worked on, basically an activity system; you can consider it very similar to the tweet stream on Twitter or the Facebook feed or something like this. At its core is time-ordered events, time-series data, and the two properties that allowed us to use CRDTs for this application are, first, that each event is very small in terms of data size, on the order of bytes, because it’s only a unique identifier and doesn’t carry too much metadata, and second, that the application, the product that we are building the data system for, could be very tightly defined: it is always going to be a time-ordered series of events. So, leveraging these properties that are unique to us, and since we only have one sort order, we built the system that we built, and that is powering several product features that meet that description right now: the stream, which is the main page you use when you log in with a user account, user profiles, user activities, and many other things that you can scroll through and listen to things on.

   

10. Writing the CRDTs yourself, do you think it paid off, I mean the cost of maintaining it, what do you think?

This is a really interesting question. When we got started on these projects, CRDTs were talked about, but there were no good implementations yet, certainly none that were proven in industry or in production, so it was very exciting for me to learn and build a production version of this sort of thing; it was incredibly humbling and also very beneficial to my own knowledge, so in this way it was really great. These days there are production-grade implementations in Riak, for example, and if I did it again starting today I would probably look at that. That said, there is a really interesting tension in our industry between "not invented here" and "off the shelf"; these two forces are always in play, and I am totally cognizant of the fact that engineers maybe often overestimate their own abilities and lean too heavily on building their own solutions when something off the shelf might make a lot more sense. And that’s fair, it’s always something to consider, but in my personal opinion it seems that, as an industry as a whole, we’ve swum too hard in the other direction and we are not practicing software craftsmanship with the kind of sense of ownership and the kind of excitement about our industry that we should. I think in a lot of cases we are being reduced to effectively data plumbers, picking data system A with message queue B and just kind of wiring them together, without really understanding what is happening in the data system and what the guarantees and the SLAs of the message system are.
So, you can make good arguments in either direction, but I think it’s good to have something out there which is being run in production, which is vetted and which honestly isn’t that big. You mentioned maintenance and operational costs and that sort of thing; every line of code that I wrote, excluding tests, sums to 1,500 or 2,000 lines, it’s readable Go, and I don’t really think it represents a burden to whoever comes after me and maintains it. I think that by building it this way, and by explaining it to other people in the company, I have increased the knowledge, I increased everybody’s abilities and pushed concepts out where they didn’t exist before, and I think that’s a net benefit. So I think that in some cases it does make sense to build it yourself, sometimes.

   

11. CRDTs are quite new as a concept; I think the original paper is less than five years old. Do you see any similar kinds of discoveries in the future, or what’s on your radar, interesting papers where you maybe say "That is the way to tackle an order of magnitude of growth"?

It’s a really interesting question, and it’s always very difficult to predict trends, of course. What I can say is what I see here at QCon, which is what people are excited about and talking about, and it seems to be a lot of, obviously, service architecture and platform as a service. But getting a bit deeper, I see the cutting edge to be scheduling: scheduling algorithms, scheduling methods, maybe even taking a step back and thinking about how to schedule not just a container against a pool of resources, but at the cluster level, how you schedule clusters of things on other things. There seem to be a lot of interesting discussions; everybody has read the Omega paper at this point, which is good, but I feel that something in that domain is ripe for disruption, if you’ll forgive the term of art. That seems to be where a lot of thought is happening right now. Data systems is maybe what you meant, though, what’s next in data systems, and that I can’t say. I think we are still a long way from CRDTs being fully realized and fully explored, so I am looking with excitement at what Basho is doing; I am really curious to see another large organization pick up CRDTs for a mission-critical thing, see what production lessons they learn from that experience, and see where we stand in another five years, maybe.

   

12. You wrote your CRDT implementation yourself with Go, I think, so why Go?

SoundCloud was and is a polyglot company; we run a lot of different languages. When I joined we were very polyglot, I think we ran something like a dozen languages in production; since I’ve been there we have slimmed down a little bit. It was like a Cambrian explosion of possibility, and now we have slimmed down to a few core languages. One of the languages in the Cambrian explosion was Go, and it’s one of the few that survived the shrinking. I think, for me, Go is one of the nicest things to happen in the field of computer engineering since I entered the industry, at least for the things that I like to work on, which are highly concurrent, network-based applications, network servers; it’s a perfect fit for distributed systems. The concurrency is baked into the language, and it focuses on a simplicity of philosophy that is really appealing to me: the spec is tiny, you can keep it in your head, and there are really no gotchas. They make a point of preventing you from doing monkey-patching, like in the Ruby world, or filling your head with too many ways to do the same thing, so as a result the language is very small; you can roll it around in your hand and get a really good sense of it, which means your head is totally free to consider your domain. When you are working in distributed systems and distributed programming, there are so many gotchas in the application domain that you can’t really devote mental resources to, for example, manual memory management, or playing with the type system, or whatever else you might have to do in another language. So it’s a perfect fit; it has survived at SoundCloud for a long time, we were one of the early adopters, we were using it pre-1.0, actually, and every test we’ve put it to, it has passed with flying colors.
It’s, I’ll go out on a limb now and say, the most performant runtime that we run at SoundCloud by a large margin, like resources per op, it’s by far the most efficient and every experience we’ve had with it has been really positive, so a bright future for Go, for sure.

   

13. [...] Did you have to relearn how to structure an application in Go, is it different from other languages, what do you think?

Werner's full question: InfoQ has a large Java developer audience; looking at Go, I think it doesn’t have classes, which is probably going to freak out our audience right now, how do you write program without classes, I don’t know what your background is, but did you have to relearn how to structure an application in Go, is it different from other languages, what do you think?

My background is mostly C++, that’s where I am coming from, and indeed you have to rewire your brain a little bit to write Go effectively. Rather than classes, rather than the classical OOP inheritance model, Go prefers a composition model which I guess in Java is modeled as a, what is it, an implicit interface, is that right?

Werner: They have interfaces, yes.

Ok. In C++ it would be a virtual class or something.

Werner: Yes, interfaces, yes.

So, you don’t inherit from something; you rather compose the behavior or the thing you want into your struct, and this works because Go has this concept of an interface, but not an explicit interface, rather an implicit interface. That means you define the behavior you want in terms of methods, and then things satisfy it automatically if they implement those methods. So it’s definitely an inversion, and if you are used to working the other way around, you have to stop and think about what you are doing and maybe rewire your brain a little bit, but it’s not really that difficult to do. I mean, you can go through Go’s excellent documentation, take the tour; there are a lot of things you can do to quickly get up to speed, and usually, in SoundCloud’s experience, someone with an open mind who starts looking at Go is making productive commits on production projects in less than a week. So it’s definitely a change, but there is not enough going on in the language for it to really be a big problem, in my opinion.

   

14. Another interesting concept is of course goroutines, are they your basic tools, you’re slinging goroutines left, right and center or are they like threads in other systems where you just express concurrency?

Goroutines are best thought of as CSP-style lightweight threads or green threads; they are very cheap, they are not like a pthread or something like this, and we run plenty of services that launch 10,000 goroutines to accomplish some task, which is great. If you have a problem that can be expressed in a concurrent way, it allows you to express it that way without having to worry too much about overhead, like the physical, concrete constraints imposed by your runtime or your framework or whatever; it allows you to express a concurrent problem in its natural language, which is really what you want, obviously, in a large class of problems. That said, concurrency is, as I think Andrew Gerrand says, there when you need it and out of the way when you don’t. So when you are designing APIs, when you are designing components, typically in Go you design them synchronously, without concurrency, and then you allow the caller to make them concurrent if they so choose. That means all of your APIs are blocking, more or less, and then if you need concurrency at the call site, you can easily stick a 'go' in front of a method invocation, or wire up some sort of channel that aggregates all of the things you want, and make things concurrent or potentially parallel at that point, rather than baking it into your APIs.

   

15. How does a goroutine influence the design in the sense of if you call a method you get a return value, but if you call a goroutine you get a channel, you just plug channels together or what is the design idea there?

I think the big takeaway from goroutines and the whole concurrency model of Go is that it really can’t be bolted on as a library; Go’s opinion, Go’s philosophy, is that it has to be baked into the language core, because that is the only way you can exploit certain properties of the runtime to make concurrency efficient enough to be used this way. That, I think, is the major lesson. In terms of how you use it day to day, it depends, of course, on your problem, which I think is really nice; there is no prescriptive way of using goroutines. But in general, indeed, putting 'go' in front of a function invocation is basically the same as putting an ampersand after a shell command, so by itself the work is accomplished in the background. If you need to communicate the results of that work, or send information to those workers in an asynchronous way, then indeed you need to wire up some kind of channel of information, and most often that takes the form of a Go channel, which you either pass to a function explicitly if it has an asynchronous API, or which you construct yourself, calling the function and putting the results into the channel in your own wrapper layer if the API is synchronous and non-concurrent. That’s generally how it works, but that’s really an oversimplified view; it’s almost always problem-domain specific, and there are a lot of idioms that work in different domains. Many of them are outlined on a website called "Go by Example", I think, which I find is a great resource.

   

17. What’s the usual way of doing error handling in Go? Do I have to always check my return values, is there another approach that we haven’t heard of?

There are these mechanisms called panic and recover, which to all outward appearances look like exception control flow: code that divides by zero or causes an out-of-memory error would idiomatically issue a panic. But a panic isn’t an exception to be caught; it signals that the state of the program is now undefined, and the only logical thing to do, really, is to crash. There are some exceptions to that rule, as with any rule; the JSON parser is a recursive-descent parser that uses this type of control flow internally to sanely deal with bad input data. But in general, when you encounter a panic you should let the program crash and treat it as the bug that it is. This may seem dangerous, but because it is used so rarely, it’s conventionally never really a problem. I can say that in 3+ years of using Go in production at SoundCloud, we’ve never had a service crash due to an uncaught panic that we didn’t first encounter in a unit or integration test, so in practice it’s not a problem.

   

18. So you say the panic bubbles up through the call stack. Does that mean it ends at a goroutine, or does it go all the way up to the program?

Yes, it will terminate the goroutine if it’s in a separate goroutine, if that goroutine is the main goroutine, then it kills the program. Does that make sense?

Werner: Yes. Otherwise it just kills that goroutine, that’s it, no harm done.

Exactly. It depends on what the goroutine was doing, I suppose, but yes, you’ll get a nice stack trace on your stderr so you can see what happened, but in general the program just marches on, I think that’s true, I might be mistaken, it might take the whole program down, but I would have to check.

   

19. Well, that’s some homework for you. So, the obvious question everybody asks, Go is a somewhat new language, a language with a smaller community, where do you get your libraries?

The good news is that the standard library is really quite good, in the sense that it gives you a lot of core building blocks. There is a production-grade HTTP server, which can definitely take internet traffic without having to stick an nginx in front of it or something like this; there are JSON parsers, there is OAuth, there are all these lovely productive things. But they are libraries, they are components, and if you want to do something in your business domain, in an application domain, it is on you to stitch them together. There are enough of these libraries and components that I personally have never felt wanting: there is a great Redis database driver, a great MySQL driver, an AMQP library, all these things exist. But it does shift the burden a bit to the application developer, and I think, or in fact I know, that is on purpose. It’s not the sort of language where you are a data plumber, just picking things off the shelf, wiring their constructors together and letting your system loose; it forces you to think about what you are doing and what you are building, to plan ahead and structure your thoughts and your code in a coherent way. It is somehow, I wouldn’t even say less productive, just more deliberate, and that carries costs, but I think it carries benefits, and I think the benefits outweigh the costs.

   

20. When Go first came out it was pitched as a systems language, but then there was this thing that stuck out which was the garbage collector, how can you build a systems language with a garbage collector that every once in a while says hey, stop, I have to clean up here; what’s your experience with that?

This was, I think, a big problem for the Go team back in the early days, and I think they have since walked it back; the Go team meant something very different when they said "systems programming language". I think to them it means system services, like a network server or a file-system server or something like this, while other people have different ideas, like a systems language being a language you can write an operating system in. So they have since walked that description back, and indeed the Go team has noticed that they have attracted a lot more developers from dynamic languages like Python and Ruby than they were expecting, and perhaps a lot fewer from languages like C++. Their original message definitely resonated with me, I think because of the class of applications I was used to writing in C++: network servers, stuff where the low-level memory control wasn’t that useful to me, and where the sort of ceremony and pomp and circumstance of dealing with very robust type definitions, and all the safety things you had to pay attention to, was just overhead for me. For me, it is a systems language, but I understand that a lot of other people may not think that.

   

21. You yourself never had any problems with garbage collection pauses or did you?

It’s definitely something you have to think about, especially as your services do more and more qps. There are strategies. I should first say it’s continuously getting better with every point release; the garbage collection is getting smarter, and I think in 1.4 or 1.5 it’s going to move to a fully concurrent model, which is going to be great. There are strategies to create less garbage, obviously, and in Go it’s a lot easier than it is in Java, because idiomatic Go doesn’t hide memory allocation: it’s very clear on the page, when you see a for loop, what is happening, what memory is being put on the stack and what on the heap, so it’s a lot easier to detect things that are being created and to eliminate them when necessary. There are standard-library idioms for creating a pool of resources, which can be used to circumvent the garbage collector. But yes, it is something you have to pay attention to; luckily, in Go it’s a lot less complex and a lot easier to get your head around than in Java, for example, where the garbage collector is the specter looming in the background.

Werner: And there is only one garbage collector in Go rather than in Java where there is about a dozen by now.

Pluggable garbage collectors, yes. I don’t know if that is going to continue, maybe there will be an option in the future, but certainly nothing on the roadmap now that I am aware of.

   

22. To wrap up, are there any tools that you use with Go, since we talk about garbage collection, to visualize that, tweak that, or other aspects, logging, tracing?

For sure. I guess this is about using Go in a production, operational context, and developing Go as someone who is interested in app performance and this sort of thing. The good news here is that Go ships with an incredibly robust and useful toolchain. There is the obvious stuff, like the formatter, which has strong opinions about how Go code should look and eliminates whole arguments that normally happen. Layering on top of that, there are tools like golint and go vet, which are basically static code analysis with varying degrees of strictness. There is a built-in race detector, which is incredibly useful: you pass an extra flag to your build, and it will detect when reads and writes are improperly interleaved, warn you on stderr, and print a really detailed description of what happened, which goroutine was responsible for the write, which one for the read, this sort of thing. There are bleeding-edge tools like the oracle, a static code analysis tool you can put into an editor to see what things implement what interfaces, what things call what other things; this is being actively developed. And on the horizon there is a code-tracing tool that Dmitry is working on, I think, where if you instrument your code in the right way you can see a really clear, almost Zipkin-style breakdown of how all the invocations go and where the time is spent. In addition, there are tools that integrate really well with the Google pprof profiling tools, so you get runtime profiling as the program is running: memory dumps, stack and heap analysis, CPU profiling including call graphs, time spent, this whole thing.
This is really the innovation of Go, one of the main ones: not just making a better language, but making a better language to use, and that covers way more than just the grammar, way more than just the feature set; it’s the whole ecosystem around it. In this way, Go is clearly a language that’s been written for and by professional software engineers and large organizations, and that really shows through in every dimension, I think.

   

24. A lot of Go and other things?

In the early days there was Ruby on Rails, and then we had this Cambrian explosion, as I described. So now, I guess, there is still a lot of Ruby left over; a lot of services, microservices, are still written in Ruby. Our major product-side services are increasingly written in Scala; there are a lot of reasons for that, there is a lot of mindshare behind it, and we have pretty mature shared libraries that we use to do common stuff, so it makes a lot of sense to aggregate around that ecosystem. But on the infrastructure side of the organization, and still on some product teams, there is a ton of Go, and it has clearly proved itself and it’s not going anywhere. I am probably the main Go evangelist at the company, so I would love to see Go shift into the product side of the organization and attack some of the current Scala services. I understand there are probably a lot of barriers to that, but I would love to see Go adoption increase, definitely.

Werner: I guess our audience can look at GitHub and spy some of your source code.

There is some stuff there, yes.

Werner: Is your CRDT implementation there?

Yes, github.com/soundcloud/roshi is that data system; we have a couple of other big open source projects and several smaller ones, so take a look. I would like to think we are pretty good open source citizens.

Werner: Excellent. So, audience, go check this out and thank you, Peter.

Jan 15, 2015
