The Evolution of Evolutionary Architecture with Rebecca Parsons

In Building Evolutionary Architectures, the book she co-authored, Dr. Rebecca Parsons describes the principles and practices that allow an architecture to evolve. In this episode of the podcast, we talk about those principles, how they’ve changed between the first and second editions of the book, and what changes we might see in the next few years.

Key Takeaways

  • Evolutionary architecture is a guided approach that takes into account that a system will change, but we cannot predict when or how it will change.
  • A system is evolvable if you can easily understand it and if you can safely change it. However, being safe to change does not necessarily mean it will be easy to change.
  • The single most important feature of a fitness function is that two people will never disagree on whether or not it passes.
  • Microservices are not required; a monolith can be evolvable, but it must be well-structured around clearly defined bounded contexts.
  • Conway’s Law helps us understand how to align people to the right objectives and also shows the importance of good communication and shared understanding between teams.

Transcript

Intro

Thomas Betts: Hello and welcome to the InfoQ podcast. I'm Thomas Betts, and today I'm joined by Rebecca Parsons. Dr. Parsons is ThoughtWorks' CTO. She has more years of experience than she'd like to admit in technology and large scale software development. She's co-author of the book, Building Evolutionary Architectures, the second edition of which was published late last year. A major premise underlying evolutionary architecture is that not only will things change, but we cannot predict how they will change. While that premise makes predicting anything problematic at best, today, we want to discuss some ways that the principles and practices of evolutionary architecture have evolved since the first edition of the book about six years ago, and what changes we might see in the near future, say the next two to five years. Rebecca, welcome to the InfoQ Podcast.

Rebecca Parsons: Thanks for having me, Thomas.

What is Evolutionary Architecture? [01:28]

Thomas Betts: So at QCon London, you gave several presentations. The one I'm most interested in today is discussing how you think evolutionary architecture will evolve. Now, before we can talk about how it will evolve, the first question I have to ask is, what is evolutionary architecture?

Rebecca Parsons: Evolutionary architecture is really a way of thinking about architecture that allows us to take into account change when we don't know where that change might be coming from. So often people see architecture as this rock, this foundation that can't be changed, but realistically that just doesn't work anymore with the rate of change. And so our definition for evolutionary architecture is that it's a guided approach, and guided means that we actually specify what constitutes good for our architecture, and we use that definition to guide our architectural decisions. And it's multidimensional, because obviously we've got all kinds of -ilities out there that we have to worry about, and architects' most favorite and least favorite word is trade-off, because you can't maximize everything or optimize for everything. But it's also incremental. And so some people will say, well, why don't you just call it agile architecture? But there's really more to it.

This key notion of evolutionary is very closely tied to our notion of fitness functions and how we say these are the characteristics that matter the most for this particular system. Because unlike code, where bad code is bad code whether it's retail or financial services or healthcare, a good architecture in one setting is actually a bad architecture in another setting, because they're going to have different kinds of constraints. And so we really wanted to emphasize this notion of fitness functions and how we are going to make sure that our architecture continues to reflect what we see as the critical characteristics.

Thomas Betts: And are those static, or are those fitness functions, those characteristics, able to evolve over time? Is that part of the evolutionary approach?

Rebecca Parsons: Yes, they definitely need to evolve over time, because business requirements change, consumer expectations change, and fundamental technological changes might make something possible that wasn't possible before, or undesirable in a way that it wasn't before. And so you have to evolve your fitness functions, continuing to reexamine, not on a daily basis, but probably on a quarterly or biannual basis: what's going on in the ecosystem, how are things changing, and how does that potentially change your definition of what constitutes good for that particular system?

Thomas Betts: I like the idea that what is good today might not be what's good tomorrow. It's easy to see between this system and that system, with two different sets of characteristics. But your requirements, the business requirements you mentioned, the user expectations, the load, whatever it is, mean you might design for one thing today and then have to notice when that changes, continually looking back at the design decisions you made. Were they right? They were right at the time, based on the requirements we had, but those requirements have changed.

Rebecca Parsons: Exactly. And the other thing it allows you to do is take into account things that you've learned about your domain that maybe you didn't know before. I mean, that's one of the things I like about some of the early descriptions of technical debt: you can have inadvertent technical debt, not because you made a mistake, but because of things that you've learned about the domain. We'd all like to be able to reimplement every system we've implemented after we've learned all of the things that we eventually learned about it. It's much the same way with our architectural characteristics as well.

Architect and develop for evolvability [05:18]

Thomas Betts: And so, one of the core principles of evolutionary architecture is to architect and develop for that evolvability. How did you define evolvability when you wrote the book and has that definition had to change over the last few years?

Rebecca Parsons: The definition really hasn't had to change, because it's really more a change in the tools that we have available to us to support a system's evolvability. And at its core, a system is only evolvable if you can easily understand it and you can safely change it.

Now notice I didn't say that it was easy to change, and that's where people sometimes got hung up because they want this to be about making it easy to make whatever change is necessary, but we can't do that because we don't know what kind of change it's going to be. But what we can do is make it as easy as possible to understand how the system is currently working, what I have to do to make the change that I want to make as a result of whatever these change conditions are, and that I can do that as safely as possible.

And so the safety factor comes in with things like the disciplines around continuous delivery: making sure that your deployments are as risk-free as possible, making sure you've got a comprehensive testing safety net so that you're able to interrogate, did this change do for me what I expected it to do? Did it do anything that I wasn't expecting it to do?

But those are really the two characteristics of evolvability and when you look at that next layer down of architecting and developing for evolvability, it's getting at one of those two things.

Architecting for testability [06:57]

Thomas Betts: You had a lot that you covered there, and I think we'll break it up and come back to some of the things. I wanted to start with the idea that the system isn't easy to change. In some ways, making changes in software is easy: changing hardware is difficult, but software is meant to be easy to change. The challenge is making changes safely, like you said. You need to know that what you change isn't going to break something else, or that you're changing it in a way where you know what the expected behavior is now going to be. And you mentioned testability and having a good testing safety net. What do you have to do to make sure that you are, I guess, architecting for that testability?

Rebecca Parsons: Well, one of the interesting things that we have noticed over the years is that if you think about how testable something is, whether it be a method or a component or a system, you tend to end up with the kinds of architectures and designs that make it easy to move things around and easy to change things.

One of the examples that we often use is the whole dumb pipes and smart endpoints. Many of these middleware vendors want you to put all kinds of business logic on those pipes, and it's actually quite difficult to test. It's much easier to test code: we've got all this tooling and all this methodology and understanding of how to test code, and code sits in those endpoints. And so if you do that, you keep the business logic off the pipe, and if you keep the business logic off the pipe, then you can move the pipe. If you've got logic there, that pipe is now tied to those endpoints, so you can't move it without doing a whole lot more work.
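
To make that concrete, here is a minimal sketch, not from the podcast and with all names invented, of what keeping the logic in a testable endpoint looks like. The business rule is a plain class with no dependency on the messaging infrastructure, so the pipe stays dumb and movable:

```java
// Hypothetical example: the business rule lives in a plain class,
// unit-testable with no broker, queue, or pipe in sight.
public class DiscountPolicy {
    public double discountedPrice(double price, int loyaltyYears) {
        // One percent per loyalty year, capped at fifteen percent.
        double discount = Math.min(loyaltyYears * 0.01, 0.15);
        return price * (1.0 - discount);
    }
}

// The smart endpoint applies the rule; the pipe only transports the
// message, so it can be rerouted without retesting the business logic.
class PricingEndpoint {
    private final DiscountPolicy policy = new DiscountPolicy();

    // Message format kept trivial for the sketch: "price,years".
    String handle(String message) {
        String[] parts = message.split(",");
        double price = Double.parseDouble(parts[0]);
        int years = Integer.parseInt(parts[1]);
        return String.valueOf(policy.discountedPrice(price, years));
    }
}
```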

And another example, which gets at not just testability but also architecting for evolvability: if you've got a test name like "do this and this and this and this and possibly this, and then as a result maybe one of these things will happen", that's probably not a good scope for whatever the thing is. You don't have a clear notion of what you want that thing to do. If it takes that many words to describe what the test does, you don't have a clear enough concept yet of what it is that you want that software to do. And that gets back to the ability to safely change: if you don't have a clear concept of what it's supposed to do, how are you supposed to wrap your head around how you might want to change that thing in the future?
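
A hedged JUnit 5 illustration of that naming test, reusing the hypothetical DiscountPolicy from the sketch above; the names and behavior are invented for the example:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class DiscountPolicyTest {

    // Clear scope: one behavior, one assertion, and a name that two
    // people will read the same way.
    @Test
    void capsLoyaltyDiscountAtFifteenPercent() {
        DiscountPolicy policy = new DiscountPolicy();
        assertEquals(85.0, policy.discountedPrice(100.0, 30), 0.001);
    }

    // Smell: a name like
    //   validatesInputAndAppliesDiscountAndLogsAndMaybeNotifies()
    // is the "this and this and possibly this" test described above.
    // It signals the unit has no single clear responsibility, which
    // makes it unsafe to change and tempting to delete.
}
```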

Thomas Betts: I like that: here's the test, and the test says do this one thing, and somebody reads the test and they know what's supposed to happen. When you get to that "this and this and this and maybe something else", the person reading it next week, six months from now, a year from now, trying to make sure it still works, will find it hard to know what it does. I think people are going to be more likely to say, I don't understand this test, I'll just remove it, than to figure out what it does. And that's really dangerous.

Rebecca Parsons: Exactly.

Postel’s Law and unexpected use cases [09:42]

Thomas Betts: I think you also mentioned in your talk that one of the things that hadn't changed so much over the years and is probably going to stick around is Postel's Law. That you want to be liberal in what you accept and conservative in what you send out. Does that kind of tie into that idea of dumb pipes and smart endpoints?

Rebecca Parsons: It ties in a little bit, but fundamentally, as soon as you put something out over whatever channel, you've lost control of who can see it. I mean, if that endpoint is locked down and they have to come to you and beg your permission to be able to access it, okay, you still have control. But in general, we don't know who's going to make use of it. And so as soon as you put something out there, you are coupled, but you're coupled to things that you don't necessarily understand.

I remember one client we had many years ago. They did absolutely the right thing in buying an off-the-shelf product: they changed all their business processes to align with what the product wanted them to do, and they thought, great, now we can upgrade, we've got this covered. And then the next major version came out, and they realized that, unbeknownst to them, there were 87 different mission-critical reports directly accessing their database. And so now, before they could do their upgrade, they had to change all of those other 87 things and figure out how to get them to work. They didn't even know it, but the database was out there, and, well, it's a whole lot easier to just connect to the database than actually having to go talk to those people, so people just connected to the database. They had done everything right, except they forgot the fact that they had lost control over that access.
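
One common way to apply Postel's Law at a service boundary, sketched here with the Jackson JSON library; the DTO and field names are hypothetical. Be liberal in what you accept by tolerating unknown fields, and conservative in what you send by emitting only the documented shape:

```java
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;

// Tolerant reader: unknown fields in incoming JSON are ignored rather
// than rejected, so senders can add fields without breaking us.
@JsonIgnoreProperties(ignoreUnknown = true)
public class OrderRequest {
    public String orderId;
    public int quantity;
}

class Boundary {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // The extra "promoCode" field is accepted and silently ignored.
        OrderRequest req = mapper.readValue(
            "{\"orderId\":\"A-1\",\"quantity\":2,\"promoCode\":\"X\"}",
            OrderRequest.class);

        // Conservative sender: serialize exactly the documented fields,
        // nothing incidental that consumers could silently couple to.
        System.out.println(mapper.writeValueAsString(req));
    }
}
```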

Thomas Betts: And then it goes back to that first point of you know things will change, but it's very difficult to predict how they're going to change. You can't predict how people are going to use your system once you put something out there.

Rebecca Parsons: Exactly.

Thomas Betts: And so you've now got those dependencies that you didn't plan for.

Rebecca Parsons: Yep. It's like us technologists are problem solvers. So we see the problem, this is our solution to the problem and this is what people are going to do with it. Part of how things go so horribly wrong with technology is when people see something and it's like, "Ooh, I can make this do that." Well, you never expected anybody to do that with your product, but it does it. And we're actually very bad at predicting those kinds of things, and that's why we have to be able to adapt as people are using our technology in different ways.

Thomas Betts: There's a great XKCD comic about "can you please put the feature back in, because it was warming up my CPU and keeping my coffee warm", or something like that.

Rebecca Parsons: Yeah.

Thomas Betts: We fixed this bug. Oh, I needed that bug. And that's the classic case of we fixed the problem and it turns out that bug was a feature for somebody.

Rebecca Parsons: Exactly.

Testing fitness functions [12:26]

Thomas Betts: We're talking about testability, but also those fitness functions. Is there tooling that we can use that helps us figure out, how do I know that I've satisfied my fitness functions? How do I test my system to say it is still meeting my architectural requirements as I had them defined?

Rebecca Parsons: Well, a lot of that depends on the type of fitness function. And one of the things that we lay out in the book are the different categories. Some are static, and so you just put that particular fitness function into your build and it would run like any other thing. Some of them are dynamic, and so they might be more monitoring based. Some are very specific to a particular -ility, so you might run a cyclomatic complexity test. But you might also have something that is simultaneously looking at response time and cache staleness and trying to find a balance for that. Probably the most general of the fitness functions is the Simian Army. There's a whole load of things that are running in production, keeping an eye on different things and reporting or just kicking the thing out of production if it doesn't satisfy it.

So some of these will be breaking, some of these will maybe be trends that you're watching, and some of these might just be alerts to the operations staff. So it really depends on what the -ility is that you're looking at and what kinds of things are possible.

But what we try to remind people of is that everybody has used a fitness function at some time or another. If you've run a linter, if you've run a performance test, if you've monitored the utilization of the CPU in production, those are all fitness functions. The value is that by unifying everything under this name, we can talk about the meta characteristics of fitness functions, and then you get into the individual details depending on what it is that you're trying to measure.
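
As a sketch of the dynamic, monitoring-based category, here is a minimal latency fitness function; the threshold and the hard-coded samples are invented, and in practice the samples would come from your observability stack:

```java
import java.util.Arrays;

// Dynamic fitness function sketch: check that the 99th-percentile
// response time stays under an agreed bound. The verdict is binary,
// so two people can never disagree on whether it passes.
public class LatencyFitnessFunction {

    // Nearest-rank percentile over a sample of latencies.
    static double percentile(double[] samples, double p) {
        double[] sorted = samples.clone();
        Arrays.sort(sorted);
        int idx = (int) Math.ceil(p / 100.0 * sorted.length) - 1;
        return sorted[Math.max(idx, 0)];
    }

    public static void main(String[] args) {
        // Illustrative samples, in milliseconds; the outlier makes
        // this run fail, which is the point of the safety net.
        double[] latenciesMs = {12, 18, 22, 25, 31, 40, 45, 52, 80, 210};
        double thresholdMs = 200;  // the agreed, unambiguous bound

        double p99 = percentile(latenciesMs, 99);
        if (p99 > thresholdMs) {
            System.out.println("FAIL: p99 " + p99 + "ms exceeds " + thresholdMs + "ms");
        } else {
            System.out.println("PASS: p99 " + p99 + "ms");
        }
    }
}
```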

Thomas Betts: Yeah, I think the challenge people usually have with the -ilities is that some of them are very qualitative. I want the system to be fast, I want it to respond, and the challenge is turning those into quantitative values. Then once you have some quantitative metric you can measure against, great: how do we test that the system is still meeting it? Are we hitting our SLOs and SLAs? Are we having too many alerts go through? There are things you can measure, but it does take a lot of effort to think about that and figure out, how can I say this is important enough to make sure that we're still testing for it on a regular basis?

Rebecca Parsons: Yes. My personal favorite is maintainability. You can't write a fitness function for that, but you can write something that says this code respects the layering of the architecture that we've agreed to, and there's quite good tooling in many of the mainstream languages that will allow you to do that. You can put in things around cyclomatic complexity.
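
That layering tooling includes, for Java, libraries like ArchUnit. A minimal sketch, assuming a recent ArchUnit version and hypothetical package names, of a fitness function that fails the build when the agreed layering is violated:

```java
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;

import static com.tngtech.archunit.library.Architectures.layeredArchitecture;

public class LayeringFitnessFunction {
    public static void main(String[] args) {
        // Import the production classes of the (hypothetical) system.
        JavaClasses classes =
            new ClassFileImporter().importPackages("com.example.shop");

        // Encode the agreed layering: controllers call services,
        // services call persistence, and nothing reaches upward.
        ArchRule rule = layeredArchitecture()
            .consideringAllDependencies()
            .layer("Controller").definedBy("..controller..")
            .layer("Service").definedBy("..service..")
            .layer("Persistence").definedBy("..persistence..")
            .whereLayer("Controller").mayNotBeAccessedByAnyLayer()
            .whereLayer("Service").mayOnlyBeAccessedByLayers("Controller")
            .whereLayer("Persistence").mayOnlyBeAccessedByLayers("Service");

        // Throws, and so fails the build, on any violating dependency.
        rule.check(classes);
    }
}
```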

There are all kinds of different ways that you can quantify those things, but it does take work, and it's a whole lot easier for me as the enterprise architect to say, you need to make this system maintainable, Thomas. And then you come to me and say, okay, I've given you a maintainable system, and I say, no, you haven't, come back and give me another one.

The onus should be on the person who is stating the requirement to say what the requirement actually means. And we don't always do that. And that's why we say the single most important feature of a fitness function is that you and I will never disagree on whether or not it passes.

It has to be that well-defined. And if it's that well-defined, then you, as the developer, know what it is that I am asking you to do, and you can work until you've satisfied that particular requirement, as opposed to having to guess: okay, I'm up to ESP303 now, so I'm sure I'll be able to figure out what the guy wants this time.

But that is something people struggle with. And one of the things that we tried to address in the second edition was we added many more examples of fitness functions to make that idea more concrete, and we crowdsourced a lot of them from people within our network, people within the company, and all of the examples in there are concrete examples that different people have used on projects to start to answer that question, how do I go from this thing that I want to achieve to a fitness function that tells me whether or not I'm achieving it?

Thomas Betts: I think one of the other quotes from your track at QCon was that every software problem is fundamentally a communication problem. When two people read a sentence and disagree about it, that's the communication problem. So if those two people can absolutely agree, and you can bring over a third person and they can also agree on it, that's the big solution. That's amazing.

Rebecca Parsons: Exactly.

Thomas Betts: But that's not easy to do.

Rebecca Parsons: Nope.

Thomas Betts: None of this stuff is for free. You have to work on all these things and there are benefits when you can get that agreement.

Rebecca Parsons: Yes.

The evolution of data [17:03]

Thomas Betts: So one thing that we've seen evolve a lot over the past decade is how we handle data. I started this software stuff years ago, and you only stored your data in one giant database. If you wanted to add new functions, you added more data to the same database. Now we have lots of microservices, and each microservice owns its own data and has its own data store behind it. So we went from one database to dozens or hundreds. Was that something that was addressed in the evolutionary architecture book, or is that something that you're now addressing and seeing differently, as a problem you have to solve separately?

Rebecca Parsons: No, it's definitely a big part of it, and a big contributor to it. And in fact, you can think of our chapter as a symbolic link to a book called Refactoring Databases: Evolutionary Database Design. We keep telling Pramod Sadalage, one of the co-authors, that they ought to reissue the book, but just flip the title and subtitle to Evolutionary Database Design: Refactoring Databases. Because regardless of the kind of data store that you have, as soon as something goes into production with data, if you want to change the way that data is structured, you have to migrate it, and migration is hard. Anybody who's done it knows that it might sound simple on paper, but it never goes the way you want it to go. And so this technique is what allows you to, again, more readily and safely do even large-scale database changes by taking the refactoring approach.

And I'm using refactoring in the precise sense, as opposed to the, oh, I need to refactor my whole system because I'm swimming in technical debt. For relational databases, in that book, they identified the atomic changes you can make, and each big change you want to make is actually a composition of lots of those atomic changes. If you break it down to those individual data refactorings, and couple that with the changes you need to make to the access logic in the code, and then the migration code, you can start to individually uncover those little landmines from August and September of 1984, when this field meant something completely different. Those are the kinds of things that cause data migrations to go wrong: you've got all of this data, it doesn't age well, and there are all kinds of spots in its history where its meaning changed.

And the nice thing as well is that the basic notion in Refactoring Databases applies regardless. If it's a document database or a graph database, the individual refactorings are very different, particularly if you want to migrate to a graph database from a relational database. But one of the things I think has been such a strong enabler of many of the architectural innovations that we're seeing now is the fact that we have broken the notion that if you persist data, it must be in the relational database, with the DBAs as the gatekeepers of access to it. We've really been able to rethink our relationship with data as a fundamental part of the system, and I think that's been an incredibly powerful innovation within our enterprise technology landscape.
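
As an illustration of composing atomic refactorings, here is a sketch of the expand/contract sequence for renaming a column, with invented table and column names and plain JDBC standing in for a migration tool. Each step is small and deployable on its own, which is what makes the overall change safe:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical migration: rename customer.surname to
// customer.family_name as a series of small steps rather than one
// big-bang change. The connection URL is a placeholder.
public class RenameColumnMigration {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/shop");
             Statement st = conn.createStatement()) {

            // Step 1 (expand): add the new column alongside the old one.
            st.execute("ALTER TABLE customer"
                     + " ADD COLUMN family_name VARCHAR(100)");

            // Step 2: backfill existing rows from the old column.
            st.execute("UPDATE customer SET family_name = surname"
                     + " WHERE family_name IS NULL");

            // Step 3: application code dual-writes both columns while
            // readers migrate (a separate deployment, not shown here).

            // Step 4 (contract): once nothing reads the old column,
            // drop it in a later release:
            //   ALTER TABLE customer DROP COLUMN surname
        }
    }
}
```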

Thomas Betts: The way you described it, a series of little atomic steps, fits well with the idea of evolution, not revolution. Don't just make one big change and hope you can jump all the way over the chasm; work out how to make lots of little steps. And databases are good at doing lots of little things, they just give you that nice abstraction, but sometimes you have to go down to the little refactorings. I've been there, when this copy of the data is in version one, and this copy is in version two, and this one's in version three, and now I want to go to version four. If only I had been updating one to two and then two to three the whole way. If I can't replay those steps, I don't know how to make the jump from one to four.

Rebecca Parsons: Exactly.

Platform engineering [20:55]

Thomas Betts: Like you said, there's a whole other book on databases; we'll be sure to have a link to that. But I want to talk about platform engineering. How have platform engineering and a good engineering platform, one that helps us with continuous delivery and gives us all that tooling around knowing what our system looks like, helped? And was that something we didn't have much of when you wrote the first book?

Rebecca Parsons: Well, first we need to disambiguate terms a little bit because I think the story changes depending on what your definition of platform is.

We can take out of scope for this discussion the platform businesses: Uber and Airbnb and such are the platform business model. That might be the ultimate goal of building a platform, but that's not what we're talking about here.

When we think about, in particular, the lower level developer platforms, maybe you've got capabilities like auditing or logging or single sign-on and things of that nature. And you can actually think of even the continuous delivery pipelines as part of that platform that reduces the friction for the person who is trying to deliver business logic. And some of these things were obviously around. Cloud clearly existed. Many of the SaaS platforms clearly existed in 2017. We understood about continuous delivery and such, but people didn't necessarily think of platforms in a way that would support some of these fundamental changes.

When I first started talking about evolutionary architecture, which was long before the book came out, people would come up afterwards and whisper, "Don't you think you're being professionally irresponsible to advocate for evolutionary architecture?" And so even during that time, you wanted your developer platform kind of locked down, and then people would build things on top of it.

What I think is actually more interesting from an evolutionary architecture perspective is when we go up to that next level of platform, which is where we're delivering fundamental business capabilities, that we then want developers to be able to hook together in different ways to create new products, new services for the users. And that's where some of the architecting for evolvability comes in. Do you understand what this part of the system is trying to achieve? Are you constructing these capabilities from the perspective of the business that you are in?

Because when you think about how someone is going to come to you as a technologist and say, I want you to change this business process to do this: in their head, they're not imagining an SAP system or this and that and the other microservice. They're thinking about customers, or they're thinking about products, or they're thinking about routes. They're thinking about the concepts that go into what this business does to make money. And so if you have those business capabilities in that platform structured around the kinds of things that people are going to imagine they're going to want to move around, it's going to be a whole lot easier to move them around. Because the blocks that you have available to you are the same blocks that I have in my head that I'm trying to metaphorically move around.

I do think that as we've refined our thinking around these business capability platforms, they are moving much more towards this notion of being able to evolve the system to do new things with the same kind of fundamental building blocks that we have. So I think we have to separate out the two different kinds of platforms in thinking about that.

Thomas Betts: Gotcha. That's a very useful separation. With extensibility, we've always said, oh, I'm going to add an API and people will consume the API. But now we've got low-code and no-code solutions, and the extensibility that needed a developer on the other side of it has been replaced: any business user can fire up a little webpage and, boom, they're solving a new business problem, because you gave them small composable units that they can put together in a new way. So it might not even be that you're building something in your architecture, but you're allowing that overall system to expand. That's where you're saying the business capability platform allows that kind of synthesis.

Rebecca Parsons: Yes. But those things are even useful outside of the context of a low-code no-code platform, because it might actually be a pretty complicated workflow that you're trying to put together here with different kinds of exception handling, but you still are using those fundamental building blocks and are able to reuse them.

Microservices not required – a monolith can be evolvable [25:22]

Thomas Betts: So does that get to the idea of... It sounds like a microservices architecture is the panacea for this, but do you have to have microservices to be able to do evolutionary architecture? Could I do this with a monolith?

Rebecca Parsons: Yes. You can't do it with a big ball of mud, but you can with a well-structured monolith, although you don't have the individual deployment. We talk about the notion of a quantum, which is a deployable unit, and microservices of course are a much smaller deployable unit. But if you have a well-structured monolith and you are respecting all of the boundaries, you can do many of the things that you want to be able to do for an evolutionary architecture; you will be able to move things around. Just because they live in the same monolith doesn't mean they're coupled.

Now, the problem is it takes a lot more discipline. I remember hearing a talk by Chad Fowler, and he said at one point he decreed that no two microservices could share any part of the technology stack, because he didn't want people to think, oh, here, I can just reach over there and use that instead of doing it themselves. Which I think is an extreme; I think he was just trying to really make a point. But it's a whole lot harder to do that in a microservices architecture than it is in a monolith, where it can be so tempting.

But if you have good discipline and you've got the right kind of separation there, a well-structured monolith can support an evolutionary approach to architecture. The structure of the monolith does matter, though. If it's a layered monolith, where you've got your data layer and your logic layer and your presentation layer, that gives you a certain level of evolvability, but it doesn't necessarily help you that much, because most business changes are going to affect all of the layers. But if you structure it more around domain-driven design bounded contexts, if that's your unit of organization within the monolith, you're going to have the same level of flexibility in terms of the creation of new functionality as you would with microservices. You just have a different deployment strategy.
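
One way to give a monolith that discipline automatically is another ArchUnit-style fitness function; this sketch uses hypothetical bounded-context package names and assumes each context keeps its internals in a dedicated subpackage:

```java
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

public class BoundedContextFitnessFunction {
    public static void main(String[] args) {
        JavaClasses classes =
            new ClassFileImporter().importPackages("com.example.shop");

        // Hypothetical monolith layout: one package per bounded
        // context, with internals under an "internal" subpackage.
        // Everything ships as one deployable, but the ordering context
        // may not reach past billing's public API into its internals.
        ArchRule rule = noClasses()
            .that().resideInAPackage("..ordering..")
            .should().dependOnClassesThat()
            .resideInAPackage("..billing.internal..");

        // Fails the build on any violation, keeping the contexts as
        // decoupled as separately deployed services would be.
        rule.check(classes);
    }
}
```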

Conway’s Law and the importance of shared understanding [27:33]

Thomas Betts: That comment about not letting two microservices reach in and touch each other's technology, so that you have to go through the interface, makes me think of Conway's Law and team structures: teams can only talk to each other through their services, as opposed to, oh, I'm just going to pop onto Slack and ask somebody to make this change for me. How does Conway's Law fit into evolutionary architecture?

Rebecca Parsons: It always comes down to the people. And we like to talk about the Inverse Conway Maneuver, where if you want your system to look a particular way, reorganize your team, and that's what will happen. But the important thing really is to make sure that the boundaries between those teams are drawn in a logical way. Because again, going back to that “this and that and the other, and possibly this” test name, if you and I don't understand what's in your remit versus what's in mine, we're not going to have a clean barrier between the component that you write and the component that I write. And so Conway's Law helps us understand how we align people to the right objectives, but it does very clearly have implications for the architecture, because it's the people who are going to create those components. And if the communication is not there, and if the shared understanding isn't there, then the code isn't going to work.

Predicting the future of evolvable architecture [28:55]

Thomas Betts: I think we're going to wrap up. I said that we were going to talk a little bit about the future. We talked about the evolution from edition one to edition two of your book, so evolutionary architecture has obviously evolved some. How do you see it evolving in the future? Pull out your crystal ball and let us know what's going to happen.

Rebecca Parsons: Well, the first thing I will say is, when Camilla first asked me if I would do the talk, I was thinking to myself, "I can't do this. I can't predict the future." That's the whole premise of the book: you can't predict how it will change. But she said 2025, and that's not that far away.

I do think we're going to continue to get a more sophisticated understanding of just how we can use fitness functions to express some of these more difficult architectural characteristics. I do think we are going to see significant advances from the perspective of observability, along with this increased interest in testing in production and rollback and all of that. I think we're going to see evolution not just in our thinking about fitness functions, but in how we can actually implement them and take action as a result. I do think we're going to see more productivity from AI-based tools in things like our testing strategies.

We're also doing some preliminary experiments, emphasis on the word preliminary at the moment, to help us with legacy modernization by using some of these AI tools to start to parse out what are the actual data flows through some of these old legacy systems. So I think some of the advances that we're getting in generative AI and AI more broadly are going to help us in dealing with some of these legacy systems.

And then I think we're going to continue to see evolution even the way we think about architectures. From my perspective, continuous delivery has been a great enabler, but the combination of continuous delivery and open source has been a great enabler for things like the microservices architectures and some of the other innovations we're seeing in architecture, because it has allowed us to de-risk deployments.

And as we see more hybrid hardware/software systems, with sensors, IoT devices, actuators, smart homes, smart cities, smart factories, I think we're going to see innovation in, okay, how do we keep those systems up to date? How do we keep our understanding of those systems at the appropriate level, and how do we do diagnostics as those estates get larger and larger?

Thomas Betts: Well, I think that's a great place to end. Looking ahead to the future, maybe in a few years we'll have a chatbot come on and tell us what the next five years will look like.

Rebecca, thanks again for joining me today.

Rebecca Parsons: Thanks Thomas. It's been a pleasure.

Thomas Betts: And listeners, we hope you'll join us again soon for another episode of the InfoQ Podcast.
