00:36:54 video length
Bio Rebecca Wirfs-Brock invented the way of thinking about objects known as Responsibility-Driven Design. She is lead author of the classic Designing Object-Oriented Software, and Object Design: Roles, Responsibilities and Collaborations. She is the design columnist for IEEE Software and past board member of the Agile Alliance.
I am one of those people who were involved in the early days of object technology. I lived in Portland, Oregon, and worked on Smalltalk. That's where I got into this way of thinking about design, and I've been involved in it for a long time. There are some timeless things about design that have lasted over the years, and what I find really exciting about this conference is seeing how people have taken those ideas and the values of Agile development and really kicked it up a notch, I think.
My corner - that's an interesting way of talking about it, because I fly around a lot, but I spend time in the design space. We've been working with teams who've been trying to figure out how to do design and keep it sustainable while they are delivering value. It's sometimes a compromise to deliver stuff for the customer, because one of the values of Agile is to constantly deliver value - and how do I do that while keeping my design in the code good enough that I can keep the pace going? That's been an interesting challenge. That's my space: I see people who are really trying to continue a sustained performance having to think about that and figure out a process that allows them to do it. Design, I sense, is becoming more important in the Agile community and I'm going "Yes, that's good".
It's interesting that, when I go around, I've always emphasized keeping the design evident in code, but some of the people who were new to Agile development practices focused on "testing and delivering unit tests with code". Not necessarily doing TDD, but delivering tested code - they weren't thinking about preserving the design, or about the design complexity they were creating as they delivered value. They were creating stuff that might have made the design more complex, or obscured it because of their coding practices, and so they made it hard to sustain the pace of continually delivering. The more code you churn out quickly without thinking about design, the bigger the ball of mud you are creating, and the harder it is to keep the system able to grow and support the functionality you have to be delivering.
There are a couple of symptoms that are really obvious. One is when a team gets a story that covers new territory, a bit of functionality that hasn't been there before. They don't have an idea of how long it will take, so they get these lumpy predictions of story development, because they don't know how to add it in or fit it in - they haven't kept the code clean, so it's hard to wedge something in. The other kind of symptom is that, as we are making changes to a bit of functionality, the rate of doing that slows down and you go "Why is that? Why is our velocity going down? We have more code, we have tests, and it's still harder to get stuff done. What's wrong?" Maybe it's because I have to go touch other places in the code, and the design has been tangled up, so I haven't had a good factoring of things. You can sustain a pace for a while by just wedging things in, and maybe it kind of works, but you are not going to sustain the ability to accommodate change and new stuff as well.
Yes. One of the things in my design space that I've been encouraging teams to do - and developers know when they are wedging it in without considering the design hygiene of their code - is, at the end of the sprint, instead of just measuring how well the customer acceptance went and how much we delivered against what we promised, to also ask the developers - because they know - how much debt, or design debt if you will, they added as they delivered this functionality.
If you want to consider code as an asset that you can leverage continually, you've got to worry about keeping the design evident, so that I know what to change, and about preserving it, so that when I go in there it's familiar and I know how to extend it. It's not just "Let's wedge something in", because when you do that, and you keep doing it for a while, instead of something that has objects with responsibilities nicely factored, you end up with what I call a Frankenstein design. It works - Frankenstein lumbers along - but you don't expect Frankenstein to move very fast. It's not enough to just pass the tests - it really isn't. You've got to keep it from becoming Frankenstein kind of code.
It's true, because Frankenstein sort of appears gradually, we become immune to it. We just say that's the way our code base is, this is how long it's going to take us to modify it, and it slows us down.
OK, it's going slower; I guess we have more code, so I guess we should go slower. If we accept those expectations, then you'll say "The velocity just went down because our code base is large", but it doesn't have to - it really doesn't have to, if you keep the design integrity there. Bolting stuff on results in code that is hard to move fast in.
It depends on whether they have a shared sense of the values, of the qualities of the code - because not everybody agrees. If we have collective ownership, sometimes people go in there and do their own riffs and don't worry about consistency across places. If they have a common set of values on the XP team, they refactor when they see things getting out of alignment - because when you are going fast you make your judgments on what you know now, and when you look at the next story and say "I guess I need to rethink this", at that point you need to be doing the refactoring. If you did it then, it would sustain the pace. But if you say "I'm just delivering the value" and forget about it, what I deliver is a hack rather than a design solution that is going to be sustainable. I think that if you really dialed that up, it would work.
Right. Practices without reflection on where we are, relative to our ideal, aren't enough. It's pointing yourself back a little bit and looking at that, thinking about the qualities that are emerging, if you believe in emergent design. We have some ideas, and as we implement them we'll learn more - but only if we ask "Now, what we did, was that a good thing?" It may have delivered value. I'm not saying upfront design, or deep reflection that takes you way off the sustainable pace, but to think about it, and do some measurements about it amongst the team so that they know if they are holding themselves to their standards, is a good thing. I think that's probably something that's missing in the practices that we are talking about.
A lot of teams have a mentor or coach or Scrum Master - some kind of person who is thinking about process. When it comes time for their reflection cycle - be it weekly or monthly - what questions can that person be asking to bring the developers into thinking about their code base, about the quality of their design?
Is it evident in the code? Can I see it in the code? One thing that I think is an easy way of measuring it is: as I'm delivering value, in a cycle or a sprint or whatever, am I incurring debt at the same time - technical debt, design debt? At the moment I create something that has debt, let's count up how much debt I'm building as I deliver value - because you are viewing code as an asset that has to be maintained, and I may be incurring debt while I am delivering functionality. Obviously, it's harder to pay off that debt as I deliver more code, because the more code I add, the more opportunity there is for accidental complexity and breaking design concepts, muddying things up just to deliver that functionality. So, if you were to ask a team at the end of a sprint, "How much debt is there? How many debt points did you create as well?", they'll answer that.
Think of refactoring - Joshua Kerievsky's book about refactoring to patterns - in some sense, we know about code smells, and if you had the team say "Here are some refactorings that I would have liked to have done", and asked "How many of these are there per story?", that's an interesting measure, isn't it? "I would like to do these refactorings, I didn't have time, I delivered the story" - that's a kind of measure of debt. Another measure of debt is "I did something in the simplest way possible, but it was too simplistic and I know I didn't lay the groundwork. I just wedged it in." So something that's overly complex and not well factored is one kind of debt; the other is "I delivered something in too simple a way" - because we value simplicity, but not being overly simplistic. If you just pass that test, I know you are not going to think about the next little case - that's another kind of debt. Another one is when my tests don't express my design: if they are not testing what the design is, if they are just getting coverage, that's a kind of debt, too. Do my test specifications match my design expectations? Some people just write little tests that don't say "given this context, what do I expect?". The quality of the tests you are delivering as you go is a sign of debt, too.
I think it varies with the team and the new territory they are breaking, but if you want to start with simple measures, one thing to ask is "How many story points did I deliver and what kinds of debt did I incur?"
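The sprint-end bookkeeping she describes - counting debt points and deferred refactorings alongside story points - could be sketched like this. This is a minimal illustration, not any real tool; all the names (`SprintLedger`, `record_story`, `debt_ratio`) are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class SprintLedger:
    """Per-sprint tally: value delivered vs. debt knowingly incurred."""
    delivered: int = 0                 # story points delivered
    debt_incurred: int = 0             # debt points the team admits to
    deferred_refactorings: list = field(default_factory=list)

    def record_story(self, points: int, debt: int = 0, skipped_refactorings=()):
        # Ask the developers at story completion: how much debt did we add,
        # and which refactorings did we wish we had time for?
        self.delivered += points
        self.debt_incurred += debt
        self.deferred_refactorings.extend(skipped_refactorings)

    def debt_ratio(self) -> float:
        """Debt points incurred per story point delivered this sprint."""
        return self.debt_incurred / self.delivered if self.delivered else 0.0

sprint = SprintLedger()
sprint.record_story(points=5, debt=2,
                    skipped_refactorings=["extract pricing policy object"])
sprint.record_story(points=3, debt=0)
print(sprint.delivered, sprint.debt_incurred, sprint.debt_ratio())
# → 8 2 0.25
```

Tracking this ratio sprint over sprint gives the team the kind of trend she suggests watching: if debt points grow while story points stay flat, velocity trouble is coming.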
You are kind of delivering debt, which is a scary thing, because you think "I'm delivering value", but you may be delivering debt at the same time - so what's actually going on there?
Yes, and that's OK - it's OK to deliver debt sometimes. We make an investment in delivering functionality, and there are trade-offs; there are always design trade-offs. I know that almost every developer I work with has, at the end of the sprint, a set of things they wish they could go clean up - they do - so ask them then. Another thing to measure - just getting into this "What am I to measure?" - is, if I'm burning down some stories, how many of those are little easy picks that aren't helping me build an understanding of the core design concepts, and of the patterns and practices of how my objects are going to collaborate, which I can then follow again and again.
If I don't lay down the core stuff and understand it from end to end, and I'm just picking off little stuff a piece at a time because those bits are easy and make a good demo for the customer - I've seen people do that - how much am I doing that's core and fundamental to get right? I'm not going to say perfect, but right enough to sustain our rate, versus how much is just easy stuff we can futz around with. If I know how many stories are delivering core value, in a design sense, versus how many are the rest - and if I also gauge how many stories are core stories for the whole project - then I get a sense of how much of the design is yet to be done, really. Design thinking may take experimentation - "I try this, I try that" - and it may take considering options rather than just following the path. I've got to do a little thinking, maybe some experimental programming, and then I go "Well, no, I think this way is better".
If I don't realize that I have to do some of that design thinking, and test with my code to prove it out, then I don't really know how much time it's going to take to deliver the value. It's easy to take the rest - the easy stuff - and say "Our velocity is this. Isn't that cool?", and then you go "Now we have to have domain objects to model concepts, and we didn't do that before because we were just logging on" - it's going to take you longer if you have to do that stuff. It would be nice to know, from a measure, how much of the foundational stuff I am doing per sprint. Some stories might be easy and glossy; they require programming, but not design thinking. There is stuff that's easy but tedious - things to do that don't require a lot of new thinking or experimentation about how I want to make things work together.
There is always stuff that people don't know how to approach solving when they start out with a story card - that's the core stuff that's fundamental to delivering the functionality, and it needs to be measured against what I'm just doing, the easy stuff. If I were doing a project that was an incremental addition to something already existing, there may not be that much core stuff to figure out, because they already have it built in place.
That's a good question about design - how much of this design thinking and figuring out there is to do. If I'm at a sustained pace, delivering new functionality that builds off of what's already there, then I know what patterns and practices to follow to deliver that functionality, and stuff should fit in, extend, and follow the same ways that we do things. There isn't a whole lot of core design work that needs to be figured out; I just follow the patterns that are there. I think there is a different rhythm to this on projects that are adding incremental functionality than on projects that are still figuring it out.
I was just in a reflection - a Scrum sprint reflection - a week ago, and they were talking about "Is this done?", but they didn't talk about how well it was done. I think it's fairly typical not to talk about that with the customer.
If we want to build a whole team and have that awareness, transparency is what should be going on. We also have to have the trust: a team needs to be able to say "We need to work off some of that debt so that we can get back on track". If the customer says "No, that's OK, leave it" or "You guys are bad", then you have a problem - that happens when there is not that trust in the whole team.
But I also see that, on Agile projects, sometimes the customer or the product owner has to go figure out what to do next and what the nuances of that are - they are on their side figuring out stuff - so why shouldn't we, on the design and development side, get our house in order too, from time to time? The more transparent all of that is, on both the customer side and our side of delivering design value and code, the more we are all aware of what's going on. Nobody should panic when you talk about that, because they need to tell us that kind of thing on the customer-facing side, too.
It started out when I first got exposed to object-oriented development - at Tektronix I was involved in the early days of Smalltalk. I came from writing and developing code in assembly language, and the Smalltalk stuff was pretty cool. When I got into it, I observed that the people who really seemed to go fast were thinking about programming in a different way, and I wanted to know what that was. It was thinking about objects in terms of what they are responsible for and how they should collaborate or interact together. If I take that focus - figuring out how to design a responsible object that interacts in predictable ways and collaborates - then I don't have to keep all the details of the design in my head.
Assembly language is pretty complicated; with objects, I can trust that when I ask an object to do what it should do, it will do its job. Understanding how to structure that, so you can build up systems that do pretty amazing stuff, requires you to trust that the collaborator you are using does its job in the right way. Design quality means everything knows what its job is - in this sense of objects - and then I can add new functionality and it fits into the set of collaboration patterns that are already there. That is leverage, but I have to figure out how to do it in a way that will let me keep adding more stuff. I don't just slam in whatever functionality; I have to figure out where it fits, and when you do that, the amazing productivity and the ability to create stuff really fast happens. That is where it came from for me - that passion - observing how the really productive Smalltalk programmers were thinking about how to design their solutions.
One of the things is that, if I understand who should take responsibility for making sure that information arrives and is good, so I can trust it, and that a request happens in a timely manner - if I understand the areas of responsibility between the UI and the work being done behind the scenes - then I can identify regions where collaborations can be streamlined, because one place has taken responsibility for keeping things clean. Within that region, communications between objects can assume that the request and its information came in a timely manner and are well formed, because someone else took responsibility, and this set of collaborators can rely on that. One of the values of Agile development is simplicity - we value the ability to make things as simple as possible.
How do I do that? If I know I can trust that you will send me a well-formed, timely request, because someone else checked, then I don't have to check again. One source of brittleness in code is doing redundant checks when I don't have to - "Why is that check there? Maybe I have to check again here". That's inconsistency: "Who's really responsible? It seems they all are". That breaks down who is doing what and adds accidental design complexity. If you really try to keep it streamlined, it's amazing what you can do. You can make these simplifying assumptions between trusted collaborators in the design. One of the things I've been pushing lately in my Skills for Agile Designers work is: if I identify trust regions within an application or within a complex system, and give the responsibility for those sorts of checks to the objects at the borders, then within a trusted region the collaborations can be simpler - we don't have to be paranoid about all this checking.
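The trust-region idea can be sketched in a few lines: one boundary object owns all validation at the edge, and collaborators inside the region assume requests are already well formed, so they skip the redundant checks she warns about. The domain (a ledger accepting amounts) and all names here are invented for illustration.

```python
class InvalidRequest(Exception):
    """Raised at the border when a request is malformed."""
    pass

class Boundary:
    """Sole owner of validation at the edge of the trust region."""
    def __init__(self, inner):
        self.inner = inner

    def handle(self, amount):
        # All paranoia lives here, in exactly one place.
        if not isinstance(amount, (int, float)) or amount <= 0:
            raise InvalidRequest(f"bad amount: {amount!r}")
        return self.inner.apply(amount)

class Ledger:
    """Inside the trust region: no redundant re-checking of the amount."""
    def __init__(self):
        self.balance = 0

    def apply(self, amount):
        # Trusts that the boundary already validated the input.
        self.balance += amount
        return self.balance

ledger = Ledger()
api = Boundary(ledger)
print(api.handle(50))   # → 50
print(api.handle(25))   # → 75
```

Because `Ledger` trusts its border, its logic stays simple, and there is never a question of "who's really responsible?" for validation.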
That's so interesting. It totally mirrors what Agile does out in the world of human communication. We want to increase trust among the collaborators - among the customers, the developers and so on - and we reduce the amount of documentation by having clear communication and knowing who is responsible for what. Similarly, inside the code - I mean, tests are a form of documentation - you are saying that you don't have to have this exponential explosion of tests if we know where the responsibilities lie.
That's exactly right. Another thing to talk about is: if I really understand, between collaborating objects, what my expectations of you are - what's the contract between objects, what's the interface, not "How do you do your job?" - then I can program to that, and it simplifies things too. It's easier to use what someone offers me if we just agree on how we contractually interface, rather than on how you do your job - that's not my concern. I just want to know that you are going to take my request.
I don't want to micromanage the other objects that I'm collaborating with - absolutely not! - I just have that trust, but I have to have an understanding and an agreement. That idea is really something that people who are trying to keep a design clear in the code have to worry about: the interfaces. Are they the right ones? Are they expressing what you expect? Are they defined in a way that's understandable? There are folks doing Ubiquitous Language, like Eric Evans, trying to express in the code the language of the customer, even taking it a step further and saying "Well, I want to have objects that do what's expected, that I might even explain to my customers." There isn't that disconnect between the language of the implementation and the language of the business concept - that's an even stronger connection. I make my domain objects have that set of responsibilities, and I name what they are doing in terms I heard from the business's way of talking about how our software needs to work. That's very profound stuff that goes on when people do that.
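The two ideas she combines here - programming to a contract rather than an implementation, and naming that contract in the customer's language - might look like this. The interest-calculation domain, the `Protocol`, and all names are hypothetical, chosen only to echo the interest example she gives later.

```python
from typing import Protocol

class InterestCalculator(Protocol):
    """The contract, named in the business's language.

    Callers depend on what is promised, never on how any
    implementation does its job.
    """
    def interest_for(self, balance: float, days: int) -> float:
        ...

class DailySimpleInterest:
    """One implementation; its internals are not the caller's concern."""
    def __init__(self, annual_rate: float):
        self.daily_rate = annual_rate / 365

    def interest_for(self, balance: float, days: int) -> float:
        return balance * self.daily_rate * days

def month_end_statement(calc: InterestCalculator, balance: float) -> float:
    # Programs to the contract, so any calculator can be swapped in
    # without this code changing.
    return round(balance + calc.interest_for(balance, 30), 2)

print(month_end_statement(DailySimpleInterest(0.0365), 1000.0))
# → 1003.0
```

Because `month_end_statement` only knows the contract, a different interest policy can replace `DailySimpleInterest` later without touching the collaborating code - the "I just want to know that you'll take my request" stance in miniature.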
And making that behavior understandable by someone who is not a techno geek.
That's right. It's not always the case that a customer knows how something should work. I may be forming concepts and trying to communicate back to them what I think I heard them say, if I'm having that close collaboration with them: "Is this what you mean? Should we be calculating interest according to these mechanisms, rather than just saying 'compute something'?" We've got to name things and understand them that way if we really try to have that connection. It's a pretty profound set of values you come to when you take this to that level of openness and transparency with the customer. It's interesting that, if I try to do this behavior-driven approach, I'm not just trying to mimic the real world. Back in the early days of object technology we said "Oh, objects are easy, because I just find the concepts in the real world". It takes thought work and conversation - how should it behave here? Objects are not just out there in the real world for plucking; you have to think about the design and how they really should interact. It's a little bit more thinking than just plucking.
In the analogy I made, we have, for example in Scrum, a Scrum Master who is responsible for thinking about the process and holding the vision for the process. Who should be thinking about and holding the vision for the design?
The team should. I mean, you should develop a common mind about how that is. Kent Beck talked about finding the metaphor - that's a way of getting at how we think about the way our software should be designed, in XP practice - but I actually find metaphors somewhat elusive. That's one of those controversial XP practices. What if we can't find a metaphor? What do we do - I guess we won't think about it? In some sense, the team has to come up with a consistent way of thinking about it, and it may be that there are established patterns and practices we can follow, and we know what the concepts are that we are putting in to represent business domain objects. It might also mean that we need to do a little bit of exploration before we just dive in. I'm not saying big upfront design, but on Agile projects you sometimes need to understand what the core concepts are, and the style of collaborations that you may want to practice and follow, and get that figured out a bit before you pile everything on. Sometimes people talk about "What do I do in sprint zero?" - a user experience designer may figure out what the real requirements and styles are, and developers might be prototyping or doing exploratory programming to see what that might be as well. That's something that I see teams doing more of, recognizing when it's new stuff that they've got to figure out.
Yes, design is back. I think that if you have a lot of debt, you are not going to work it all off at one time. In fact, I was talking to someone after the banquet last night who said, "I am doing this refactoring project, and I have to do it in a way that keeps my production system running. What do I do?" His approach is basically building and testing a new part of the system - he keeps his production system running and then puts the new part in. In his case it's not like little tiny refactorings will have the effect he needs. He needs to do something more significant - I won't say heart-and-lungs transplants, but something with a little more complexity to figure out. As he does it, he keeps his tests there and running, and then, when it works, he puts it in.
That's one approach: saying "I'm doing this as an investment in preserving the asset. I'm not doing it in a way that's unbounded, and I'm making sure that I'm testing it as I'm redesigning it." He has small stories for it, and then, when it's ready to go into production, there isn't that risk. I've also seen teams, when they have a huge lot of debt, sit back and say, "What debt can we live with?" If we are not adding a lot of functionality to a part of the system, the debt there is just "We've got bugs coming in." That's a different thing from saying "I want to work off this debt in order to enable a sustained pace of new functionality." You want to know why you are working off the debt, and do it in a way that's going to sustain the functionality you want to keep delivering. That's my advice. The team should take responsibility for suggesting how to work off that debt, and it's a collaborative effort between the business and the team. I'm saying designers need to take responsibility at the end of the day. It's "No, I don't just accept that there is debt here" - I've got to take responsibility for the pain points that we see. That's something you need to do, rather than just saying "They won't let us work it off".