Simon Bennett discusses Contracts, Scaling and Agility

| Interview with Simon Bennett by Katherine Kirk on Oct 02, 2014 |

Bio: Simon Bennett is Managing Principal for LASTing Benefits, a consultancy specializing in Lean/Agile adoptions. A popular conference speaker and workshop facilitator, he has significant expertise in how the systems and scenarios we find ourselves in drive our behavior, and he specializes in bringing about self-sustaining organizational change through cognitive complexity and systems thinking techniques.



1. Can you, Simon, tell me a little bit about yourself and how you got to the point of being at this conference?

I guess it starts around the late 90s, when I first started using what we now call agile methods: things like XP, Crystal, and even Scrum to some extent. I started out as a practitioner; these were all just solutions to problems for me at the time. There was no advantage in doing any of them, and I missed the beginning of the hype around the Manifesto until I started getting calls from recruiters around 2006 looking for agile coaches. I said, what is an agile coach? And they said, exactly what you're doing, except you don't build anything, you just tell other people what to do. I always remember hanging up and saying this was the stupidest idea I had ever heard, and about 10 years later here we all are, including me. So after I stopped being a practitioner, I started doing what we now call agile coaching and training, and did standard training and coaching for a couple of years. After a while I noticed that you could get an organization to almost "agile perfection", but when all the coaches left the building, sometimes it just collapsed like a house of cards, which is a great model if you are a consulting company; it's not particularly great if you are that organization. That's what got me interested in Systems Thinking and ultimately Complexity Thinking: how do you actually get something that is self-sustaining, as opposed to something that comes and goes like a fad cycle?


2. And, particularly what will you be talking about at this conference?

At this conference it's a continuation of a series of talks on agile contracting that I probably started in 2009. If I go back to, I think it's Agile 2009 in Chicago, back when there were agile workshops, I did a three-hour workshop that I called the prisoner's dilemma, because what I realized at that point in time was that we knew how to do agile, but when you were selling an agile approach, which was like "hey, you don't know what you want, so we'll figure it out together", against somebody going "yes, you absolutely know what you want, I'll give it to you guaranteed at this time and on that date", we had an incentive in the marketplace for everyone to go in that direction. So I started thinking about it from a novel angle, because it brought up some things from the distant past in terms of game theory, in other words, what were people incentivized to do in the marketplace? That got me to understand that when we write a contract, what we are actually doing is setting up the rules of a game, and the rules of the traditional contract game were zero-sum, which is basically "I win, you lose", an antagonistic relationship, and that is completely against the whole agile mindset, which is the co-creation of value mindset.

So it was like sitting there saying we have to be careful what we do, because, coming back to what I was saying before, there is this larger system in which we are incentivized not to collaborate but to basically exploit each other and generally misbehave. I've been talking about this on and off for a few years, and actually it was Microsoft that invited me, I think last year, basically saying "hey, this session is really, really good", because I go through all these simulations and tenders and things like that, and people find themselves screwing each other over even if they don't want to, because "I'm so agile, I'm so agile, I love everybody, but I can't help but rip you off because it just feels so good". So I rebooted that, and since then I have added in the whole complexity thing, if people are familiar with Cynefin: getting people to understand legal reasoning, how the law works and how things actually work. I guess the new thing is: if you are sitting around waiting for contract law to change, it's not even going to be your great-grandchildren who write a legally enforceable agile contract, so what do we do about that situation?
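The incentive structure Simon describes can be sketched as a tiny prisoner's-dilemma-style game. The payoff numbers below are hypothetical, chosen only to illustrate the dynamic he names: each party's self-interested best response is to exploit the other, even though mutual collaboration is the best joint outcome. Python is used purely for illustration.

```python
# A minimal sketch (hypothetical payoff values) of the contract "game":
# each party chooses to collaborate or to exploit. With antagonistic
# incentives, exploiting dominates even though mutual collaboration
# yields the best joint outcome.

PAYOFFS = {  # (client_choice, vendor_choice): (client_payoff, vendor_payoff)
    ("collaborate", "collaborate"): (3, 3),  # co-creation of value
    ("collaborate", "exploit"):     (0, 5),  # vendor wins, client loses
    ("exploit",     "collaborate"): (5, 0),
    ("exploit",     "exploit"):     (1, 1),  # the antagonistic default
}

def best_response(opponent_choice, player):
    """Pick the choice that maximizes this player's own payoff,
    holding the opponent's choice fixed."""
    idx = 0 if player == "client" else 1

    def payoff(my_choice):
        key = ((my_choice, opponent_choice) if player == "client"
               else (opponent_choice, my_choice))
        return PAYOFFS[key][idx]

    return max(["collaborate", "exploit"], key=payoff)

# Whatever the other side does, each party's self-interested answer is
# to exploit -- the rules the traditional contract sets up.
for other in ("collaborate", "exploit"):
    assert best_response(other, "client") == "exploit"
    assert best_response(other, "vendor") == "exploit"
```

The point of the sketch is that no amount of goodwill changes the equilibrium; only rewriting the payoffs (the contract itself) does.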


3. What are the core elements of that, give people a quick overview or summary, if you had to keep it nice and concise?

The core element of that, and this is something that I learned moving from being an in-house agile practitioner to being a consultant, is that if you look at traditional agile, it solves a lot of problems by removing moral hazard. In other words, it basically sits there and goes, hey, we are all on the same side; there is an us and a them, but we are all in the same us. All the early books and all the early techniques are about sitting there and going "no, we are all in this together". Then people (it sounds offensive, but I am using naive in its genuine sense) naively started using it for outsourcing, not realizing that these methods basically dealt with moral hazard by denying it existed. As soon as we got this contracting relationship, we created this giant moral hazard boundary where our interests are not at all aligned, and a lot of the time people are just sitting there going "well, be more agile".

So this session is really along the lines of what you can legally contract for, and about creating the preconditions for making outsourcer and outsourcee the same unit and actually extending the agility upwards. This comes back to the conversation we were having earlier: there is a big difference between contracting for agile and agile contracting. What a lot of people want to do is contract for agile; in other words, they want you to contract to do standups and sprints and story points and planning poker and all that sort of thing. That's what a lot of people mean when they say "can you help me write an agile contract", and actually they don't want an agile contract at all; they want a contract that allows them to do all of the agile rituals, whereas an agile contract is actually contracting for the agile benefits, not the agile methods.

Katherine: Now, you and I were talking earlier today and this whole “agile and agility” [subject] came up as a discussion and it appears you have been thinking through some of those concepts in relation to scaling.

Yeah, so I tweeted this a couple of years ago and it's been an idea in my head for a while: there is a difference between scaling agile, agile at scale and scaling agility; these are all actually different things. Most of what people are talking about when they talk about scaling agile is just that, scaling the agile practices. Again, they contract you for agile; they say "I want a contract form that allows me to deliver things in sprints", and none of these things are necessarily bad, but they are not the same as the others. The idea of scaling agile is scaling the use of the practices, so maybe I have 300 developers, 600 developers, and I want them all sprinting; I want it to look like a big dashboard where I can see these little charts and stuff everywhere, with lots happening at the edge. However, as scale goes up we actually make it more rigid, so there is lots of agile happening, but we end up with less agility; we've actually limited our agility to the tail end. In fact the scale itself limits our agility, because if we make a change up here, then we have this giant roll-on down to all these dependencies.

So I started thinking about the concept of scaling agility and what that looks like, and it leads us into a couple of very interesting places, because it's not enough to simply replicate the practices. In the same way that you don't build a bridge the way you ford a stream, you can't scale agility by simply sitting there and going, I think all the managers should have a standup. What you actually have to do is pop the "why" stack and ask: what are all these practices trying to do, and what does that mean at a larger scale? Again, this is not criticizing agile; it is basically just saying that if you want to get up to this concept of scaling agility, we have to think of it in another way. I was doing this diagram, and I am not sure you can see it, but if you think about any organization, basically they have their strategy, they have their capability, and then they have the things that they do: the features, the needs, all these things that go on. What we have in some cases is that even when you scale agile, it's still pushed down from strategy to need, and this is that classic "oh, we'll have a big vision up here and a vision down there and a vision down there". Then there is this almost assumed competence, that all of these things we need, we have the capability to actually do, and that may be true, but there is no concept of a cycle around these elements.

There is a slightly cheeky line going up this side, which is that often these capabilities have been pulled up by management as a fad; no one has actually sat there and thought about it. The one that comes most strongly to my mind is when everything had to be object oriented in the early 90s, and people were going around saying "we must be using C++, we have to have objects". Do you know what that is and do you know what it's for? "No, but I know we need to have it." I think this is what we have in some cases of agile: people are adopting good things for bad reasons. So you go in and they ask "do you think agile is working for us?", and I am like "well, what did you expect it to do?". This is almost what I am talking about; it is still this rigid structure: strategy flows down, we assume capability, and we deal with this.


4. Is that still strongly hierarchical?

Yes, still strongly hierarchical. And because it's strongly hierarchical, it's not very responsive to change at an organizational level, and this is the irony in my mind. There has been a lot of talk in the last couple of years about "agile is great for complex problems", and it is, but what we get when we scale agile is dealing with the complexity at the coalface, as I am calling it, and if you are a large multinational organization, that's the least of your complexity problems. So we are dealing with the coalface, but we are actually ignoring the complexity that simply comes from scale, and you end up with these very brittle, rigid structures that, when they turn the corner, all of a sudden don't have the right capabilities, because they are actually trying to pull these capabilities from these needs.

Really, the center of the triangle is that all these things still change, and scaling agility is about pulling all three variables into the middle. It really comes down to sitting there going: OK, rather than user stories, we might have strategy stories, little strategy requirements. What you try to pull in is "what capabilities do we have?", as the capabilities might inform our strategy, and "what strategies do we have?", because these might inform all the little bits and pieces. In other words, think in terms of a far more dynamic model where everything informs everything else, as opposed to a top-down model.

Katherine: Bringing them closer together? Almost collaborating the concepts, if that makes sense.

Yes. And it's even beyond collaborating the concepts. One of the problems we get, and this is a key attribute of complexity in humans, is that we get very target-fixated. Think about companies like BlackBerry: they've just released yet another phone with a keyboard on it, and it's like, but that's what we do here. We get this target fixation, we get this confirmation bias, we get a million and one cognitive biases (you can go out there and attend a million and one cognitive-bias sessions), and what we don't necessarily look for is opportunity.

We don't actually sit there and change our frames all of the time. What I am actually talking about is that if you really want to scale agility, you have got to move from this entirely hierarchical approach, where we are driving things down, to a human sensor network approach, where people are always on the lookout, because at that coalface there is a lot of valuable information. We are always thinking about delivery out; we are not thinking enough about information up. The idea is that you might have a technology group that has a tremendous capability, and if they've got awareness of the overall strategy, in sizes that they can consume, they can go "do you realize we have the capability to do this?". Management is not being oblivious; they simply have no idea, because there is no information flow in the other direction. The same thing applies to needs and users: "do you realize we need these features?", and then management can actually look at the needs and the features of the users, and that can inform strategy. It's this more cyclical organizational approach. The scary bit about all of this, and this is week-old thinking, is that when we come to the idea of scaling agility, we might actually have to start thinking about changing the practices at the bottom end.


5. So what things have you been exploring?

One concept I've been exploring is riffing on the whole user story idea. I was thinking about user stories and the several ways in which they fail when they scale, and there are a few things. Actually, let me flip one level up: when we are talking about larger organizations we are talking about complex systems, and when we are talking about complex systems there are three core heuristics for dealing successfully with them. The top one is disintermediation, which means removing any gap between where the problem is and where the decision maker is, where the buck stops. You can see echoes of this in the whole Lean and Agile space; the classic one is "Go to the Gemba", where you go to the Gemba and see with your own eyes, this idea of getting raw data, nothing filtered. The second one is finely grained objects, so lots and lots of small, disposable, finely grained objects, and the last one is distributed cognition, and you get all of these naturally at a small scale.

XP, for example, was a tremendous example of something that handled complexity at the coalface really well, because you had the onsite customer, somebody with actual skin in the game who would end up with the software, dealing with it, so you had this single point of disintermediation, which was really, really good. What's been happening, ironically, in agile is that as we've scaled up we have actually increased intermediation. In other words, we've replaced the onsite customer with a product owner, so now you had a proxy in place; and as we scale up we can't get enough product owners, so we end up promoting BAs into sort of user story machines. You end up with the fact that the users no longer write the user stories; somebody writes them on their behalf, and the more you scale up, the more intermediation you get, more and more and more layers. We have this weird, perverse effect: the more you scale up, the more complexity you have, but the way we are scaling, we are actually dulling down the heuristic that helps us deal with complexity better. So if you think about the whole point of user stories, they work fantastically in the small but begin to fracture when we put them in the large. I started thinking that what we want is not user stories but users' stories; rather than one user story, we want to start getting data about how people live their day-to-day lives.

And if you come back to this looking for opportunity rather than driving down strategy, this is where you get into the concept of micro-narrative. In other words, we don't want to go in and say we know what to do, because as soon as you decide that, you only see things inside that frame; even when you start off with "we're going to scale agile", you start off with the assumption that we are going to build software. So we look at how we build some software to deal with this unbelievably complicated bureaucracy and system we've got, and we are not actually looking at whether we could go to that manager over there and change all these ridiculous policies, because they are antiquated, designed to deal with a problem that we no longer have. Even agile itself, with its supposition of software, can get people thinking "what software do we need to build to solve this problem?" as opposed to "this thing we are doing right here is stupid, and we might be able to solve it with process changes, organizational changes, or things like that". So the idea is constantly gathering data about how people are doing their jobs; the data then informs you of where to go looking, and then what you actually want is that disintermediation again: you want little snippets. Think about a ticketing system (Jabe Bloom often talks about ticketing systems): if you go to a helpdesk, it's full of micro-narratives, full of everything that is causing people pain. The statistics tell you where to look, tell you where the problems are, rather than somebody deciding; and then all of the stories about "this happened" and "that happened" give you the human texture, and then real people can sit there and go "that is the actual problem", and that begins to move you forward.
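The "statistics tell you where to look, stories give you the human texture" idea can be sketched with a toy helpdesk example. The tickets and categories below are invented purely for illustration; only the general shape (tally the categories, then read the raw micro-narratives behind the biggest cluster) reflects what is described above.

```python
# A rough sketch of mining a helpdesk for micro-narratives: counts
# point at the hot spots, and the raw narrative text is kept so real
# people can read the human texture behind the numbers.

from collections import Counter

tickets = [  # (category, micro-narrative) -- illustrative data only
    ("login",   "Locked out again before my 9am meeting, third time this week."),
    ("login",   "Password reset email took an hour to arrive."),
    ("reports", "Export crashed after 20 minutes; lost my afternoon."),
    ("login",   "New starter couldn't log in on day one."),
]

# The statistics say where to look...
counts = Counter(category for category, _ in tickets)
worst_category, worst_count = counts.most_common(1)[0]
print(worst_category, worst_count)  # login 3

# ...and the stories behind the numbers say what the problem feels like.
for category, story in tickets:
    if category == worst_category:
        print("-", story)
```

Nobody decided in advance that login was the problem; the data surfaced the cluster, and the narratives make it concrete enough for a human to judge.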


7. Right. And with scaling agility you are looking more at trying to drive progress, innovation or innovative thought by looking at data, trying to get that human element, and seeing patterns?

Yes. And you can start to have strategic discussions around: we want our users or employees to tell more stories like this one and fewer stories like that one. In other words, rather than talking about features... there are so many talks still; sometimes it's frustrating being in this industry, because there are still these sessions about how to separate the what from the how, to get people to think about business outcomes, and you sit there and go "right, there is a key part of this: it's not just assumptions but interpretation". The fact is that we are always interpreting, and the key part of the original user story was that the only person interpreting that story was the person who wrote it.

I still think the best description of user stories is Ron Jeffries's Three Cs (card, conversation, confirmation), and that still maps onto our user stories, because you end up with these boundary objects: "I have a card and I understand what is written on it; I don't have to interpret it, because it's little symbols from my own mind which trigger what I actually mean, and then we can have a human conversation where I can use everything that makes me human", and you can't go "no, you don't understand what I am saying"; whereas if I hand you a document, you would sit there and go "as soon as I've decided on something feasible that I can do from this, I can start work, and that's really all I want to do".

And this is what we are getting back to: when you get to a large scale you can't write thousands of user stories, you can't have thousands of conversations, so what you've got to do is record people's voices, and record not interview questions but snippets out of their daily life, record the highs and the lows. Then, when you go into a strategy discussion, you can sit there and go "this over here, this is horrible, we don't want people to have stories like this, we don't want people to feel like this; and these things over here, how do we move those people up to that part?". And that's what we do.


8. And when you say people you mean customers?

It could be customers; it really depends on what the software is for. Maybe it's internal systems; whether you see them as employees or customers doesn't really matter: it's the users of the system.

Katherine: So I know that we are talking quite conceptually, but on a practical level, what are some primary elements that you are starting to think about in terms of practical applications of scaling agility? What would be, say, a couple of things someone could take to work tomorrow and try out, or think about, or look at? You've got the user story concept, but how could they practically challenge themselves to move from scaling agile into maybe a scaling agility kind of space? Not so much on a high level, but on a small one.

That's a really good question, and I think that is still something we have to think about and talk about, because (if I think forward to where this could go) an organization that has scaled agility might not be as recognizable as one "doing agile".

Katherine: Interesting.

It would help to have some visual aids, but one of the things when you come into this, because there is a secondary part to it, I guess if I were to give somebody advice: if I think about things like Scrum, Lean Startup, user experience and all of these little bits and pieces, and where they start in terms of the investment required to kick them off, we should probably stop thinking about "or" and start thinking about "and". This is a little bit early on, but the concept is that different problems may have different sets of solutions.

Katherine: They are contextually driven.

Contextually driven, so I can think of a range of anything. In other words, say you can gather a cluster, so you have strong evidence for something. Number one is starting to think about ways to collect evidence, and deciding that you are not forecasting but actually looking at your organization or your customers the way they are now. There was a session I went to yesterday where I was sitting there going "observed behavior is still the best behavior", and this is the whole thing: think about what is happening with my customers that I could collect, collate and analyze now; what could I do to find out what is happening now, before you even think about a vision for the future? A lot of people have a vision for the future and then collect evidence for it, whereas I am sitting there going "figure out a way to look at your data and figure out what you can do now, and if there are really obvious trends, if there are things that people really hate about your software, then that's a good start for a Scrum or Kanban project", which is sitting there and going "right, we have identified it, we want to spend a million dollars and six months on that, and that's reasonable", because it has coalesced at a certain point.

At the other end of the spectrum you might have what we call weak signals, which are indications of problems to come. There is not a lot of data there, so sometimes you don't want to just look for clumps, you want to look for outliers, and that's where looking at the details is important. If you think about a helpdesk and look at the detail in there, you might sit there and go "well, it was only one person, but that was bad; if that happened to 30 or 50 people, it would be a catastrophe". That might be one of the places where you want to do things like set-based design, or just get a pair of people actually working on those sorts of things.


9. So, would you say it's primarily about examining what currently is, the state that your customers are in or whatever they are going through, and then, from there, responding with the most effective methods or approach, rather than choosing the approach beforehand and trying to fit within it?

Right. Because sometimes, if we come back to the complexity thing, I think to sum up complexity for a lot of people it's nice to go back to the whole butterfly effect: the butterfly flapping its wings over here can cause a hurricane over there. We are so into this mindset of "the only way we can make a large change is with a large program of work" that we forget that sometimes two guys just pair programming, working with a couple of specific users and making changes, can have enormous effects at scale. Say you have 3,000 employees all doing a particular task, and you have two developers working for three days who manage to shave 90 seconds off the time these people take to do this task, and they do that task 30 times a day: you've had a massive impact from a small effort. It's literally this concept of working with now, as opposed to constantly forecasting and predicting a future state: this is where we want to go, this is where we want to go.
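The arithmetic behind this example is easy to check. A minimal sketch, using only the figures Simon quotes (3,000 employees, 90 seconds saved, 30 repetitions a day):

```python
# Working through the arithmetic of the example above: two developers,
# three days of work, shaving 90 seconds off a task that 3,000
# employees each perform 30 times a day.

employees = 3_000
seconds_saved_per_task = 90
tasks_per_day = 30

saved_seconds_per_day = employees * seconds_saved_per_task * tasks_per_day
saved_hours_per_day = saved_seconds_per_day / 3600

# Each employee saves 45 minutes a day; across the organization that is
# 2,250 hours saved every single day.
print(saved_hours_per_day)  # 2250.0
```

Against an investment of two developers for three days, the change pays for itself many times over on the first day it ships, which is the butterfly-effect point being made.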


10. [...] Are you saying that in this place of now we will find out what we need to do next?

Katherine's full question: I think there is a significant amount of fear that’s attached to what the future may bring, because I guess we are in a very complex environment, a very fast paced industry, and to say we are focusing on the now can cause anxiety because what about what’s going to happen next? So, are you saying that in this place of now we will find out what we need to do next?

Yes, you'll find out what you need to do next by figuring out what the problem is now, and I bring out basically two things to close this. This is fundamentally the difference between being worried about tomorrow, that fail-safe, robust mindset of "how do we protect ourselves from what's going to happen, and once we figure out what's going to happen (which is, by the way, quite impossible) we'll figure out how to protect ourselves from it", and the resilient approach, which is literally thinking about how we recover quickly: basically coming to the acceptance that bad things are going to happen, and asking how we recover from them.

And there is a koan, or a parable, that always strikes me with this. There are birds sleeping in a dead tree with rotting limbs, and a monk looks up at the birds and says: why do the birds trust the tree? The birds have put their faith in this tree, and I can see the limbs are rotting; it shows how foolish these birds are. And his master says to him: the birds are not trusting the tree, the birds are trusting their wings. That, to me, is a resilient organization; that, to me, is scaled agility: when an organization doesn't have that fear about the future, because it trusts its wings. It doesn't trust the giant tree of process it has created; it trusts its ability to do anything in a sufficiently fast timeframe that it will be able to adapt.


11. Nice. Well, that’s very interesting stuff, good stuff to meditate on. So what’s next for you after this conference?

Getting on another metal tube, and I am going to leave the dome and get on a metal tube for 18 hours and I am going to go back home.

Katherine: To Australia.

To Australia.

Katherine: Thank you very much.