
Five Principles for Enablement with (Almost) Nothing to Do with Building Tools



Steph Egan shares five principles used to build support, make a broad impact on teams, and inspire change at the BBC.


Steph Egan works as a Principal Software Engineer @BBC.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.


Egan: My Instagram ads fluctuate a lot between all the things which really make me want to click them. Meta have certainly got that algorithm right. One thing I really struggle to resist is productivity apps. Just use this app and your life will change for the better. This idea of something simple giving me all the things that I want: time, feeling organized, clarity. I download the app, maybe use it for a week, maybe a month if it's particularly good. Then I ditch it. No real-life improvements made. It's so enticing, I do it over again. I see this at work too. Part of what I do is to work with teams to help them improve their continuous delivery practices. Usually, the team who's invited me feel that they're moving too slowly, and they really want to do better. Depending on when I get pulled in, we might talk about what they're trying to achieve, what their limitations are, and maybe even workshop their processes to find the bottlenecks and figure out how they can improve.

What I find with these teams is that they seem to inevitably decide that moving to a new CI/CD tooling system will speed things up. They say they spend far too long maintaining their existing system, and this new one is going to make things so much faster and so much more reliable. These tooling migrations can take significant amounts of time, usually months, sometimes longer. Do these tooling changes speed them up? Often, no. We seem to have this innate desire for these tools, though, whether that's my productivity apps, or a new CI/CD tooling system. We love tools with the potential to solve our problems. Engineering enablement is the home of tools with the potential to solve our problems. We build stuff to solve teams' problems. As we've seen, tools don't always solve them. Fred Brooks wrote a paper called, "No Silver Bullet: Essence and Accidents of Software Engineering," back in 1986. He said there is no single development, in either technology or management technique, that by itself promises even one order of magnitude improvement in productivity, in reliability, in simplicity. That's a lot of words, so let me rephrase it a little bit to show what I took from this. The thing that I really took was that there is no technology or practice that, on its own, will provide significant improvement in productivity, reliability, or simplicity.

Background & Outline

I'm Steph. I currently work as a principal software engineer in delivery engineering, one of our enablement teams here at the BBC. My team works to improve our teams' development and release practices. Knowing how much we love these tools, and that they alone are unlikely to solve our problems, I'm going to (almost) completely ignore the tools that we build and instead talk about what I learned in my previous role, leading the acceleration team in the iPlayer and Sounds department, and what we did to improve things for teams alongside building tools. I'll give you five principles for enablement teams, which focus not on the tools but on how we work ourselves and with others. Each principle will have techniques that we've used to apply them. I'll talk about some of the challenges that we've come across along the way as well.


First, let me paint a picture of what working in one of the teams in iPlayer and Sounds was like, around five years ago when we started the team. iPlayer and Sounds teams are responsible for the BBC's TV and radio streaming services, iPlayer and Sounds. While these are two huge products in their own right, the BBC itself is much bigger. iPlayer and Sounds is one department in a collection of many departments across our engineering space. Each of these had their own structures, their own ways of working. iPlayer and Sounds was around 30 teams, which were generally pretty small, about 8 people in a multidisciplinary team. They had full ownership of the infrastructure that they deployed to, the technologies that they used, and how they deployed to them. This allowed for a large amount of variation in how individual teams work. We saw this variation across different slices as well. First, we saw divides across the different offices that we had. We spanned three different sites, in Glasgow, Salford, and London. We also saw divides between the two products, Sounds and iPlayer. Finally, we saw divides in the platforms that teams were developing for: we provide clients for TVs, for web browsers, and for mobile devices. Each of the teams working on each of these clients has different technologies, different limitations, different restrictions. We also have some backend teams as well who support these clients. Each one of these has slightly different ways of working and slightly different limitations on what that can be.

Where We Started (License Compliance)

Where did we start? The first problem that we tackled as a team was one which is maybe not the obvious choice: we started with license compliance. Understanding the licenses that our software dependencies have, and making sure that we're working within those licenses. It does sound pretty dry, and maybe not what you'd expect to hear from an enablement team. A key part of enablement, which we were really keen to tackle, is improving practices and knowledge. License compliance was an area that teams had very little knowledge in. There were very few practices across the teams, and it just hadn't really been tackled broadly. Any tooling that you provide for license compliance gets really interesting around how you approach it, because it can become governance, a set of rules which teams have to maneuver around, causing more pain rather than less, which wasn't really what we were trying to do.

Knowledge Sharing, and Making Things Which Could Be Improved, Visible

This brings me to my first principle. We share knowledge and make things which could be improved visible. One angle we could have taken here is to insist that all teams add an extra step in their pipeline, which blocked specific licenses or required them to acknowledge it before the pipeline moved along. Ignoring the fact that that would have been a huge amount of work for not only us, but also the teams themselves, it would only make sense if the teams fully understood why this was important, and how they could benefit from it. Otherwise, it's just a failing check from a random team, who is now stopping them from getting the really important changes out to live. Not the way to make friends. Instead, we left teams in control of their builds, and focused on educating teams around what licenses mean, what situations could be problematic, and how to handle them. We put all the decision making in the teams' hands and respected the decisions that they made. After all, they have all the context around their software, not us. The way that we did this was through GitHub Issues and report generation. This is generally the first thing that a team would see: an issue raised on their GitHub repo with some details about a potential problem. This links off to more specific information put together by our legal team, and a chat channel to get more help if they needed it. When this tooling was released, it caused waves of conversations around licenses, even in teams not using the tooling. It improved understanding across teams, and created a place where teams could ask questions. We had hoped that this was something that we could provide to almost every team across our department. Unfortunately, coverage is a little bit spotty in some areas, depending on the languages used, but overall we're able to get pretty good coverage.
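As a rough illustration of the scanning side of this kind of tooling, here is a minimal sketch. It is not the BBC's actual tool (which is internal), and the allowlist is an invented example policy. It checks the licenses declared by installed Python packages against the allowlist and reports anything that needs a human decision; in a pipeline, each flagged entry might become a GitHub issue on the repo rather than a failed build.

```python
# Illustrative sketch only -- not the BBC's tooling, and the allowlist
# below is an example policy, not theirs.
from importlib.metadata import distributions

ALLOWED = {"MIT", "BSD", "Apache-2.0", "ISC"}  # example allowlist

def license_report(allowed=frozenset(ALLOWED)):
    """Return (package, license) pairs that need a human decision."""
    flagged = []
    for dist in distributions():
        meta = dist.metadata
        lic = (meta.get("License") or "UNKNOWN").strip()
        # Many packages declare their license via trove classifiers instead
        classifiers = meta.get_all("Classifier") or []
        osi_approved = any("License :: OSI Approved" in c for c in classifiers)
        if lic not in allowed and not osi_approved:
            flagged.append((meta.get("Name", "unknown"), lic))
    return flagged

if __name__ == "__main__":
    # In a real pipeline, each of these could become a GitHub issue
    for name, lic in license_report():
        print(f"{name}: license {lic!r} needs review")
```

The key design choice mirrored here is that the output is a report for humans, not a gate: the team decides what to do with each flagged dependency.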

Knowledge doesn't just live in our tools. However, another way that we worked to help teams is in a consulting capacity. One of the successes that we had was with teams who were working on the TV applications. These teams had made huge strides in their processes. They'd moved from release trains every two weeks into multiple deployments a week, but this was putting more strain on their processes. They really wanted to smooth things out a little bit more. The first thing we did with this team was to run a pipeline review workshop, which is a little bit similar to value stream mapping. These teams worked in a monorepo and followed a single process. We got all the team leads, the test leads, delivery managers, product owners, and everyone else interested in shaping the process, into the same room. We mapped out the entire process using loads of Post-it notes, making sure that everyone could contribute so that we got a full picture of what was happening. It was a lot. I couldn't fit it in a single photo. We talked about how long things took. What was manual, and what was automated. What confidence each step gave people. At the end of the session, we had a new process that they wanted to achieve, and the beginnings of some steps to get there.

Since that workshop, they saw significant increases in the amount of releases that they were able to put out, and a general increase in satisfaction across engineers. I love this graph, because I can point to it and say that the workshop caused all of this improvement. Of course, it was the work that the teams put in after the workshop that really made these changes. It was a catalyst, and it gave them the direction that they hadn't had previously. Not all of the workshops that we run had this impact. It really depended on where the teams were at, and their desire to change. All of the teams found that they had a better understanding of their current processes after we'd run them. Workshops like these have the added benefit of giving the facilitator a view into how the team are working and the processes that they have. This information is invaluable when understanding what direction we want to move next as an enablement team.

Outside workshops, we would also provide more general support and guidance. We would work directly with engineers on teams who required specific assistance. This could be to help them move to a different continuous integration system, or with large refactors in how they were using their tooling. It could be as simple as going through their intentions to check for any gotchas, or to suggest other approaches. Or it could be as in-depth as working with that team for a week or two. Working directly with teams in this way can be a bit time consuming and a little bit difficult to line up. Sitting with engineers, getting into the weeds, gives a huge insight into how that team works and what problems they're facing, but also gives that team a direct contact, who's suddenly more approachable than before.

One of the things that we've adopted more recently is to provide guides and documentation, which maybe seems a little bit odd in hindsight. I think a lot of this was due to our position. We strayed away initially from giving direct advice, keeping as much autonomy within teams as we possibly could. Now our position is a little bit different, which I'll talk more about later, and we're putting a bit more effort into guides and recommendations. One thing that I have found when we do this work, giving suggestions and recommendations to teams, is that it only really works when teams are fully on board with making change happen and listening to the advice that you have. It can really vary between teams, sometimes depending on where the team is at, or even when we got included in the discussions that they're having. They can often be looking for encouragement or reinforcement, rather than a change in direction, which can be a little bit disheartening sometimes, but the view that you get of the team while working with them, and the information from those interactions, is still really valuable. More often than not, those suggestions will come back up further down the road once the team are ready to make those changes. We share knowledge and make things which could be improved visible. We do this by providing information over rules. Bringing information to teams via tools or workshops. Providing hands-on support that teams can call on.

Building and Fostering Communities and Relationships

The next principle is that we build and foster communities and relationships. Understanding where teams are having problems is key to being able to solve them. We need to build up those relationships to be able to understand that. Some of the ways that we did that was to create and run an organization-wide community of practice around development and release practices. We started this community not long after starting the team, and it continues to be an invaluable source of information. It's been running for about five years now, I think. We run it once a month. It can take the form of discussions, retrospectives, or presentations. There's also a very active chat channel which allows teams to ask any questions which crop up day-to-day. This is a retrospective that we ran quite recently, to see what areas teams were looking at, or having problems with. Individuals who attended were able to learn, ask questions, meet others who are in similar situations as them. For us as an enablement team, this originally gave us a view of where all of those other departments were, making sure that we weren't duplicating effort or just generally understanding where the rest of the organization was. It also gives more information about the teams that we're supporting as well. That information is so valuable in informing our direction, and giving an indication of what issues teams might be likely to come up against in the future. Someone's got to be first in finding some of these issues, so it can give us a bit of a flag as to when things are going to happen. It's like the undercurrent of the organization a little bit. This community also included teams who were doing similar things to us in other departments. Generally, all of these teams had different focuses or angles that they were looking at these problems with. It allows us to get to know these teams a little and support each other.

Within the iPlayer and Sounds department, there is an existing structure for what we call communities of action. These are cross-team groups, which meet once a fortnight, and are more directed than broad knowledge sharing. They're spaces for individuals to improve their skills, explore new techniques or technologies, and collaborate with other individuals to experiment in solving problems. One example of this is the teams working on TV clients: they spun up a community of action to improve their continuous delivery working practices. A huge amount came out of that guild, including their move away from regression testing, moving CI/CD tooling systems to improve performance, and the adoption of their pipeline by another team, which has given them significant improvements as well. These guilds give us a slightly different type of view on teams. It's more future looking: where is a team looking to go next, what problems are they interested in solving, that kind of thing. Enablement are able to bring some expertise to this group, encourage collaboration and cross-team understanding, and provide some guidance as well.

One of the things that we struggled with quite early on as a team was ensuring that everybody understood what our team was and how we could help. We were quite different from other teams, so it was easy for us to get forgotten about. One of the ways that we improved that was by making a concerted effort to appear in places where other teams were: all-hands, engineering forums, leadership discussions, that kind of thing. This was really key to building up relationships with the teams that we were supporting, and making sure that we had contacts in every single one. We build and foster communities and relationships. We do that by making sure we have regular knowledge sharing communities. Cross-team communities for in-depth learning and exploration. Being visible where other teams are. All these techniques allow us to build up context around the organization and in specific teams, which means that we have a better chance of teams adopting what we build, and of teams feeling like we're a resource that they can rely on.

Respect Other Teams' Time and Uniqueness

Next, we respect other teams' time and uniqueness. Our product teams have a lot going on, loads of priorities to manage, different angles to keep up with. The time that they have available to start learning new tools or changing how they do things is minimal and it has to be really valuable for them. Particularly when our team was still in its infancy, it wasn't clear to every team what we were about, or how we worked. If we came knocking on their door about something which they might legitimately want to fix, but just didn't have the time to focus on it, we would just get told to join the queue. I think the other thing here is that we have to tackle this problem with a base level of respect for the work that other teams are doing, their practices, and the decisions that they make.

How did we go about showing this respect to teams? Our teams had experiences of tooling which was built internally. These experiences could include things like the tooling not working for their particular scenario, lots of ticket raising, long documentation to read, or large amounts of jargon to understand. It was confusing for teams to understand what was available, what would help them, and how much effort it would take. A lot of teams, because of this, opted to build their own tooling. Sometimes it was so ingrained that they didn't even bother looking for anything else. We had to buck this trend. We wanted to show respect for teams' time by putting as much effort as possible into not adding any more work for them. Often, we did this by removing ourselves as a bottleneck, usually by ensuring that what we built was self-service.

The thing which really exemplified this, to me, was a service that we built really early on called broxy. I can't take credit for it at all. It was another engineer who came up with the idea, but it really solidified our approach as a team. It's the simplest thing that we own, and it's generally the most loved. Why? Because it bucks this trend of things being difficult to use. Broxy is an authentication layer, allowing teams to host their own static internal tooling without having to think about authenticating users. We built it for ourselves really, figuring that we were probably going to want a few websites, but a teammate said, we should open this up, let anyone use it. We did. Teams add a couple of extra lines of config to define a policy for users to access their static website via broxy. It's completely self-serve. They have full control of that policy, and it now fronts tools all over the organization. A lot of the reactions that we had to this were surprise at how easy it was to set up. Is that it? We wanted to build on that reputation going forward to make sure that teams know that we're on their side.
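Broxy is internal to the BBC and its config format isn't public, so the following is purely a hypothetical sketch of what "a couple of extra lines of config" for a self-service access policy could look like; the keys, hostname, and group names are all invented:

```yaml
# Hypothetical policy (invented keys and group names) for a team's static site
my-team-dashboard:
  upstream: https://my-team-dashboard.internal.example
  allow_groups:
    - iplayer-engineering
    - sounds-engineering
```

The point is the shape, not the syntax: the team declares who can see their site, and the shared authentication layer handles everything else, with no tickets and no waiting on the enablement team.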

The second way that we show this respect is to accept that all teams are unique. Because the teams that we support are so varied in their working practices and the technologies they use, there's no way that we can improve an area for every single team. There's certainly no way that we can improve all areas for all these different teams. Each team has their own specific collection of maturity levels across different areas. Maybe they have a great process in place for their dependency management, but they are still relying on manual deployments. Maybe they're really good at deploying their application, but they struggle when it comes to infrastructure changes. Probably they're somewhere in the middle of all these. This has impacted how we talk about our tools, and even what the tools look like to build, at least initially. One example of this was our metrics platform, which provides teams with information on how their development and release processes are doing, so that they can track improvements that they'd like to make.

This has been a big project, and we wanted to get a large amount of coverage across teams. Initially, we planned to provide the metrics to teams so that they could use them. What we saw was a significant amount of variation in usage. Some teams jumped at the opportunity to use these figures, using them to inform improvements that they wanted to make, and other teams just didn't know what to do with the data. In this case, it's particularly interesting because the teams who don't know what to do with the data are probably the ones who could most benefit from understanding it. We're still tackling this problem more broadly, but I think the challenge here is understanding what this lack of adoption is due to. Is it a lack of functionality? Is there too large a barrier to entry, or maybe it's just not something that the teams want? For this project, we think it's a mixture of a lack of functionality on top of a lack of knowledge. We're providing more options within the tool, and support alongside it, to accommodate different teams. Specifically, we're providing more information as part of these metrics to help them be more useful. We're also providing workshops and presentations, working directly with teams to help them get the best out of the tools, providing suggestions for how they can be used, and walking through them. We respect other teams' time and uniqueness. We do that by removing ourselves as the bottleneck. Excellent documentation. Keeping what teams interact with simple. Providing a variety of options to accommodate different teams.
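The BBC's metrics platform itself is internal, so as a hedged illustration only, here is a minimal sketch of the kind of figures such a platform might surface: deployment frequency and median lead time for changes, computed from a list of deployment records. The record shape and function name are invented for this example.

```python
# Hypothetical sketch -- not the BBC's metrics platform.
# Computes two common delivery metrics from deployment records.
from datetime import datetime, timedelta
from statistics import median

def delivery_metrics(deployments):
    """deployments: list of dicts with 'committed' and 'deployed' datetimes."""
    if not deployments:
        return {"deploys_per_week": 0.0, "median_lead_time": None}
    times = sorted(d["deployed"] for d in deployments)
    span_days = max((times[-1] - times[0]).days, 1)  # avoid dividing by zero
    lead_times = [d["deployed"] - d["committed"] for d in deployments]
    return {
        "deploys_per_week": len(deployments) / (span_days / 7),
        "median_lead_time": median(lead_times),
    }

if __name__ == "__main__":
    sample = [
        {"committed": datetime(2023, 5, 1, 9), "deployed": datetime(2023, 5, 1, 17)},
        {"committed": datetime(2023, 5, 3, 10), "deployed": datetime(2023, 5, 4, 12)},
        {"committed": datetime(2023, 5, 8, 9), "deployed": datetime(2023, 5, 8, 11)},
    ]
    print(delivery_metrics(sample))
```

Numbers like these only help teams who know how to read them, which is exactly the adoption gap described above; the metric is the easy part, the accompanying workshops and walkthroughs are what make it useful.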

Radiate A Sharing Mindset Through Collaboration

Next, we radiate a sharing mindset through collaboration. When I started the team, I had ideas which could probably last us about 10 years. We were a small team, and there was no way that we were going to fix everything. Instead, we had to work with others to be able to make big impacts. I mentioned broxy earlier, that tooling acknowledges that we can't fix everything, and that there's still space for teams to build their own tooling. Our team are certainly sitting on the shoulders of the team who built our deployment platform. One of their principles was to ensure that everything that they built also provided an API, allowing other teams to build tooling. They acknowledge they didn't quite go far enough. They left a bit too much space for teams to build their own tooling in isolation. For me, the missing part of this is collaboration. While APIs or an authentication platform like broxy allow for teams to do what they need, while getting out of the way, it doesn't allow for teams to come together to improve what exists rather than making things new. We were a small team, so we needed people to come together and help us with some of this.

There are multiple ways that we tried to achieve this. First, we made a real effort to accept contributions to anything that we built from outside of our team. This form of collaboration was notable in itself; it certainly wasn't the default. Generally, the expectation was that teams were too busy with their own backlogs to focus on anything coming from outside of the team, even if they wanted to. Instead, we really had to put our openness to contributions front and center and make sure that people knew that that's what we were about. We had a policy that we would merge any PR that came our way. We did this to show teams that we cared about the effort that they'd made, and that we wanted to work with them to make our tools work for them. It might sound a little bit scary. It didn't necessarily mean that we'd merge anything without question, but it did mean that we would make a concerted effort to work with them to make sure that their changes got over the line. We also made sure that our contribution docs were encouraging and easy to use. We stuck to languages and tools which were mostly common across the organization.

We also worked alongside other teams which were similar to our own, tying up any tooling that we both provide. For example, another team managed the observability platform for the iPlayer and Sounds department, among others. We worked with their team to provide deployment annotations, from data that we already had, into their system. This was a huge win for us, and started to show how consistency can really make these sorts of enhancements much easier. There are a lot of places this has been really hard, though, since different teams have different focuses to us and often different mindsets. It's an ongoing effort to understand how we can improve our alignment. Teams like ours are definitely not the only teams building tooling, as I've already mentioned. A lot of teams build their own tools. These are often made with intentions of them being open. There are just so many of them that it's not clear what's still supported. Unless a team is particularly good at pushing their tools, they just get added to a never-ending list. This is what comes up if I search our organization for wormhole, which is part of the deployment tooling provided at an organizational level. These are mostly CLI tools. It probably won't be all of them in this one set, but there's a huge amount of duplication here. One way we're starting to improve this is by recommending tooling which is made by other teams, be that teams in other departments or our own.

When we find services which have good amounts of support, and are being used heavily enough that we're confident we can encourage usage of them, we share them. One huge example of this is Releasinator, which was built by another team. Releasinator automates a very specific workflow, so it isn't something that we were likely to handle; it just didn't have the breadth that we were looking for. That team really needed it to improve their processes. While they were building it, we looked with them at other things that could benefit from its functionality. They discovered that it would be helpful for some other services that they build as well. With some encouragement, they ensured that it was usable by any team in the organization. We're now able to recommend it as a solution alongside all of the tooling that we provide. This support and encouragement is starting to build up a collection of tools with clear levels of support and good usage patterns, which is beginning to make waves across the organization. It means that we're able to help teams think about what makes good tooling. We radiate a sharing mindset through collaboration. We do that by prioritizing contributions to anything we build. Working with other teams doing similar things, making an effort to tie things together. Supporting and encouraging usage of other teams' tooling.

Aim for Long-Term Improvements

Finally, my last principle is that we aim for long-term improvements. When we started this journey, we were trying all sorts of different things, looking at different areas that we wanted to improve. We specifically took on small quick wins, with the hope that we could start making some small improvements. These were helpful for us as a team because they were often pretty concrete and obvious, and they helped us get our name out there a little bit. We definitely didn't see large impact from these in people's day to day, though. Although they were useful, they didn't solve a lot of the problems that our engineers were seeing. They were additive rather than ingrained in people's workflows, kind of nice to have. We also worked on larger goals alongside these. These larger projects have, in hindsight, been more successful. This distinction was pretty hard to see at the time, particularly because we have seen improvements over much longer time periods. For example, when we built tooling around license compliance, there was an initial wave of interest, which then died down. Now we're seeing an uptick in adoption again, across the wider organization, without much effort from ourselves, years after we started work on it. I've seen similar things when working directly with teams. A team lead messaged me the other day saying, "Months ago, you offered to talk about how we can improve our deployments. Are you still up for that?" This unpredictability and uncertainty can be a little bit of a roller coaster as far as confidence goes. One thing that we did to improve that was to pull together some key feedback metrics and changes that we'd made over the last 12 months to understand where we were at. We shared that with teams too, and it gave us a great opportunity to call out anyone who'd collaborated with us.

More recently, my team have gone from focusing specifically on a single department to supporting all of them as part of a group of teams focused on engineering enablement. For us, this meant going from supporting around 30 teams to supporting well over 100 teams. It does make some things easier because we're working together with some of the teams who've been building solutions for the organization for quite a long time. We can make sure that our goals are aligned with them by design. It's also forced us to change our expectations, however. We do now have a large view across teams, which gives a much better picture of the state of the organization since we're just exposed directly to more teams. This has increased our confidence around whether the problems that we saw within the iPlayer and Sounds department were specific to that department, or a broader trend which needed to be focused on. It does make it harder to build up direct relationships with teams, though, and we're still working out how best to do that at scale.

One impact that this has had on our work is that while previously, we were trying to cover all of the teams that we worked with, we'd realized that this is just not achievable quickly at this scale. There will be teams who aren't ready for change, yet. There will be teams whose processes are so different that they will take large amounts of time and effort to bring along. Instead, we're looking to get the most value that we can quickly by supporting the broad majority, about 80%, we're thinking right now. We aim for long-term improvement. We do this by measuring a project's success over longer periods. Keeping records of your progress, and reviewing it to smooth out the roller coaster. Tackling the bigger picture tends to bring more success, there probably aren't any quick fixes.


We started with this statement, based on a quote from Fred Brooks: there is no technology or practice that, on its own, will provide significant improvement in productivity, reliability, or simplicity. We've certainly found that we have yet to find a silver bullet. Instead, we're taking a variety of approaches, through building tools and the techniques that we apply alongside that, to work towards improvement. We have also seen teams who have had success speeding up their processes. These teams tend to get there through a variety of approaches, some of it tooling, some of it practices, to achieve their goals. Finally, I have found a productivity app which did make a big impact. I've used it for quite a long time now. Interestingly, this app has a heavy focus on techniques for using it and support around changing your mindset, beyond just using the tool. I do still keep downloading more apps, though. These are my five principles for enablement, which have almost nothing to do with building tools.




Recorded at:

Sep 01, 2023