Decentralised Development: Common Pitfalls and how Value Stream Management can Avoid Them

Key Takeaways

  • The three main areas that are affected by decentralising teams are risk, dependencies and governance
  • Value stream management is a process that enables people to manage these dependencies and optimise their overall workflow to deliver more value to customers
  • There are some good practices which can be applied to implementing value stream management
  • There are some important metrics which can be used to help optimise value delivery
  • Effective value stream management can help decentralised teams to work more efficiently

I see it all the time: a big corporation learns about scaled Agile, decides to completely embrace agile ways of working and begins breaking up the teams. As part of this, they often decide to transition to product-oriented teams, stop utilising release managers, and instead hand over all the workloads to the product team.

In most cases, this creates a catastrophe. This decision essentially attempts to eliminate the team managing risk, leaving it to be navigated by individual teams. Now, the people in charge of driving the pipeline are putting all their faith in the new process and hoping it works. These sweeping changes also put architectures in flux because teams are unsure how to navigate the new architectures effectively.

The three areas in danger of pitfalls

By decentralising your development, you’re exposing your team to an array of pitfalls with the most apparent impacting your risk, dependencies and governance.

Risk

Even though you’ve created autonomous teams and empowered them, you’ll still struggle to find who actually owns the risk of delivery. Ultimately, it should be the product manager, but often, teams are still unsure if the product manager is in charge of risk. This uncertainty creates risk for the company because when things blow up, they blow up for everyone, not just the team. Understanding that risk of delivery and who owns it is key, and managing that risk is the next step.

Dependencies

Despite changes to product teams that attempt to be independent, the existing products still have both code and functional dependencies. This means that there are dependencies between teams. When one person makes a change to an application, there are implications for others, such as how to test a scenario or how to make sure value is being delivered. Even if your team is releasing dark (deploying code that is hidden from users) on a particular time frame, you have to ensure everything will work together. This is crucial because customers care about functionality and user experience, so they expect everything to be cohesive. Take online shopping for example. You wouldn’t expect shopping carts to work differently on each page, would you?

Governance

Dealing with regulatory compliance and governance is where the next pitfall can appear. As teams get more autonomous and smaller, they may lose sight of a broader view and fall out of touch with the governance and compliance guidelines that the organisation has in place. However, they remain responsible for delivering the end product that customers want and expect. But who maintains overall visibility?

Individual teams have the insight to see how they’re doing, but they lack visibility into the wider organisation, as well as the time and resources to gain it. This puts the onus on the portfolio manager to identify which teams need improvement and which present the biggest risks. For example, a healthcare customer recently told us that non-compliance could cost them more than a million dollars per day, which clearly illustrates why visibility across teams, and remaining compliant, is critical to every business function.

The solution is VSM – but not as you know it

The answer to these challenges is VSM. But more than just value stream mapping, the answer truly lies in value stream management, and the platforms that provide this. The two terms can be easily confused, especially as they are frequently known by the same acronym (for the purposes here, VSM stands for value stream management, and ‘mapping’ will be clarified as required).

Value stream mapping is the process of visually collaborating on a value stream, which is essentially anything that delivers a product or a service. It provides an overview of all of the activities that different teams carry out during a delivery pipeline, in what order, and how they depend on each other.

In contrast, value stream management is the process that enables people to manage all of these dependencies and optimise their overall workflow to deliver more value to customers. Practicing VSM typically involves a third party tool to help automate many of the mundane manual tasks that usually ensure pipelines move forward, such as checking in repeatedly with different teams and cross-checking all of their coding. The ultimate aim is to reduce the waste that normally occurs within the value stream, to speed up the process and increase the quality of the output.
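One useful lens on the waste VSM targets is flow efficiency: how much of a release's total lead time is actually spent working on it versus waiting. Below is a minimal sketch of that calculation; the stage names and durations are hypothetical, chosen only to illustrate the idea.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    process_time_hours: float  # time spent actively working on the item
    wait_time_hours: float     # time the item sits idle before this step

# Hypothetical value stream for one release; all figures are illustrative.
stream = [
    Step("develop", 16, 4),
    Step("code review", 2, 8),
    Step("test", 8, 24),
    Step("security sign-off", 1, 40),
    Step("deploy", 1, 2),
]

total_time = sum(s.process_time_hours + s.wait_time_hours for s in stream)
value_added = sum(s.process_time_hours for s in stream)
flow_efficiency = value_added / total_time

print(f"Lead time: {total_time:.0f} h, flow efficiency: {flow_efficiency:.0%}")
```

In this made-up stream, most of the lead time is waiting, not working, which is typical of the waste a VSM platform surfaces.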

There are three key reasons why VSM is needed in an organisation:

  1. Organisations are complicated, often with multiple value streams that cross over and interact with each other
  2. Digital transformation is driving a move towards more Agile ways of working, meaning the teams are increasingly asked to improve and measure their performance
  3. New regulations require businesses to meet a higher level of compliance, which takes time and effort, and increases the risk of failure

When implemented effectively, VSM can massively improve the software delivery lifecycle, and it all starts with a simple value stream map.

Getting started on your value stream journey

Step 1: Building a value stream map

A value stream mapping exercise should involve all of the teams that would ever collaborate on a release. Bringing everyone together ensures that all parts of the process are being recognised and tracked on the map. Ideally, there should be two sessions, the first focused on building a map of the current value stream. This is essentially a list of every single action that is completed from start to finish in the delivery pipeline. It includes all of the governance tests that need to be conducted, how all of the individual actions relate to each other, and which actions cannot be completed until something else has been done first.

It’s important to be very thorough during this process, and make sure that every action is accounted for. Once the entire map is complete, you are left with an accurate picture of everything that needs to be done as part of the release pipeline. Not surprisingly, most companies don’t have this visibility today, but it will be invaluable moving forward. For product managers in particular, having a concrete outline of all of the processes that are occurring gives them a clear sense of all the moving parts.
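Because the map records which actions cannot start until others finish, it is effectively a dependency graph, and a valid execution order can be derived from it mechanically. The sketch below assumes a hypothetical set of release activities; the structure, not the names, is the point.

```python
from graphlib import TopologicalSorter

# Hypothetical map of release activities; each key lists the activities
# that must finish before it can start.
activities = {
    "build": set(),
    "unit tests": {"build"},
    "integration tests": {"build"},
    "security scan": {"build"},
    "compliance sign-off": {"security scan"},
    "deploy": {"unit tests", "integration tests", "compliance sign-off"},
}

# static_order yields the activities in an order that respects every dependency.
order = list(TopologicalSorter(activities).static_order())
print(order)
```

A cycle in the map (two activities each waiting on the other) would raise an error here, which is exactly the kind of process flaw a mapping session should expose.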

Following this, the second session should cover the future state. The aim here is to produce a similar map, but one that focuses on what the organisation would like it to look like in the future. This gives you something to aim for and measure against as you work to improve your processes.

Once you have these maps, they don’t specifically do anything on their own – this is where VSM comes into play.

Step 2: Using a VSM platform

Managing value streams by continuously improving them to deliver more value to customers requires constant analysis of how work flows throughout the value stream. However, a lot of the work of software development is invisible. It’s easy to lose track of all the activities performed on a work item. But by making work visible in the value stream, and being able to understand how value streams perform, leaders can identify opportunities for improvement and measure the outcome of each effort. Overall, this can dramatically increase the speed and quality of their software development and delivery over time.  

To do this, a VSM platform takes the map that you have created, and by analysing the movement of a release along the pipeline, it identifies activities that are taking too long, or where work is not being completed to a high enough standard. It looks much deeper than just these two areas, but these are typically the ones that slow down a release and lead to bottlenecks. The goal of VSM is to optimise the length of time from the idea to the realisation of value, providing the customer with exactly what they want without compromising on the quality.
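The core of that bottleneck analysis can be sketched very simply: given how long each stage took across recent releases, find the stage with the highest average dwell time. The stage names and figures below are hypothetical.

```python
from statistics import mean

# Hypothetical hours spent in each pipeline stage across three releases.
stage_times = {
    "develop": [20, 18, 25],
    "test": [30, 45, 38],
    "release approval": [60, 72, 55],
    "deploy": [2, 3, 2],
}

# Average dwell time per stage; the slowest stage is the likely bottleneck.
averages = {stage: mean(times) for stage, times in stage_times.items()}
bottleneck = max(averages, key=averages.get)
print(f"Bottleneck: {bottleneck} ({averages[bottleneck]:.1f} h on average)")
```

A real platform layers far more onto this (variance, quality signals, dependencies), but the principle of measuring flow per stage is the same.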

Case study: QBE

One example of a company transitioning to VSM is QBE. Its previous release process involved a complicated web of dozens of manually updated spreadsheets, a large team of offshore developers and testers, and information fed in from five different IT Service Management tools. Its two release managers were forced to spend hours manually inputting and updating data, impeding their work.

Now with Plutora’s VSM Platform, the first and clearest benefit is that the multitude of spreadsheets has been reduced to a far more manageable one. Each team can now update their activities in the platform, and the information is updated in real-time. Release managers can see the status of every activity from one clear central source. Not only release managers, but upper management have also benefited from the tool’s visibility. Senior leaders now have all the information they need that is up-to-date, easy to understand, and instantly accessible. Overall there has been:

  • 33% reduction in deployment time
  • 75% less time building deployment plans
  • More than 20 fewer hours a week spent updating spreadsheets

Making VSM work with metrics

Mapping and VSM work in harmony together because the map provides VSM with all of the areas it needs to focus on. One common mistake, however, is that teams complete the mapping process once and never return to it. Ideally, it should be a continuous process where the maps created the first time are revisited frequently to ensure that they are up to date and include any additional activities that may have been missed the first time. Given that the initial value stream map would have been created with input from multiple teams, it may be challenging to get all of these people together regularly. However, the value gained from keeping up with these sessions more than compensates for this.

Another pitfall to be conscious of here is being overwhelmed by all of the metrics that the platform provides once it is up and running. With so many in one place, it can be tempting to think the job is done and decide that these measurements are enough. They’re not. Release managers need to ensure that the information these metrics provide is used to improve the pipeline and identify any areas which are slowing the releases down. They also need to keep in mind that metrics help track progress towards achieving the goal of impressing customers.

To do this, start with the biggest problem area: find out what’s going wrong and coordinate with your team to fix it. Then move on to the next biggest problem and repeat the process. Some initial key metrics to guide improvement are the four DevOps metrics that measure the throughput and stability of your DevOps pipeline.
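Those four DevOps (DORA) metrics are deployment frequency, lead time for changes, change failure rate, and time to restore service. A minimal sketch of computing them from a deployment log follows; the log entries are hypothetical and a real platform would derive them from tool integrations rather than a hand-written list.

```python
from datetime import datetime, timedelta

# Hypothetical deployment log for a four-week period.
deploys = [
    {"at": datetime(2023, 5, 1), "lead_time": timedelta(days=2), "failed": False},
    {"at": datetime(2023, 5, 8), "lead_time": timedelta(days=3), "failed": True,
     "restore": timedelta(hours=4)},
    {"at": datetime(2023, 5, 15), "lead_time": timedelta(days=1), "failed": False},
    {"at": datetime(2023, 5, 22), "lead_time": timedelta(days=2), "failed": False},
]

weeks = 4  # period covered by the log

# Throughput metrics: how often and how quickly change reaches production.
deployment_frequency = len(deploys) / weeks  # deploys per week
lead_time = sum((d["lead_time"] for d in deploys), timedelta()) / len(deploys)

# Stability metrics: how often change fails, and how quickly service recovers.
change_failure_rate = sum(d["failed"] for d in deploys) / len(deploys)
failures = [d for d in deploys if d["failed"]]
mttr = sum((d["restore"] for d in failures), timedelta()) / len(failures)
```

Tracking these four over time gives a balanced view: pushing throughput up without letting stability slip.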

Circling back to decentralised teams

When you decentralise development, the goal is to make each value stream independent, so that each feature is not dependent on another. This is the transition from a tightly-coupled or monolithic architecture to a loosely-coupled one, known as decoupling. When decentralising teams, an enterprise will also go through this software architecture transition. Many businesses make this change because, in a tightly coupled system, cross-dependencies are codified into the components themselves, so any change to the behaviour of one component often requires changes to components across the entire system.

In contrast, a loosely coupled architecture is composed of elements that can stand independently and are resilient to changes in the behaviour of the components they collaborate with. Communications between components are typically conducted using an asynchronous channel, allowing components to process events and messages individually without impacting the operation of the component that sent the event or message.
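That asynchronous decoupling can be sketched with an in-process queue standing in for a message broker. The two services below are hypothetical, named only to echo the shopping-cart example; the point is that the producer publishes an event and carries on, while the consumer processes it at its own pace.

```python
import queue
import threading

# An asynchronous channel between two loosely coupled components.
events = queue.Queue()

def cart_service():
    # Publishes an event without waiting for any downstream consumer.
    events.put({"type": "item_added", "sku": "ABC-123"})

def analytics_service(processed):
    # Consumes events independently; its speed never blocks the producer.
    event = events.get()
    processed.append(event["type"])

processed = []
consumer = threading.Thread(target=analytics_service, args=(processed,))
consumer.start()
cart_service()   # producer returns immediately after publishing
consumer.join()
print(processed)
```

In production this channel would typically be a message broker or event bus, but the contract is the same: components share events, not call stacks.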

Since much of the requirement for centrally-coordinated change comes from tightly-coupled systems and incidents caused by unpredictable system dependencies, reducing these dependencies is key to protecting the teams from system fragility. Teams take ownership of these architectural discussions and drive cross-team conversations through communities of practice/interest or agile-at-scale techniques such as Scrum of Scrums. Additionally, organisations practice inner-source where teams can see and change (with visibility and peer-review) each other’s systems.

Conway’s Law tells us that we will design systems that look like our organisational communication structures; if our teams are autonomous and loosely coupled, so will our systems architecture be. Using a microservices and API model leads teams to a place where they can test and deploy small pieces independently. It means there are more pieces to manage, but that’s the trade-off. Because of this, despite loosely coupled architecture clearly having many positives, it can be tricky to manage manually. The separate components still need to be brought together in the end to produce the final output, meaning that the teams – though independent – must still produce code that fits with code from other teams.

A VSM platform shows all of the interdependencies between these different features in one place, providing clear visibility into which releases will impact which teams and organisations.

It’s easier for individual teams to be on the same page because this visibility allows collaboration and streamlines efficiencies, so all teams can access the end-to-end metrics as well as having an overall view into the health of software development and delivery. In addition, VSM fosters governance by utilising workflow and orchestration techniques, ensuring that governance is implemented every step of the way.

Ultimately, this allows organisations to have traceability that maps initiatives across teams, showing what their dependencies are and what will be delivered. By using VSM platform analytics and business intelligence, individual teams can track progress using metrics at every phase of the software delivery pipeline.

As the software development world moves towards decentralised teams being the norm, organisations need to prioritise the efficiency of these teams to ensure that the speed and quality of product releases don’t drop. Investing in continuously improving how your value streams operate – how you develop and deliver software from idea to realisation – can enable organisations to deliver higher quality faster, all while managing risk. Value stream management platforms help organisations achieve this by making work completely visible across value streams, and providing the governance, business intelligence, and workflow orchestration and automation needed to manage and improve them.

About the Author

Jeff Keyes has been with Plutora since 2017, and throughout his career has been writing code, designing software features and UI, running dev and test teams, and consulting and evangelising product messaging. Outside of 6 years at Microsoft, he has been primarily focused on growing startup companies.
