
Bringing a Product Mindset into DevOps

Key Takeaways

  • Delivery pipelines are the “mechanism” through which organisations turn ideas into valuable products in the hands of “users”.
  • At the most basic level, pipelines are enablers of value delivery; at their best, they can be a source of competitive advantage.
  • While there are patterns and best practices, no single pipeline design will satisfy all organisations, nor will a delivery pipeline remain static over time.
  • Consequently, we need to treat delivery pipelines as a product or service; that is, design, implement and deliver them with an eye on what the organisational goals, user and stakeholder needs are, both currently and over time.
  • Adding a product mindset to DevOps capabilities is a key ingredient in finding this balance between desirability, viability and feasibility. This strengthens DevOps capabilities and the pipelines they deliver, and ultimately an organisation’s ability to deliver value.


To be successful, organisations need two things: products and services their customers find valuable and the ability to deliver these products and services well…

In this article I will demonstrate why - consequently - we must design, implement and operate our delivery pipelines (the means of turning ideas into products in the hands of users) as we would any other product or service: by adding a “product mindset”.

I will approach this in three parts: first, what I mean by “pipelines (and DevOps)”, second, why we should treat pipelines as a product, and third, what a product mindset is, and how in practice, product management can help and be added to DevOps.

What is a (delivery) “pipeline”?

I see delivery pipelines as the tools, techniques and practices through which we turn ideas into products and services which users can then use and organisations can operate, support and evolve. (DevOps, for the purpose of this article, is the discipline that designs, builds and operates pipelines).

I want to take a broad perspective for the end-to-end of a pipeline: the full value chain starting with identifying problems and opportunities, setting strategic goals, defining streams, to solution design, analysis, implementation and quality assurance, to compliance, operations and customer support, and of course, product use.

The traditional “inner” and more holistic “outer” cycle of a pipeline covering the full value chain.
Icons by Flaticon: Elias Bikbulatov

Why do pipelines matter?

Many organisations I work with believe that pipelines are “technical stuff” that sweaty engineers look after somewhere in the basement, and that non-techies, and certainly not management, have to worry about them…

This could not be further from the truth, because pipelines matter at business level, for three reasons:

Pipelines are enablers

At the most basic level, a pipeline is an enabler to turn ideas into products in the hands of users (and subsequently to operate and manage them).

Unfortunately, the pipelines of many organisations are disjointed (there are breaks in the process, like stage gates or manual handovers), inefficient (manual testing, manual resource provisioning, limited self-provisioning), or they meet the wrong requirements (over-designed in parts while having gaps that lead to bottlenecks elsewhere, e.g. the idea that hard-to-configure, GUI-driven tools are better than the command line).

Surprisingly, this is frequently seen as “the way it is”, and organisations and teams largely accept a lengthy and clunky process that results in:

  • Fewer features in the hands of users
  • At a lower quality
  • With slower organisational learning
  • And overall increased pain to deliver and operate (and consequently less motivated teams)

So if for no other reason than good processes, efficiency and effectiveness, you really will want a slick pipeline.

One size does not fit all

I have worked with early-day startups whose key goal was going to market fast, attracting users and learning, using lightweight tooling like Vercel or Heroku to keep DevOps cost and effort to a minimum, all while being able to deploy directly to production many times a day, controlling feature availability via feature flagging tools such as LaunchDarkly.
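The flag-gating pattern such startups rely on can be sketched generically. This is a minimal, illustrative in-memory store, not the LaunchDarkly API; all flag and user names are invented:

```python
# Minimal sketch of flag-gated delivery: code ships to production
# continuously, and a flag decides which users see the new behaviour.
# The store and names here are illustrative, not a real SDK.
from dataclasses import dataclass, field


@dataclass
class FeatureFlags:
    defaults: dict = field(default_factory=dict)   # flag -> bool
    overrides: dict = field(default_factory=dict)  # (flag, user) -> bool

    def is_enabled(self, flag: str, user: str) -> bool:
        # A per-user override (e.g. a friendly beta group) wins over
        # the global default; unknown flags are off.
        return self.overrides.get((flag, user), self.defaults.get(flag, False))


flags = FeatureFlags(defaults={"new-checkout": False})
flags.overrides[("new-checkout", "beta@example.com")] = True


def checkout_page(user: str) -> str:
    """The deployed code contains both paths; the flag picks one."""
    if flags.is_enabled("new-checkout", user):
        return "new checkout flow"
    return "classic checkout flow"
```

A real setup would fetch flag state from a service at runtime so availability can change without a redeploy; that separation of deploy from release is what makes many-times-a-day production deployments safe.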

Financial (but also many other) organisations I work with tend to have a pipeline of more or less separated environments, including dev, integration, UAT, staging and production, sometimes on-prem, frequently with heavily formalised and often manual deployment procedures. Ultimately, how these processes are defined depends on each organisation’s culture and its stance towards risk and quality.

I have also worked with a big government department deploying continuously 100+ times a day across numerous environments, all fully automated and with the highest degree of self-serve capability for engineers to provision resources and run tests.

My team also worked with a medical services company that in the not-so-recent past used to burn code and (manually created) release notes for their quarterly public releases onto DVDs to satisfy regulatory requirements.

The point is this: while there are best-practice patterns and paradigms for pipelines (such as continuous integration, high degrees of automation, and enablement through self-serve), there isn’t a one-size-fits-all, off-the-shelf pipeline to fit all organisations, nor one that would fit the same organisation over time. Organisations have different needs and demands in terms of how to handle ideas and requirements; how to create, deploy and test code; how to assure quality, report, run, operate and audit. And their needs will change along with their strategy and environment (new strategic goals, new customer expectations, new regulatory requirements, new technologies).

So we need to tailor our pipelines to our current needs, and allow for evolution so they can become what they need to be in the future.

Pipelines are strategic assets and can be a source of competitive advantage

If we consider the intrinsic role of an organisation’s pipeline(s) as “enabler” of value delivery, and that their design is contextual, then we should not only treat them as a corporate asset, but consider them a source of competitive advantage.

Over three years, my team helped the medical services company I mentioned above to streamline the regulatory process: allowing compliance to raise risks against epics and stories, link them to features, codebase, test cases and test results (in their backlog management tool), and automatically spit out release notes covering and linking all these aspects in a traceable and auditable manner. This reduced an effort of around 20 person-days per release to practically nothing, and it increased the quality of the release documentation.
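To make the traceability idea concrete, here is a hypothetical sketch. The record shapes and IDs are invented; a real pipeline would pull them from the backlog management tool’s API rather than hard-code them:

```python
# Hypothetical sketch of automated, traceable release notes: risks link
# to stories, stories link to tests, and the notes render the full chain.
risks = [{"id": "R-1", "text": "Incorrect dosage display", "stories": ["S-10"]}]
stories = {"S-10": {"title": "Show dosage in mg", "tests": ["T-7"]}}
test_results = {"T-7": "PASS"}


def release_notes(version: str) -> str:
    """Render release notes linking each risk to its stories and test results."""
    lines = [f"Release {version}", "=" * (8 + len(version))]
    for risk in risks:
        lines.append(f"Risk {risk['id']}: {risk['text']}")
        for sid in risk["stories"]:
            story = stories[sid]
            results = ", ".join(f"{t}={test_results[t]}" for t in story["tests"])
            lines.append(f"  Story {sid}: {story['title']} (tests: {results})")
    return "\n".join(lines)
```

Because every line is generated from linked records, the document is reproducible and auditable, which is exactly what a regulator asks for.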

By deploying directly to production, the startup, acting in a highly competitive space with fast moving innovation, had an edge over other companies by being leaner (fewer people) and being faster to market (more value for customers, at a lower cost and faster learning cycles).

So if we get our pipeline right, it is not just another corporate enabler, but can become a source of competitive advantage.

Pipelines matter, so treat them like a product

If pipelines matter, and we can’t just get one off the shelf, we need to treat them with consideration and care, knowing what the right thing to build is, now and later, and giving guidance to our (DevOps) team… In other words, we need to treat them like a product, and add a product mindset.

Consider this: we would not ask an engineering team to “just build us” an ecommerce website or a payment gateway, or a social media app. We would support them by defining goals, research what is valuable to our users and provide this as context and guidance to our designers and engineers…

What is a product mindset?

A product mindset is about delivering things that provide value to our users, within the context of the organisation and its strategy, and doing so sustainably (i.e. balancing the now and the future).

For the purpose of this article, I will use product thinking, product mindset and product management very much interchangeably.

Creating product-market-fit by balancing desirability, viability and feasibility as the job of product management

In practice this means achieving product-market-fit by balancing what our users need, want and find valuable (desirability), what we need to achieve (and can afford) as an organisation (viability) and what is possible technically, culturally, legally, etc (feasibility), and doing this without falling into the trap of premature optimisation or closing options too early.

To give a tiny, very specific, but quite telling example: for the medical device organisation, we chose Bash as the scripting language because the DevOps lead was comfortable with it. Eventually we realised that the client’s engineers had no Bash experience but, as a .NET shop, were far more comfortable with Python. Adding the user-centric approach that is part of a product mindset at an early stage would have prevented this mistake and the resulting rework.

How do you “product manage” a pipeline?

Ultimately, you just “add product”, which is a flippant way of saying you do the same things you would with any other product or service.

For a startup I worked with, this meant that the lead engineer “just put a product hat on” and looked at the pipeline through the lens of early business goals: use an MVP to gauge product-market fit with a small, friendly and highly controlled group of prospects. Consequently, he recommended opting for speed, e.g. deploying directly to production, feature flags to manage feature availability, AWS Lambdas and AWS Cognito. We would then monitor business development and scale or build more custom features (e.g. authentication) as and when required (rather than build for a future that might never come).

The financial services company from our earlier examples had asked us to help them build a platform to support 100+ microservices and cloud agnosticity (to ensure business continuity). As this was a complex environment, we added a dedicated product owner to support a team of DevOps engineers. First, she facilitated a number of workshops with the product and engineering teams to understand how they currently worked and what was in place. It quickly became apparent that the organisation was missing milestones promised to their clients, because engineers could not release code efficiently (due to manual steps and resource constraints when moving code between environments and provisioning resources). It also became apparent that the organisation would only have three microservices for the next 12 months, and that cloud agnosticity was a long-term aspiration, not a must-have requirement at this point.

Digging into what “value” really meant for the organisation, everyone agreed that right now the teams needed to build and release quality features and hit the milestones promised to customers. Consequently, the product owner reprioritised with the team, creating a roadmap that would focus first on removing the blockers resulting from only two engineers being allowed to manually deploy code to staging and production, then on empowering engineers through basic self-serve (self-provisioning of new microservices and other resources based on standardised templates). Initially this would be focused on one cloud provider, but with future cloud agnosticity in mind. Given that there were only three microservices at this point in time, it was also agreed to build a microservice mesh at a later stage, as and when complexity required it…
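The basic self-serve capability described above can be sketched as template-based provisioning. The template fields and naming rule here are assumptions for illustration; a real implementation would commit the rendered descriptor to a repository and trigger infrastructure automation:

```python
# Sketch of self-serve provisioning: an engineer requests a new
# microservice and gets a standardised, pre-approved descriptor back,
# instead of waiting on a manual deployment gatekeeper.
from string import Template

SERVICE_TEMPLATE = Template(
    "service: $name\n"
    "team: $team\n"
    "runtime: python3.11\n"
    "environments: [dev, staging, production]\n"
)


def provision_service(name: str, team: str) -> str:
    # Enforce the naming standard in code rather than by manual review.
    if not name.replace("-", "").isalnum():
        raise ValueError(f"invalid service name: {name!r}")
    return SERVICE_TEMPLATE.substitute(name=name, team=team)
```

The point of the standardised template is that self-serve does not mean a free-for-all: engineers move fast precisely because the guard rails are built in.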

Tools to product manage a pipeline

Generally speaking, the tools and techniques to “product manage” a pipeline are the same as those for “normal” product management. The following “framework” is a good starting point:

1. Establish context

Start with setting the scene. Understand the context in which the pipeline will operate. Define and align on:

  • What near, mid and long-term goals the delivery pipeline needs to support
  • What key opportunities there are, what problems can be solved
  • What the key constraints are, and what is possible

Remember the medical services example: the initial brief was to containerise existing applications and move them into the cloud. While this was necessary, during our analysis we found that this alone wouldn’t give the organisation the expected benefits of increased throughput, but that this could only be achieved by streamlining the regulatory approval process. 

Modelling of any existing process is highly useful at this stage, especially with a view on bottlenecks and missed opportunities.

2. Identify potential users

As a second step, you will want to understand who will be using the pipeline, benefiting from it, and impacted by it. And you will want to take a broad view here.

You’ll have your usual suspects, like engineers, QAs and DevOps engineers, but I suggest you expand to cover a wider audience, including product people, sales and marketing, and specialist stakeholders such as, in the case of the medical software example, compliance and regulatory bodies. A stakeholder map or onion is my preferred model for this, but a simple list might do just fine.

Example stakeholder onion for the medical services organisation, focusing on regulatory compliance stakeholders.

3. Identify users’ jobs, needs, gains and pains

In a subsequent step, you will want to understand what jobs these users need to accomplish, their needs, related gains and pains, and their expectations and requirements. The value proposition canvas or a similar model, or user personas, work well here. Later, we can use these same tools to start identifying potential solutions for each of these “requirements”.

Note that you may not know where to start, but you also will not want to over-analyse. Here, a service blueprint or an experience map can come in handy, as they allow us to link users, needs and pain points, and thus to identify where it is worth spending more analysis effort. Experience maps and service blueprints are also excellent communication tools that we can even use to show progress.

Coming back to the medical services company, consider the compliance manager: they are worried about identifying risks, and one of their needs is to demonstrate traceability (solution: integrate risk management and the backlog tracker); creating release documentation is long-winded and error-prone (solution: automate document generation); and they would love it if it was all submitted directly to the regulator (solution: integrate).

An adaptation of the Ideation Canvas by Futuris to identify user expectations and potential solutions (as an alternative to the Value Proposition Canvas by Strategyzer).

Experience Map illustrating the process and pain points.

4. Prioritise

Finally, based on all the previous work, you’ll want to prioritise: what to do first, what to support next. A feature map is the perfect tool for this. Here it is best practice to group features into releases that address organisational and team goals over time, thus linking back to the goals identified in the very first activity, creating our product roadmap.

For our medical services company, this meant:

  1. Enable a basic end-to-end process so that teams can easily deploy code across all environments
  2. Create a live environment certified by the regulators
  3. Enable compliance documentation automation
  4. Enable strong self-serve capabilities

Example feature map indicating four “releases” with each prioritised feature, based on the Story Map concept first “invented” by Jeff Patton.

Build vs buy

A frequent question that arises is where to invest and innovate, where to build, which aspects to own, which to outsource, buy, or rent.

I find that Wardley Maps are a great tool for making these decisions, as they guide our strategic approach based on what is relevant across the value chain and where the various solution options sit in terms of industry maturity. This then informs whether to “build or buy”, and whether and how to enable or prevent commoditisation.

Illustration of a Wardley Map for the example medical device company, showing that there is competitive advantage in innovating the regulatory compliance process.

Returning to the medical services company, the Wardley Map for their delivery pipeline confirmed that a good integration server was important but also commoditised, and that we should choose a best-of-breed solution, obviously. More importantly, it indicated that automation of the compliance process was a source of efficiency and competitive advantage, but that there was no existing solution, and that we should innovate in this space. The question the Wardley Map subsequently posed was whether we should IP-protect this process and keep it proprietary, or whether it was more beneficial to work with competitors and regulators to create an industry standard.

When’s it done?

The above activities are especially useful in the early stages of working on a pipeline, for instance during an inception. This inception toolkit provides a pattern and templates which my teams use to set up initiatives. However, as with any product development, you are never done; product management is a continuous activity, not a one-off.

Organisational goals will change, user expectations will evolve, technologies become outdated and new ones become available. Consequently, the pipeline has to adapt and evolve, too. Just think of an ever-changing compliance landscape, or how an organisation might find itself in one industry and one market today, and in totally different ones tomorrow; think also of how we have moved from on-prem hosting to cloud to serverless, and how new technologies such as big data and ML have brought different needs in terms of infrastructure.

Where does that leave you?

Adding a product mindset is beneficial

The feedback I have had from teams and clients, as well as the measurable improvements (throughput, cycle times, quality, value delivery) clearly indicate that adding a product mindset to DevOps is not only a nice-to-have, but a must.

For DevOps engineers it makes their lives easier, it allows them to focus on the right thing, it empowers them: at the most basic level it removes noise and worry linked to not being clear on what to do, and allows them to create a slick pipeline that makes everyone’s lives easier; at its best, it allows DevOps to create a strategic asset for the organisation.

For the organisation, it makes sure we deliver value, enables product delivery, and ensures we use funding wisely: it supports the creation of a pipeline that allows all parts of the organisation to work towards strategic goals, and it reduces the risk and waste that arise when teams are not sure what they should be doing, who to listen to, or which solutions to focus on.

So where does that leave you?

We can “add product” in a very lightweight and informal way by “just keeping it in mind”, or in a more formal way by adding a dedicated product specialist to support DevOps engineers. This means that teams have options to suit their appetite, culture and budget.

When the proposed tools and practices strike a chord, and when you feel comfortable getting your toes wet, there is no reason you couldn’t adopt them tomorrow. You don’t even have to do “everything”: any of the techniques I mention above will add incremental value on its own.

Where this is all a bit new, just grab one of your product colleagues and start involving them in your analysis and decision processes more or less loosely…

Further information and a recording of a conference talk on this topic can be found here.
