
Investing in Impact - Portfolio Management for Agile Deliveries


Ben Williams and Tom Roden gave a keynote titled “there is no secret sauce” at Agile Testing Days 2015. In their keynote they explored how you can use agile and lean principles in portfolio management to increase business agility.

InfoQ interviewed them about getting project managers involved in agile journeys, using product reviews to decide what to develop next, working with hypotheses in portfolio management, measuring the actual impact of software products, and managing product portfolios using measurements.

InfoQ: You mentioned in your talk that sometimes project management didn't get involved in the agile journey. Can you explain what you mean by this, and why it is important to involve them?

Williams: The agile manifesto came from a bunch of engineers who wanted to do things in a better way. When we deliver a product we use a large cross-section of skills, not just engineering. In a larger organisation, this includes project management. To deliver a complete solution, we need all the people involved in delivering it to be working in a similar way. Problems arise when there is a difference in expectation between the parties involved. Given that our way of working comes largely from technology teams, we have a responsibility to explain the benefits we can achieve to others involved in the work. We think this is particularly important when considering how we manage our investments or organise our work. We can provide much richer information about the impact of software delivery than most project managers expect.

Roden: As Ben says, the agile revolution in software was driven to a large extent by the work and thinking of software engineers, challenging the status quo and the problems inherent in sequential development models that were contributing to the high proportion of failing software projects. Agile software development was often implemented with value placed solely on feature teams. By feature team I mean a cross-functional team, including the business customer and the Product Owner, independently capable of delivering business features and products from concept to cash. There was no mention of middle management in general, and many agile methods were self-contained: they didn’t need an independent Project Manager.

As the relative success of agile development projects started becoming apparent, more agile transformations took place, placing increasing pressure on the existence of middle management roles. Through transformation, Project Managers suddenly found themselves in a very different climate: did the Product Owner now do their role, or did a Scrum Master? For many Project Managers there was no role left and they had to consider other options, like moving into a feature team in a flatter organisational structure, performing another role such as Scrum Master, or moving on.

Rightly or wrongly, the role of Project Manager remained in place in some companies, and was re-introduced by others, particularly larger companies working with bigger bodies of work - programmes involving many ‘agile’ feature teams, for example. Companies forgot to update the project management toolkit, though, and in many cases we’ve seen that they also forgot to update the people, by which I mean train, educate and inform them about the key principles of agility, how to support it and how to take advantage of it. This resulted in many Project Managers applying traditional thinking and tools to agile projects: tightly managing scope and trying to fix it down early on; managing project progress and success based only on scope and time; requesting very precise estimates; measuring just velocity or, worse, effort.

In our talk we didn’t go into the argument for and against having Project Managers per se; we were just observing that many of those who work in agile today have not been through the same training and experiences, or gained the same understanding. Nor have they been given updated tools for the job of ‘managing’ large bodies of work using agile principles and self-organising teams. Wherever companies decide there is a role for a Project Manager, and many large-scale companies practising agile do, including some with very capable agile engineering functions, there is a need to engage them in finding better ways to harness the potential of agile and lean principles.

InfoQ: Agile teams often use product reviews to get feedback and find out whether they are developing the right products. You mentioned that, in addition to this, getting fast feedback on the impact of their products can help teams decide what to develop next. Can you elaborate on this?

Roden: To be clear, I think end-of-sprint reviews, showcases, software demonstrations, and whatever else we use are very valuable exercises. Engaging the customer and stakeholders in reviewing the features and products delivered is crucial for validating that the right product has been built against expectations, and also for exploring ideas and sharing feedback. Seeing software in the flesh is very different from seeing a design mockup or specification. Using practices like acceptance-test-driven development means teams have more success at building features that do what customers (or their representatives) have asked for.

Just because the product does what the customer wants doesn’t mean the product will be successful, though. That success depends on the extent to which the assumptions underlying the business hypothesis prove to be valid. The financial return on investment often comes at a lag, too. So, in order to test the hypothesis underlying that business case, we need to measure the impact our software has on the world.

The world surrounding software products is complex, and so the way in which one given feature changes the world and provides income may be very hard to see, because a number of other factors and variables are in play. For example, there may be other changes in the same product ecosystem at around the same time (from rapid software deliveries) affecting user behaviour, or changes to the business marketplace, to people, to business operating procedures - the list could go on and on. Due to the complexity of the world that products are used in and the lag between introducing a change and seeing the end financial return, we need to introduce a proxy measure that is a good leading indicator for whether we can expect financial return - we call these impact measures. Rather than just review whether the feature delivered meets customer expectation, we also want to measure whether it had the impact we expected on the business and on the behaviour of users of the product.

Williams: Getting feedback from a product owner on how software delivery is proceeding can be valuable. It is especially important if we are at a stage of product development where we don’t have any users. Ultimately, however, we should be looking for feedback from our actual users. As soon as we have people using our products, we should be looking for insights from them. We should look at what our users do rather than what they say they do, as the two are not necessarily the same. Once you can see the actual effect of your software investment, you are in a position to make future choices based on fact and not on gut judgement.

InfoQ: Can you explain how you can use hypotheses in portfolio management when deciding to invest in making a software product?

Williams: We like to use the underpants gnomes business model from South Park to illustrate why hypothesis management is important in portfolio management. The gnomes know what they want to do, collect underpants, and they know that they want to make money from doing that. In software we often see the same thinking: write some software and then make some money. Unfortunately we do not often take the time to articulate the hypotheses surrounding the investment. Our SPDIR model uses 5 artefacts (Spend, People, Delivery, Impact, Return) and 4 separate hypotheses linking them. Separating the artefacts with hypotheses like this enables us to really focus on invalidating, or improving our confidence in, a software investment. Additionally, it becomes apparent that certain people in the organisation will be able to provide insights into different hypotheses. For example, someone in recruiting will be in a much better position to tell you whether your hypothesis about bringing on another team, at your predicted cost and within your required time horizon, is reasonable. Essentially, articulating the business case with explicit hypotheses acknowledges the uncertainty involved and focuses governance and steering discussions on the risk associated with each hypothesis.

Roden: A project, workstream or even a feature has a business case because it has discrete business value and a cost. As Ben alludes to above with the fantastically funny South Park episode about the underpants gnomes, in between cost and return are some other linked hypotheses. These hypotheses need to be made explicit so they can be tested, discussed and challenged as part of the up-front analysis and decision making on whether to invest. They then need to be planned for before committing the change into the delivery pipeline, and validated to inform decisions about whether to keep investing in this area as change is iteratively (or incrementally) delivered.

The SPDIR model Ben describes above is how we frame those linked hypotheses for a piece of work. The objective is to validate, or invalidate, those assertions as quickly and cheaply as possible. This is a nested process, so it can be applied to work items from the small to the large, as long as the item has some discrete value.
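To make this concrete, here is a minimal sketch, entirely our own illustration rather than anything shown in the talk, of how the 5 SPDIR artefacts and the 4 hypotheses linking them might be written down so that each link can be tested on its own (the statements are invented, loosely echoing examples discussed later in this interview):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Hypothesis:
        """One testable link between two adjacent SPDIR artefacts."""
        source: str
        target: str
        statement: str
        validated: Optional[bool] = None  # None = not yet tested

    # Four hypotheses link the five artefacts: Spend, People, Delivery, Impact, Return.
    hypotheses = [
        Hypothesis("Spend", "People",
                   "this budget lets us onboard a full team within a month"),
        Hypothesis("People", "Delivery",
                   "that team can integrate the software investment in 8 weeks"),
        Hypothesis("Delivery", "Impact",
                   "the delivered product measurably raises activation rates"),
        Hypothesis("Impact", "Return",
                   "higher activation converts into revenue, at a lag"),
    ]

    def next_to_test(chain):
        """Governance attention goes to the first untested link in the chain."""
        return next((h for h in chain if h.validated is None), None)

    print(next_to_test(hypotheses).statement)

Framing the business case as a chain like this makes the first untested assumption the obvious place to spend validation effort, and makes it clear which people in the organisation can speak to which link.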

InfoQ: Do you have examples of how you can measure the actual impact of software products?

Williams: We talked through some examples during our talk that were taken from our work with financial institutions. In particular, we helped one client focus their effort on capturing the real work drivers for their operational teams. This was something they were doing previously, but data capture was manual and lagged by up to a week, which significantly reduced the value of the information they spent time collecting. When we helped focus them on automated, real-time collection they managed to collect much more data, including the number of failed transactions, the number of unmatched settlements and the total number of settlements. This had the nice side effect of providing an operational dashboard of the health of the business unit.

It is also very easy to apply this to user behaviour. I have worked with clients who frame feature descriptions as business cases: they invest some money to see an impact and then a return. One of these companies invested in improving its onboarding flow and measured the impact this had on activation rates. The investment meant that fewer people exited the funnel before they completed onboarding. The higher activation rates ultimately led to a higher number of users and greater branding revenue for the company.
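As a rough sketch of what that measurement amounts to (the counts below are invented, not the client's data), the activation rate is simply the proportion of users entering the onboarding funnel who complete it, compared before and after the change:

    # Minimal sketch: compare onboarding activation rates before and after
    # a change. The counts are illustrative, not real client data.

    def activation_rate(started: int, completed: int) -> float:
        """Fraction of users entering the onboarding funnel who finish it."""
        return completed / started if started else 0.0

    before = activation_rate(started=10_000, completed=4_200)
    after = activation_rate(started=10_000, completed=5_100)

    print(f"activation before: {before:.1%}, after: {after:.1%}, "
          f"uplift: {after - before:+.1%}")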

Roden: Some recent examples I’ve been working with are about getting products to reduce manual effort, avoid fines and payouts, and prevent the reputational damage of failing to meet service levels. Reducing fines and payouts has an easily measurable financial return, but that return fluctuates naturally over time, so a better measure of impact is needed to tell whether the software changes made are likely to have had a positive effect. One such payout is interest on borrowing stock in a financial trading environment; by measuring the number of times we have to borrow stock each day, we can see whether a software change reduces that.

Another example involves the risk of losing money through default, or getting fined for not processing business transactions in time. The impact of software change on this can be measured by the number of transactions that get fixed within a day, and by the size and risk weighting of those transactions.

For an example from social media, consider Twitter’s recent change from a star to a heart as its way of ‘liking’ things. The goal seems to be making the application easier to understand in order to attract new users. So how might they measure the impact of that change on that goal? Just the number of new users on the platform? What about existing users who stop using it? Existing users may simply stop using the platform without deleting their accounts, so should they measure traffic in number of tweets? Why not count the number of users that use the heart icon and compare it to how many were using the star icon? Maybe there are other data that could further analyse the impact of that change: types of user, age, etc. It is likely that several measures would be useful, which leads nicely onto your next question :-)

InfoQ: Can you elaborate on how you can create a dashboard of measurements for managing product portfolios?

Roden: As you start to collect impact measures for the features and projects you are delivering, you build up an array of metrics that serve as a radiator for the health of your business. These are the things you are interested in changing with the software you introduce. As new change is proposed, the likely effect on these metrics can be discussed and challenged as part of the analysis of that business case.

Also as change is delivered into production, various factors can be checked continuously from a system perspective across the business - because change may have desirable effects on one area locally, but be detrimental to other areas of the business and overall not be a good investment.

The dashboard provides a good leading indicator of impact on revenue and can also be used as an input into where future change might be targeted. Performing something like a SWOT analysis (Strengths, Weaknesses, Opportunities and Threats) using that data as an input can be useful for working out what needs to change and which investments and projects are having the most impact on the business.

Williams: This starts with the SPDIR model, which highlights the hypotheses of the business case. A hypothesis is inherently uncertain: we have some confidence in a particular outcome, but we know that our hypothesis could be inaccurate. We acknowledge that uncertainty by using ranges for each of the elements in the SPDIR model and extrapolate over time to form a plan. We use our Ranged Planner to visualise and track our investment. This framework allows decision making within the portfolio to be conducted at the appropriate level. While the actual levels of the elements are within range, the team or individual product owner can make autonomous decisions. If the actual level of one of the elements breaks out of its range, then we need a conversation higher up the organisation about how to proceed with the investment.
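A minimal sketch of that escalation rule, assuming each SPDIR element carries a planned (low, high) range; the element names and numbers here are illustrative, and the real Ranged Planner is a visual tool rather than code:

    # Illustrative planned ranges per SPDIR element.
    RANGES = {
        "spend_per_month": (80_000, 120_000),
        "people_onboarded": (6, 10),
        "integration_weeks": (6, 10),
        "activation_rate_pct": (45, 55),
    }

    def needs_escalation(element: str, actual: float) -> bool:
        """Outside the planned range means a conversation higher up."""
        low, high = RANGES[element]
        return not (low <= actual <= high)

    actuals = {"spend_per_month": 95_000, "people_onboarded": 7,
               "integration_weeks": 12, "activation_rate_pct": 48}

    for element, actual in actuals.items():
        if needs_escalation(element, actual):
            print(f"{element}={actual}: out of range, escalate the investment decision")
        else:
            print(f"{element}={actual}: in range, the team decides autonomously")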

InfoQ: In your talk you gave an example of doing a retrospective to improve investment decisions. Can you describe what you learned in this retrospective?

Williams: We retrospected on all the elements in our SPDIR model: Spend, People, Delivery, Impact and Return. The investment that we retrospected on had not generated any income at this stage, so that was a point of retrospection which in turn drove us to look at the other elements. We realised a few main points.

In the original business case, we realised that our ‘spend to people’ hypothesis was overly optimistic. Essentially, we originally thought that we would be able to bring on more people more quickly than we were actually able to, although the cost of the team once on board was roughly in line with expectations. We learnt that we needed a longer lead time for projects that required onboarding people.

We realised that our ‘people to delivery’ hypothesis was also too optimistic. We estimated that it would take 8 weeks to integrate the software investment; this actually turned out to be more like 12. We now had first-hand evidence that we as a team were optimistic about the rate of software delivery.

More generally, we also learnt that the original business case had no explicit impacts. There was no statement about how this investment would impact the world or what level of impact would be considered good. We learnt that all investments made as a company, both internal and external, should include a description of the impacts and the levels we assume we will achieve.

Finally, we retrospected on the retrospective. We decided that doing this just at the end of the investment was too late. We learnt that the review of SPDIR elements and hypotheses should happen much more frequently. These retrospectives will effectively form the basis of a governance structure and steering routines.

About the Interviewees

Ben Williams is a coach, consultant, transformation specialist and author of ‘Fifty Quick Ideas to Improve your Retrospectives’. He helps organisations deliver real business value, consistently and faster. Applying a wide range of tools, techniques and experiences, Ben coaches teams and leaders to appraise and refine their work systems. Drawing predominantly from agile and lean disciplines like Scrum and Kanban, Ben has been involved in driving some radical and large-scale agile transitions, working as a coach and servant leader. He is a regular speaker in the UK and internationally, and was rated top track talk at Agile Testing Days 2014. Follow Ben on Twitter: @13enWilliams.

Tom Roden is a software delivery coach, consultant, quality enthusiast and author of ‘Fifty Quick Ideas to Improve your Retrospectives’ and ‘Fifty Quick Ideas to Improve your Tests’. Tom works with teams and people to make the changes they need to thrive and adapt to the changing demands of their environment. He collaborates with those intent on delivering high-quality software with speed and reliability, supporting ongoing improvement through process and practice refinement, influenced by agile and lean principles. Tom specialises in agile transformation and quality, from management and strategy to practitioner approaches and techniques that help teams and organisations continuously improve. He is a regular speaker in the UK and internationally, and was rated top track talk at Agile Testing Days 2014.

 
