
Q&A with Alison Polton-Simon on Her 'Metrics That Matter' Talk for DevOpsDays NZ


Alison Polton-Simon is a software engineer with ThoughtWorks and a former member of the GoCD Analytics team, where she helped developers, operations and business stakeholders make smarter, data-driven decisions.

Polton-Simon will be giving a talk titled 'Metrics That Matter' at DevOpsDays NZ in October, where she will share her insights into the key metrics which have proven most effective across organisations in helping them to understand and improve their continuous delivery processes.

InfoQ caught up with Polton-Simon to get a sneak peek into her current areas of interest and the metrics she thinks we should start measuring.

InfoQ: Can you please tell us about your current role and areas of interest in the DevOps space?

Alison Polton-Simon: I currently work as a software engineer for ThoughtWorks, a global software consultancy. For the past five months, I’ve been a part of a team working to migrate an application performance monitoring company to new build infrastructure. Porting an existing monolithic codebase and build system over to a set of more independent repositories has been a great opportunity to design a new system from the ground-up, with the advantage of already understanding many of the more complicated potential edge cases.  

In terms of technologies and practices, I’m excited about the increasing trends towards Infrastructure as Code. On this project, we’ve configured all of our pipelines with a DSL, which allows us to reap the benefits of readability and reusability commonly found in more traditional software development. We’ve also containerized the majority of our build agents, which allows us to ensure more reliable build environments and provides developers with an easy way to set up a local test environment that’s consistent with the build servers.

Since our customers are the developers we work with day in and day out, I’ve been able to hear directly how impactful these changes have been, which is hugely rewarding. 

InfoQ: Your talk at DevOpsDays NZ is entitled 'Metrics That Matter'.  Are you able to tell us a little about the talk and how you selected this specific set of measures?

Polton-Simon: Prior to my current project, I worked on an enterprise offering for ThoughtWorks’ GoCD continuous delivery server. Our goal was to develop an analytics service that would provide teams with greater insight into how they might improve their build and delivery processes. Since this was a greenfield offering, we began by speaking with a range of continuous delivery practitioners and consultants about the metrics they found valuable in understanding their progress. The talk I’ll be giving is a synthesis of the common themes and pitfalls we heard while conducting these interviews. 

InfoQ: What types of metrics should we be observing?

Polton-Simon: The chief goal of a software development team should be to reliably deliver value to customers. Regardless of their role, everyone we interviewed had stories of being frustrated by some inefficiency in this daily process.

For developers, this often looked like long cycle times between a commit and a successful build.

For individuals in DevOps roles, challenges could emerge when deploying an application. The metrics we selected -- measures like cycle time and mean time to recover -- quantify these pain points, and allow teams to track and understand their growth.
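Both of these measures reduce to average durations over timestamped event pairs. As a rough sketch (the event data and function below are invented for illustration and are not part of GoCD's analytics), cycle time and mean time to recover could be computed like this:

```python
from datetime import datetime
from statistics import mean

def mean_minutes(intervals):
    """Average duration in minutes over (start, end) ISO-8601 timestamp pairs."""
    fmt = "%Y-%m-%dT%H:%M"
    return mean(
        (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60
        for start, end in intervals
    )

# Hypothetical build records: (commit pushed, first successful build).
builds = [
    ("2018-09-01T09:00", "2018-09-01T09:45"),
    ("2018-09-01T11:30", "2018-09-01T13:10"),
]

# Hypothetical incidents: (failure detected, service restored).
incidents = [
    ("2018-09-02T14:00", "2018-09-02T14:30"),
    ("2018-09-03T08:15", "2018-09-03T09:45"),
]

cycle_time = mean_minutes(builds)    # average commit-to-green-build time
mttr = mean_minutes(incidents)       # mean time to recover
```

Tracked over weeks, the trend in these two numbers matters more than any single value.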

InfoQ: To what extent is the value of such metrics impacted by the local context of the teams which collect these?

Polton-Simon: We aimed to select metrics that address universal concerns in the field of software development, and should feel relevant to every team member. However, that’s not to say that local context is purely irrelevant. 

One of the most significant ways that local context can impact the value of these metrics is if they’re being collected in an environment in which there’s a lack of faith in their validity, or in the individual or group collecting and analyzing them.

Metrics deemed essential by one team that has identified them as personally relevant, or has faith in the organization’s leadership, can be treated as nonsensical or even toxic by another. 

To take a classic example, teams that identify velocity as personally relevant may reflect at each retro on how they could increase their rate of delivery, while ones that consider this metric meaningless may merely attempt to inflate their estimates to create the appearance of increased velocity. 

Given the potential for metrics-driven approaches to inspire fear and gaming of the system, it’s important to invest energy in establishing trust and a common mission during the early stages of a new initiative. 

InfoQ: How do more traditional, often top-down, KPIs and organisation level metrics fit into this picture?

Polton-Simon: Team-level metrics should be constructed in alignment with an organization’s goals and KPIs. Individuals in organizations with conflicting metrics will find themselves trapped by the tension of competing incentives. Organizational KPIs should provide high-level direction for the company’s priorities, while team-level indicators can be used to drive daily efforts.   

InfoQ: Have you seen any examples of collaboration around delivery and other business metrics in tuning for both local and organisational effectiveness?

Polton-Simon: Aligning local and organizational metrics is often a key part of our work as a consultancy. One tool we’ve found successful for identifying opportunities to improve both local and organizational effectiveness is to have a number of teams get together and map out their individual paths to production. 

In this exercise, steps in the process are annotated with process times, wait times and pain points. By highlighting these three areas, teams can identify their own local areas for growth (for example issues like flaky or long-running tests), as well as areas for improved organizational effectiveness (a lack of cross-team collaboration might result in lots of rework due to miscommunicated expectations). 

Teams and organizations can then identify their top pain points, and identify actions to address them at each organizational level. This exercise facilitates the development of a shared understanding across an organization, and empowers teams to address their chief concerns.
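One common way to summarize the process-time and wait-time annotations from such a mapping exercise is flow efficiency: the fraction of total elapsed time spent on value-adding work. A minimal sketch, with entirely hypothetical step names and numbers:

```python
# Hypothetical path-to-production steps from a mapping workshop:
# (step name, process time in hours, wait time in hours).
steps = [
    ("code review",       2.0,  6.0),
    ("integration tests", 3.0,  1.0),
    ("manual QA",         4.0, 24.0),
    ("deployment",        0.5,  2.0),
]

process = sum(p for _, p, _ in steps)
wait = sum(w for _, _, w in steps)

# Flow efficiency: share of elapsed time spent on value-adding work.
flow_efficiency = process / (process + wait)
```

A low ratio like this one points at the waits (here, the queue in front of manual QA) rather than the work itself as the biggest opportunity for improvement.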

InfoQ: How would you recommend measuring the more subjective or qualitative indicators of success, such as customer happiness or experience?

Polton-Simon: I think there are two complementary ways to handle these kinds of indicators. One is to use traditional approaches like the Net Promoter Score, which asks users to rate how likely they are to recommend a particular product or service to a friend, and which has a long legacy in business.

The other is to use less traditional, and/or more qualitative, measures to understand your organization’s performance. The important thing is to tailor these secondary measures to your company’s values.
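As a concrete sketch of the traditional approach, the Net Promoter Score is conventionally computed as the percentage of promoters (ratings of 9-10) minus the percentage of detractors (ratings of 0-6); the survey responses below are invented for illustration:

```python
def net_promoter_score(ratings):
    """NPS on a 0-10 survey scale: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Hypothetical survey responses: 4 promoters, 2 passives, 2 detractors.
ratings = [10, 9, 9, 8, 7, 6, 3, 10]
score = net_promoter_score(ratings)
```

Note that passives (7-8) count toward the denominator but neither group, so the score can range from -100 to 100.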

One quirky example I’ve been impressed by is Zappos, which prides itself on excellent customer service and celebrates long support calls when they serve the best interests of the customer. In one widely-reported case, a customer service representative spent nearly 11 hours on the phone with a customer, assisting them with a purchase, but also discussing vacations and childhood experiences -- going above and beyond to make a great impression on a customer.

Tracking novel metrics like these can help reinforce what sets you apart from competitors.

InfoQ: Can you share any good stories of where the collaborative aspects of DevOps culture have been stimulated through the process of measurement and review?

Polton-Simon: Creating culture change is one of the greatest challenges for any organization, but collaboration can often be increased through involving teams in the process of identifying what to measure. 

At one company we worked with, teams rarely communicated, and the extensive regression testing suite (which took 14 hours to run) was unreliable, with some tests always failing. Employees had gotten in the habit of ignoring test outcomes and given up hope that the pipeline could ever be fully green.

To address these technical and cultural issues, the ThoughtWorks team held a three-day workshop to identify the highest priority areas for improvement, and to offer teams a chance to share their processes and pain points with each other. 

In the next step, teams then selected changes they could begin making immediately to address the challenges they had identified. These sessions also offered an opportunity for cross-team collaboration to select a set of higher-level metrics and identify areas for improvement that straddled team boundaries.
    
In the months since, the tests have gone green, and the time for a change to go from commit to deployment has gone down from 18 days to 10. We’ve also seen increased interest in the state of the build pipelines, and more communication across teams to get them back to a healthy state when failures do occur. The teams continue to identify new areas in which progress can be made to reduce the time from commit to deployment.

That said, metrics aren’t a silver bullet or a panacea. Real collaboration and cultural improvement take a long time, and require a serious investment of effort.

InfoQ: What recommendations would you give to those who are early in this journey and want to start measuring?

Polton-Simon: I’d encourage those interested in measuring their progress to take a collaborative and iterative approach. 

Oftentimes when we start on a new project, we’ll begin by having the team map out their process of delivering code from commit to production. As we do this, we’ll ask about any pain points or areas of friction. For some teams, the greatest pain comes early on, with flaky builds and unreliable pipelines. For others, the issues arise later in the process, in the form of long-running integration tests, or frequently-failing deployments.

Once the team has enumerated its pain points, we’ll vote on the one that feels most pressing, determine a way to quantify it, and then identify some quick experiments that might improve it.

I’d encourage teams that want to get into measuring to follow a similar process. I’d also encourage them to be comfortable iterating -- if you pick a metric and after a few sprints it no longer feels relevant, repeat the process and try something new.

Continuous improvement is most impactful when it’s tailored to addressing your biggest pain points, and when there’s genuine buy-in.

InfoQ: What are you most looking forward to about DevOpsDays NZ?

Polton-Simon: I’m really excited to participate in and learn from the DevOpsDays NZ community. DevOpsDays are a great opportunity to learn from like-minded practitioners and compare how we’re solving similar problems in different tech stacks, industries and stages of company growth. The last DevOpsDays event I went to (in Denver, Colorado) featured inspiring talks and a great sense of community, and I’m sure this one will do the same. On a personal level, I’m also really excited to explore Auckland, as it’s my first time visiting New Zealand!


DevOpsDays NZ will be running in Auckland from October 3-4, where Alison Polton-Simon and a number of international and local speakers will discuss a range of cultural and technical topics.
 
