
Cloud Provider Sustainability: the Need for a Workload Carbon Footprint Standard

Adrian Cockcroft, tech advisor and former VP for sustainability architecture at Amazon, shared his vision at QCon London on sustainability commitments for cloud providers and the current challenges in determining their supply chain carbon footprint. Cockcroft advocated for a new real-time carbon footprint standard.

Cockcroft started his talk by explaining why sustainability matters. Among goals like regulatory compliance and leaving the world habitable for future generations, he highlighted the importance of green market positioning and employee engagement, factors that are often underestimated. Among the actions an engineer can take now, optimizing code, reducing logging, and measuring carbon emissions are the most significant.


Cockcroft described the different categories of GHG emissions: what scope 1, scope 2, and scope 3 are and the main differences between direct and indirect emissions, highlighting the difficulty of accounting for the whole lifecycle. He stressed that scope 3 is where the focus will need to be in the future: as the utility grid decarbonizes over time, scope 3 comes to dominate carbon footprints, as is already the case today for most data centers in the US and Europe.

Describing how to account for scope 2 emissions, Cockcroft explained the difference between location-based emissions (what a company actually uses) and market-based emissions (what a company pays for). While Azure, AWS, and Google Cloud all expect to reach 100% green energy in all their regions via power purchase agreements, each of the major cloud providers has made different sustainability commitments, over different timescales, with different reporting methodologies and levels of detail.
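The distinction between the two accounting methods can be sketched as a calculation. This is a hypothetical illustration: the function names, intensity figures, and kWh values are made up for the example, not drawn from any provider's reporting.

```python
# Illustrative sketch of location-based vs market-based scope 2 accounting.
# All numbers and names are hypothetical assumptions for this example.

def location_based_emissions(kwh_used: float, grid_intensity_g_per_kwh: float) -> float:
    """Emissions implied by the local grid mix actually supplying the data center."""
    return kwh_used * grid_intensity_g_per_kwh / 1000.0  # kg CO2e

def market_based_emissions(kwh_used: float, kwh_covered_by_ppa: float,
                           residual_intensity_g_per_kwh: float) -> float:
    """Emissions after contractual instruments (e.g. PPAs) are netted out."""
    uncovered_kwh = max(kwh_used - kwh_covered_by_ppa, 0.0)
    return uncovered_kwh * residual_intensity_g_per_kwh / 1000.0  # kg CO2e

# A region drawing 10,000 kWh from a 400 gCO2e/kWh grid:
print(location_based_emissions(10_000, 400))        # 4000.0 kg CO2e
# The same usage fully covered by a power purchase agreement:
print(market_based_emissions(10_000, 10_000, 400))  # 0.0 kg CO2e on paper
```

This is why a provider can report "100% renewable" on a market basis while the electrons actually consumed still come from a carbon-emitting local grid.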

Scope 3 is mainly about hardware, with CPUs and SSDs as significant contributing factors. For developers, there is no simple way to determine how much energy a specific line of code, workload, or container uses, and cloud providers do not provide information at this granularity.
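In the absence of provider-supplied data, workload energy can only be roughly estimated, for example by apportioning a host CPU's power draw by vCPU share and utilization, the kind of approximation used by open-source estimators. The sketch below is an assumption-laden illustration: the function, its parameters, and all coefficients are hypothetical.

```python
# Rough, hypothetical estimate of a cloud workload's energy use.
# Real power draw is non-linear in utilization and includes memory, disk,
# and network; this linear CPU-only model is a deliberate simplification.

def workload_energy_kwh(avg_cpu_util: float, tdp_watts: float,
                        vcpus: int, cores_per_host: int, hours: float) -> float:
    """Apportion host CPU power to a workload by vCPU share and utilization."""
    share = vcpus / cores_per_host          # fraction of the host's cores
    watts = tdp_watts * share * avg_cpu_util
    return watts * hours / 1000.0           # kWh over the measurement window

# e.g. 4 vCPUs on a 64-core, 280 W host at 50% utilization for 24 hours:
print(workload_energy_kwh(0.5, 280, 4, 64, 24))  # 0.21 kWh
```

Estimates like this illustrate the problem Cockcroft raises: without a provider-reported metric, every figure rests on guessed coefficients.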

Running a "DIY Energy Usage Measurement" test for different scenarios, Cockcroft reported huge variances in results and lots of confounding effects, joking:

If you feel you want to do something to save the world, just use dark mode.

Explaining what PUE (Power Usage Effectiveness) is and why it is important for determining the efficiency of a data center, Cockcroft noted that while Microsoft and Google publish their PUE (currently around 1.1 and 1.2, respectively), Amazon does not, claiming it is complicated and pointing to a 2009 article.
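PUE itself is a simple ratio: total facility energy divided by the energy delivered to IT equipment, so 1.0 would mean zero overhead. A minimal sketch, with illustrative numbers that are not any provider's reported figures:

```python
# PUE = total facility energy / IT equipment energy.
# The kWh figures below are hypothetical, chosen only to illustrate the ratio.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: 1.0 means all energy reaches IT gear."""
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1200 kWh to deliver 1000 kWh to servers:
print(pue(1200, 1000))  # 1.2 -- 200 kWh of cooling/distribution overhead
```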

While long-term goals are important, Cockcroft argued that there are actions companies and developers can take today. Cloud providers are still significantly more efficient than a traditional enterprise data center, but there are no huge differences between providers, as they mostly use the same chips and face the same challenges. The region chosen for deployment matters more:

Use any cloud provider but try to minimize the use of Asia regions for the next few years.


He discussed the carbon footprint tools of the three major providers: they can help today with compliance, but they all lack the critical features, metrics, and granularity needed by developers who want to tune cloud workloads for sustainability.

Cockcroft closed the presentation by proposing a "Workload Carbon Footprint Standard" that would offer the same data across all cloud providers and data center automation tools, at the same resolution as existing monitoring tools, typically minutes:

I just want another metric like CPU utilization, reported via the same tools we already use.

The new standard would expose energy and carbon usage, allowing engineers to use the same performance and cost optimization tools they use today to benchmark and optimize their workloads.
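To make the proposal concrete, a per-workload carbon sample under such a standard might look like the sketch below. This is purely speculative: the field names and values are my own invention and not part of any published specification.

```python
# Hypothetical shape of a minute-resolution workload carbon metric sample,
# sitting alongside CPU utilization in existing monitoring pipelines.
# Every field name here is an assumption, not a standardized schema.

sample = {
    "timestamp": "2024-01-01T12:01:00Z",    # minute resolution, like CPU metrics
    "resource_id": "i-0abc123",             # instance/container being measured
    "energy_wh": 3.2,                       # energy used during the interval
    "grid_intensity_g_per_kwh": 310.0,      # local grid carbon intensity
}
# Carbon for the interval derives directly from the two measured fields:
sample["carbon_g"] = sample["energy_wh"] / 1000.0 * sample["grid_intensity_g_per_kwh"]
print(sample["carbon_g"])
```

Because the derived value is just energy times intensity, any tool that already graphs and alerts on CPU utilization could graph and alert on carbon the same way, which is the point of Cockcroft's "just another metric" framing.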
