AI poses major challenges for green IT: each query consumes vast energy, GPU chips last only 2-3 years, and costs stay hidden from users. Regulatory frameworks like the EU AI Act fall short on enforcement, Ludi Akue said at QCon London. In her talk What I Wish I Knew When I Started with Green IT, she presented solutions such as model compression, quantization, and novel architectures, using sustainability as a design constraint.
Green IT focuses on reducing IT’s environmental footprint by rethinking how you build, deploy, and power IT systems, Akue mentioned. This includes everything from energy-efficient hardware and optimized code to data center design and software architecture decisions that minimize resource use.
AI is creating unprecedented environmental challenges without corresponding governance. Generative AI requires constant, significant computational power even at inference time, that is, at usage time, when you prompt the system. This translates directly into energy consumption, Akue explained:
Every time someone uses a chatbot, generates text or images, or triggers any generative AI feature, it consumes enormous amounts of energy. Constant inference is becoming a massive energy sink as AI features proliferate.
AI usage is also accelerating hardware churn: GPU chips typically last only 2-3 years, Akue argued.
AI is not free, but users are shielded from the costs, Akue mentioned. There’s a complete lack of transparency; users don’t see the environmental costs behind each query, so there’s no natural restraint.
What’s most concerning is how teams deploy AI features without questioning if they’re truly necessary, Akue explained:
We’re scaling by default rather than by design, and even Europe’s first regulatory frameworks, such as the EU AI Act, fall short on these sustainability impacts by only considering energy consumption, with no clear enforcement mechanisms.
According to the literature, reducing the impact of artificial intelligence on the environment will have to come from improvements in model compression, quantization, novel architectures, and hybrid strategies. Akue gave some examples: Retrieval-Augmented Generation, Small Language Models, and combining offline with online inference. But technology alone is not enough, she said.
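Quantization, one of the techniques Akue cited, cuts inference energy by storing model weights at lower precision. A minimal illustrative sketch of symmetric int8 quantization (a generic example, not any specific framework's implementation):

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats onto the integer range [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the int8 values."""
    return [q * scale for q in quantized]

weights = [0.82, -1.27, 0.05, 0.4]          # toy float32-style weights
q, scale = quantize_int8(weights)            # q fits in 1 byte per weight vs 4
approx = dequantize(q, scale)                # reconstruction error bounded by scale/2
```

Storing int8 instead of float32 uses a quarter of the memory and enables faster integer arithmetic, which is where the inference-efficiency gains come from.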
We have been designing digital systems for a stable world, a world that no longer exists. Today, the climate crisis is reshaping our assumptions about scale, cost, and resilience, Akue argued. It is time to rethink our assumptions.
Sustainability is not just an opportunity to redefine technology. It is a constraint we must now build within, just like latency or scalability, Akue concluded.
InfoQ interviewed Ludi Akue about the impact of AI on the environment.
InfoQ: What have you done to reduce the impact of AI on the environment?
Ludi Akue: At Bpifrance, we are working to make the environmental costs of AI more visible and manageable. Through our Green IT working group, we have been exploring tools like Ecologits, LiteLLM, and Langfuse to help teams monitor, budget, and steer AI usage more effectively.
We are also focusing on building a cultural shift within the organization. With my PromptSage GPT project, I am building AI literacy, helping teams understand that thoughtful prompting is not just about better results; it is also a sustainability practice. Using AI more intelligently reduces unnecessary computation. This is why I encourage product teams to take ownership of their usage impact and to embed environmental telemetry into the products.
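Environmental telemetry of the kind Akue describes can start with a rough per-request energy estimate derived from token counts. A hypothetical sketch, with illustrative constants that are not measured figures (tools such as Ecologits provide researched estimates):

```python
# Hypothetical per-token energy figures in watt-hours, for demonstration only.
WH_PER_TOKEN = {"large-model": 0.04, "small-model": 0.002}

def request_energy_wh(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Rough energy estimate for one inference request, in watt-hours."""
    return (prompt_tokens + completion_tokens) * WH_PER_TOKEN[model]

# A tighter prompt against a smaller model shrinks the estimate sharply:
large = request_energy_wh("large-model", 500, 300)   # 800 tokens on the large model
small = request_energy_wh("small-model", 200, 150)   # 350 tokens on the small model
```

Logging such estimates per feature gives product teams a number they can budget against, which is the point of the FinOps/GreenOps coupling mentioned later in the article.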
InfoQ: What’s needed to drive successful green IT transformations?
Akue: We already have many of the technical solutions. What we often lack is the shift in perspective. Start by asking different questions. Not just "Can we build it?" but "Should we?" Not just "How fast?" but "At what cost?" These questions open the door to more sustainable decisions.
To make this shift real, we need three key qualities: the curiosity to explore beyond our comfort zones, the honesty to acknowledge our real impact rather than externalizing costs, and the courage to ask uncomfortable questions of our providers, our vendors, and ourselves. We have to unlearn to learn.
InfoQ: How have things changed since you gave your talk at QCon?
Akue: At QCon London 2025, I argued that the majority of AI’s environmental impact reduction would come from model creation improvements (compression, quantization, and novel architectures) while inference-phase interventions (AI literacy, appropriate model selection, inference budgeting, compute constraints, and AI governance integrating FinOps and GreenOps) represented necessary but secondary work. This assessment has proven incomplete.
Emerging evidence indicates that for massively deployed consumer AI services, cumulative inference energy consumption can exceed one-time training costs within months of deployment. Technical efficiency gains from compression and quantization have materialized, typically delivering 2-4x improvements in inference efficiency. Available data suggests these improvements have coincided with exponential growth in inference volumes rather than absolute consumption reductions. Publicly documented adoption of inference governance mechanisms remains rare.
The pattern is consistent with rebound effect predictions: efficiency improvements enable expanded deployment and new use cases rather than reduced overall consumption. The critical insight: technical optimizations deployed without complementary inference-phase governance mechanisms (budgeting, decision frameworks, appropriate model selection) risk accelerating rather than mitigating environmental impact through broader AI adoption.
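The rebound dynamic can be made concrete with simple arithmetic (the figures below are illustrative, not from the talk): a 3x per-query efficiency gain combined with a 5x growth in inference volume still raises total consumption.

```python
baseline_energy_per_query = 1.0    # arbitrary energy units
baseline_queries = 1_000_000       # queries per month (illustrative)

efficiency_gain = 3.0   # compression/quantization: 3x less energy per query
volume_growth = 5.0     # cheaper inference enables 5x more usage

before = baseline_energy_per_query * baseline_queries
after = (baseline_energy_per_query / efficiency_gain) * (baseline_queries * volume_growth)

# Total consumption grows by the ratio volume_growth / efficiency_gain = 5/3,
# i.e. roughly +67%, despite each individual query being 3x cheaper.
```

This is why the article argues that efficiency work must be paired with inference budgets: without a cap on volume, per-query savings do not translate into absolute reductions.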