In the inaugural episode of CIX Conversations, Sophia Wu sits down with Stephen Matthew, Head of Data Strategy and Engineering at CIX, to explore one of the most contested questions in sustainability today: is artificial intelligence an accelerator for climate progress, or a liability? The conversation covers the real environmental costs of generative AI, where the technology is already delivering measurable impact, and what it means to use AI responsibly in a world running out of time.
Two of the most powerful forces shaping the world right now – climate change and artificial intelligence (AI) – are accelerating simultaneously. Whether they reinforce or undermine each other is not a theoretical question. It is one of the defining trade-offs of this decade.
Why this moment feels urgent
Sophia: When we talk about AI and sustainability today, what makes this feel like a particularly timely conversation?
Steve: I’d frame it along two dimensions. The first is what I’d call the physics of the moment. Climate change no longer feels hypothetical or distant. We’re living through it. In 2023 and 2024, global temperature records were broken. Looking at the Paris Agreement and the 1.5-degree ambition, we’ve already seen a 12-month period where we broke through that threshold. The long-term average is now tracking around 1.2 to 1.3 degrees above pre-industrial levels. On current policies, the consensus puts us at around 2.5 degrees by end of century, which means roughly 2 degrees by mid-century. That’s a period many of us will live through. The climate problem isn’t on pause.
The second dimension is the investment and capital picture. There’s a massive wave of interest in generative AI right now, and with it, enormous investment from technology companies into expanding data centre footprints. Infrastructure that requires huge amounts of energy, water and resources. What makes this moment feel urgent is that collision: an accelerating climate picture running alongside a wave of investment into very energy and resource-intensive systems.
Sophia: So when we refer to AI in this conversation, let’s try to be specific. When we talk about generative AI and LLMs, those are really the areas where there’s been a lot of interest lately, right?
Steve: Exactly. AI isn’t really one thing – it’s many techniques. Some have been around for quite some time and fall under traditional machine learning, which takes large amounts of data and finds patterns and correlations. That’s what powers credit scoring, fraud prediction, your Spotify playlist algorithm.
But then you have generative AI: your ChatGPTs, Geminis, Claudes. Unlike traditional machine learning, which predicts or classifies, generative AI is capable of generating new content – text, video, images, code, analysis. Rather than working behind the scenes quietly optimising systems, it’s front and centre, conversational and creative. And the scale difference matters. Traditional ML models might run to millions of parameters. Generative AI models run to billions or trillions. The amount of energy it takes to train and run them, combined with the sheer breadth of use cases they’re applied to, is what drives the resource intensity.
“It’s that kind of collision of an accelerating climate picture coupled with this wave of investment into very energy and resource-intensive systems. That’s what makes this moment feel urgent.”
The real cost of generative AI
Sophia: What is the actual magnitude of the environmental impact we’re talking about? Do we have a clear picture today?
Steve: It’s genuinely hard to put an exact figure on it, partly because there’s a transparency issue. Companies aren’t fully disclosing these impacts yet. But if you break it down into three dimensions – energy, water and materials – the picture starts to come into focus.
On energy, AI is compute, and compute requires energy. By some estimates, the additional energy demand from AI by 2030 will be equivalent to India’s entire annual energy consumption. What makes that particularly striking is that this is happening on top of existing decarbonisation goals. We’re already trying to electrify transportation, heavy industry, cooling and appliances. AI is adding a significant new layer of demand onto a grid that was already under pressure.
On water, data centres are packed with GPUs and servers that require cooling, and cooling requires water. Estimates suggest that AI workloads by 2030 will consume water equivalent to the entire drinking water consumption of the United States. These are not small numbers.
And then there are the materials. Rare earth metals, cobalt, copper, nickel, lithium – these are the same inputs we’re already trying to scale supply chains for to support decarbonisation. The electrification of the global economy requires the exact same minerals that AI infrastructure is consuming. There’s a competing stressor on something that’s already a challenge.
Sophia: Something I found interesting in trying to understand the full picture is that direct disclosure from AI companies is still limited. But proxy indicators tell a story: demand for NVIDIA chips, for example, shows a hockey-stick trajectory that gives you a sense of where the resources being funnelled into AI are heading.
Steve: That’s a good way to frame it. And there is a more optimistic thread here. There’s a real drive to make models more efficient – smaller models that can run on a phone or a laptop rather than on a centralised server. That alignment of commercial incentives and environmental benefit is genuinely encouraging. The counter-argument, of course, is Jevons paradox: the more efficient something becomes, the more it tends to get used. So efficiency gains don’t automatically translate into lower overall consumption.
Where AI is already making a difference
Sophia: Let’s look at the other side of the ledger. I came across examples that surprised me – some going back to 2018 and 2019 – where AI is already delivering real environmental impact. Global Forest Watch uses AI to map and detect deforestation in real time. The Wildlife Conservation Society uses it in camera traps to identify animals across footage no human team could review at that volume. Smart grids are using AI to monitor energy distribution and reduce waste. These are all traditional machine learning applications, not LLMs, right?
Steve: That’s right, those predate the current generative AI boom.
Sophia: Talk to me a little bit about what kinds of solutions generative AI and LLMs can enable.
Steve: You’ve summarised a really good range of use cases there – all using traditional machine learning approaches. The question is where generative AI and LLMs can add to that picture. It’s worth pointing to one or two examples coming out of research labs and being applied to industrial processes, because I think that’s illustrative of where the bigger solutions might arise. The most compelling example I’ve come across is from a research group at MIT looking at cement and concrete. Cement is one of the most emissions-intensive materials we produce. The MIT team pointed an LLM at hundreds of thousands of scientific papers, guided it with specific chemical criteria for what they were looking for, and it very quickly surfaced a shortlist of highly plausible candidate materials. Something no human team could have done at that speed or scale. The answer had been buried in existing research. It was just a case of surfacing it. And there are many problems like that.
There are also use cases closer to home for us. In the voluntary carbon market, you can already see generative AI appearing across the project lifecycle: in the authoring of project design documents, in due diligence, in digital MRV, in pricing and ratings. Working in this space, you’re dealing with a lot of unstructured data. Key information buried in lengthy PDFs that takes significant time to review. LLMs are very good at synthesising large volumes of that kind of material quickly and accurately.
Sophia: So…I put the central question of this episode directly to ChatGPT — whether AI and LLMs can play a significant role in accelerating climate solutions. I just want to read out the response because I thought it was quite good: “Yes, recent developments in large language models and generative AI can play a significant role in accelerating progress towards mitigating climate change, but the impact is indirect in most cases. They act as force multipliers for human expertise and systems rather than as standalone climate solutions.”
It then gave three specific examples: the ability to rapidly synthesise climate research, the ability to generate better climate models and interpret their results, and democratising access to climate expertise for small teams, NGOs and startups that can’t afford to hire a dedicated expert. But it also added an important caveat – LLMs and GenAI are only enablers, not substitutes for direct climate action such as renewable energy deployment and fossil fuel phase-outs.
Steve: “Force multiplier” is a useful frame. It shifts the question away from whether AI is good or bad and toward whether the humans directing it are doing so well. The technology doesn’t determine the outcome. The choices around how and where it’s deployed do.
Using AI responsibly
Sophia: For sustainability leaders inside organisations, what is your advice on using AI responsibly?
Steve: It’s a complex question because responsibility in the context of AI spans ethical and social dimensions too – there’s the use of people’s data, the use of data belonging to organisations. But focusing on the environmental dimension, which is closer to our wheelhouse, you can approach it top-down or bottom-up.
Top-down is the responsibility of the consumer: the organisation using AI. The first question is simply: can you measure your footprint? At the moment, the honest answer is largely no. Emissions reporting around AI usage sits at the level of the AI provider or the data centre fleet. You can’t get to individual-use granularity yet. But I’m reasonably confident that will change – I saw the same pattern with cloud computing. At the start, AWS, Azure and Google Cloud had none of these tools either. Consumer demand drove them to appear, and I think the same will happen with AI.
For the decisions you can make today, I always come back to the Green Software Principles from the Green Software Foundation – a framework for thinking about responsible AI and software usage. The first question they raise is the most impactful: do you actually need to use AI for this particular process? It’s probably the hardest one to persuade people on, because there’s so much interest and so much incentive to try and do something with this tool – but it’s the one with the biggest impact. If you are going to use it, the framework points to practical levers: run workloads in regions where the grid is less fossil-heavy, and time-shift processes to when renewable supply is higher. These are only available if you’re running your own infrastructure rather than calling out to a hosted model, but for organisations with that control, they’re meaningful.
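The time-shifting lever Steve describes can be sketched in a few lines of Python. This is a minimal illustration, not a production scheduler, and the hourly carbon-intensity figures below are invented for the example rather than taken from any real grid:

```python
# A minimal sketch of carbon-aware "time-shifting": given an hourly forecast
# of grid carbon intensity (gCO2/kWh), find the contiguous window with the
# lowest average intensity and schedule the batch workload there.

def greenest_window(forecast, duration_hours):
    """Return (start_hour, avg_intensity) of the lowest-carbon window."""
    best_start, best_avg = 0, float("inf")
    for start in range(len(forecast) - duration_hours + 1):
        window = forecast[start:start + duration_hours]
        avg = sum(window) / duration_hours
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start, best_avg

# Hypothetical 24-hour forecast: fossil-heavy overnight, dipping at midday
# when solar supply peaks. These numbers are illustrative only.
forecast = [420, 410, 400, 395, 390, 380, 350, 300,
            250, 200, 160, 140, 130, 135, 150, 190,
            240, 300, 360, 400, 420, 430, 435, 430]

start, avg = greenest_window(forecast, duration_hours=4)
print(f"Schedule the 4-hour job at hour {start} (~{avg:.0f} gCO2/kWh)")
```

In practice, real deployments would pull the forecast from a grid-data provider rather than hard-coding it, but the principle is the same: the job runs when the marginal unit of electricity is cleanest.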
Bottom-up is the responsibility of the providers. Most organisations are essentially delegating their environmental cost to cloud platforms. The ambition across those providers is genuinely high – net zero commitments, 24/7 carbon-free energy goals. But some of the most prominent AI-invested companies are already seeing operational emissions rise because of this infrastructure drive. Not a rollback from commitments, but those commitments becoming increasingly hard to honour as the compute footprint expands.
Lever or hindrance?
Sophia: Given everything we’ve discussed, here’s the big question. Do you think AI is a significant lever or a hindrance to our progress on climate?
Steve: I think I’m going to sidestep that. Maybe the idea of whether it’s a hindrance or a lever is almost secondary. The way I think about it is that AI is essentially an accelerator to the system in which it’s deployed. You introduce AI into a system, and it’s going to accelerate that system in some way. So the real question is: to which system is AI going to be deployed? And is that system going to be a net benefit or a net negative towards our goals on climate? We know that fossil fuels still attract huge amounts of investment and are still very powerful. If AI is directed towards enhancing fossil fuel extraction and discovering new oil fields, then it may well be a net negative. But if we can direct AI towards our climate goals, then it’s a different story – and it could well be a net lever in that case. So that to me is the real question. AI could be a lever, it could be a hindrance, and that really comes down to which system we choose to deploy AI into.
Sophia: And that is very much still up to humans, right? That’s up to people to decide where that effort goes.
About the speakers
Sophia Wu is a Pricing Analyst at Climate Impact X (CIX), where she focuses on carbon market intelligence and pricing analysis. Prior to CIX, she worked as a freelance writer, and before that as a Product and Carbon Partnerships Associate at Patch.
Stephen Matthew leads Data Strategy and Engineering at CIX, where he oversees data infrastructure, analytics and platform integrations. He has over 20 years of experience in tech and data across financial institutions and climate tech startups.
Disclaimer
This podcast is produced by Climate Impact X Pte. Ltd. (“CIX”) and is intended solely for general information. The views, opinions, and recommendations expressed by the host and guests are their personal views and do not necessarily represent the views of CIX, its management, or its affiliates.
The content in this podcast does not constitute legal, financial, investment, or professional advice and should not be relied upon for any decision making. Listeners should seek independent professional advice tailored to their specific circumstances.
While efforts are made to ensure information is accurate as of the recording date, CIX makes no representations or warranties regarding the completeness, accuracy, or reliability of the information discussed.
CIX shall not be liable for any loss or damage arising from reliance on the content of this podcast. By listening to this podcast, you agree that you do so at your own discretion and risk.