The Rebound Effect: AI’s Silent Backfire
Part 1 of our mini-series on AI x Systemic Impact
Do we really need another take on AI? We think so.
AI is here and the promise is real: exponential efficiency gains, transformative applications, and solutions to problems once thought unsolvable. But every efficiency gain has ripple effects. To understand the big picture, we need to take a systemic view.
We see AI not just as another tool, but as a first-principle technology — a force that can re-shape the foundational dynamics of how we produce, consume, and allocate resources. At the same time, we see a technology that’s deeply energy-intensive, echoing the challenges of the hard-to-abate sectors we’re racing to decarbonise.
So where does that leave us? We believe there is a major blind spot in most AI discourse: rebound effects. Rarely discussed, poorly quantified, yet potentially decisive in shaping AI’s environmental footprint.
Here, we explain why.
Understanding the Rebound Effect
First up, what is it? Rebound effects occur when improvements in efficiency lead to increased overall consumption, offsetting some or all of the expected savings.
The idea goes back to the 19th century, when economist William Jevons observed that more efficient steam engines led to more coal use, not less — a pattern now known as Jevons' Paradox.
In short: when a solution becomes more cost-effective, it ends up being used more.
This is not speculation — it’s a well-documented phenomenon across sectors, from transportation to data processing to consumer goods.
How do we measure it?
The rebound effect is expressed as a ratio. Suppose a car becomes 5% more fuel-efficient. Ideally, fuel use drops by 5%. But if it only drops 2%, then 3 percentage points of the expected savings have disappeared, likely because people drive faster and further than before.
That’s a rebound of 0.6, or 60% of potential savings “taken back” (since (5–2)/5 = 60%).
Here’s how the rebound scale works:
→ Super conservation (RE < 0): savings exceed expectations.
→ Zero rebound (RE = 0): savings match expectations.
→ Partial rebound (0 < RE < 1): some savings are offset (most common).
→ Full rebound (RE = 1): all savings are cancelled.
→ Backfire (RE > 1): use increases so much that overall consumption rises.
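For readers who like to see the arithmetic, the ratio and scale above can be sketched in a few lines of Python (the function names are our own, purely illustrative):

```python
def rebound(expected_savings, actual_savings):
    """Rebound effect: the share of expected savings 'taken back'."""
    return (expected_savings - actual_savings) / expected_savings

def classify(re):
    """Map a rebound ratio onto the scale above."""
    if re < 0:
        return "super conservation"
    if re == 0:
        return "zero rebound"
    if re < 1:
        return "partial rebound"
    if re == 1:
        return "full rebound"
    return "backfire"

# The fuel-efficiency example: 5% expected saving, only 2% realised
re = rebound(0.05, 0.02)
print(round(re, 2))   # 0.6 -> 60% of the savings "taken back"
print(classify(re))   # partial rebound
```

The same two lines cover every case on the scale: feed in a negative ratio and you get super conservation; feed in anything above 1 and you are in backfire territory.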
Types of Rebound Effects
Rebound effects come in three main forms:
1. DIRECT REBOUND (micro): You use more of something because it’s now cheaper or more efficient.
Example: LEDs use far less electricity than incandescent bulbs. However, because they are cheaper to run and perceived as more environmentally friendly, people tend to install more of them, leave lights on longer, or illuminate spaces that were previously left dark, partially offsetting the energy savings.
2. INDIRECT REBOUND (micro/meso): You save money or time through efficiency and spend those savings elsewhere — often on other energy-consuming services.
Example: A company adopts more efficient cloud servers and cuts its energy bill. It then reinvests the savings into developing new AI features — demanding more computation, more data, more servers.
3. ECONOMY-WIDE REBOUND (macro): Systemic changes across sectors due to widespread tech adoption. Prices fall, productivity rises, demand booms.
Example: AI-driven automation reduces costs across logistics, manufacturing, and services. These gains boost profit and demand, increasing shipping, production, and energy use across the board.
Real World Rebounds: Solar & Transport
Rebound effects in the real world are documented across sectors. Take two examples:
ENERGY MARKETS & INDUSTRY
- In the energy domain, where rebound effects have been most extensively studied, a review of 21 studies found economy-wide rebound effects from energy efficiency gains ranging from 0.1% to 728% (Brockway, 2021).
- A study of ~8,000 households across the US (2010–2018) found that rooftop solar adoption led to a 28.5% rebound effect, meaning nearly one-third of the electricity generated by panels fuelled new consumption rather than offsetting grid demand (Beppler et al., 2023).
TRANSPORT
- An empirical study of ~82,500 U.S. household vehicles using 2009 National Household Travel Survey data found that a 1% increase in fuel efficiency was associated with a 1.2% increase in vehicle miles travelled, supporting Jevons' Paradox and indicating a rebound of over 100% (Munyon et al., 2018).
- A meta-analysis of 76 studies and over 1,100 estimates of the rebound effect in road transport found that when vehicles become more fuel-efficient, people tend to drive more — cutting into the expected savings. On average, fuel efficiency improvements led to a 12% increase in driving in the short term, and up to 32% over the long term — a significant erosion of climate benefits over time (OECD, 2016).
These patterns challenge one of the most crucial ideas in sustainability: absolute decoupling.
Decoupling: The Real Goal
Decoupling refers to the relationship between economic growth and environmental pressure. In sustainability terms, absolute decoupling is the gold standard: it occurs when a country’s GDP increases while environmental impacts — such as CO₂ emissions or resource use — decline in absolute terms. For example, if a nation grows its economy by 3% but cuts its emissions by 5% over the same period, it has achieved absolute decoupling. This signals that prosperity is no longer tied to environmental harm.
A less ambitious but still notable form is relative decoupling, where environmental impacts continue to rise, but more slowly than economic output. If GDP grows by 4% and emissions by 1%, that’s relative decoupling: progress, but not enough to reduce total environmental burden.
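The two GDP-versus-emissions examples above can be captured in a small classification sketch (a minimal illustration, using our own hypothetical function name):

```python
def decoupling(gdp_growth, impact_growth):
    """Classify decoupling from annual growth rates (fractions, e.g. 0.03 = 3%).

    Absolute: the economy grows while environmental impact falls outright.
    Relative: impact still rises, but more slowly than economic output.
    """
    if gdp_growth > 0 and impact_growth < 0:
        return "absolute decoupling"
    if 0 <= impact_growth < gdp_growth:
        return "relative decoupling"
    return "no decoupling"

# The examples from the text: 3% growth with 5% emissions cut,
# and 4% growth with 1% emissions growth
print(decoupling(0.03, -0.05))  # absolute decoupling
print(decoupling(0.04, 0.01))   # relative decoupling
```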
Where are we at?
Over 30 countries, primarily high-income OECD members like Germany, Sweden, and Japan, have achieved absolute decoupling of CO₂ emissions from GDP growth (Wang et al., 2022; Hausfather, 2021). Emissions fell while GDP continued to rise, even when adjusting for offshored production (Ritchie & Roser, 2023).
However, globally, energy use and emissions are still tightly coupled with economic activity. In 2024, energy-related CO₂ emissions grew by just 0.8%, while global GDP expanded by over 3% (IEA, 2025). This is a case of relative decoupling (where energy or emissions grow more slowly than GDP) and it is a sign of progress, but not enough to bring us back within the planetary boundaries.
The picture is even more sobering when it comes to material throughput — the total flow of raw materials (fossil fuels, minerals, biomass, metals) used by the global economy from extraction through to disposal. Since 1970, material extraction has more than tripled, and there is no evidence of absolute global decoupling in terms of material use, land degradation, or biodiversity loss (Bercegol, 2024; Gbadeyan et al., 2024). The material footprint — including embedded imports — continues to rise with economic output (Parrique et al., 2019; Wiedmann et al., 2015).
AI, decoupling and the systemic view
So where do AI, energy consumption, and decoupling fit together — why are we even talking about this?
Because AI is both a symbol and a stress test of our broader sustainability challenge. On the one hand, it promises dramatic efficiency gains across sectors — from logistics to agriculture to energy optimisation. On the other hand, its rapid growth is driving up electricity demand, hardware production, and emissions. If a technology as powerful and flexible as AI cannot help us reduce absolute environmental harm, it raises serious questions about the viability of growth-led “green” transitions.
The lesson? Relative decoupling is not sufficient for planetary stability. What’s needed is a global shift toward absolute reductions in environmental harm, even as economies and technologies evolve. That’s where rebound effects become an existential issue — especially with general-purpose tools like AI. Increased efficiency alone does not guarantee reduced impact; in fact, it may enable more consumption and emissions elsewhere if underlying systems and incentives remain unchanged.
Our take?
At Planet A, we believe that greentech solutions must address these systemic dynamics head-on. That means supporting innovations that not only promise efficiency gains, but demonstrably reduce environmental pressures in absolute terms — across emissions, resource use, and ecosystem impact. Our investments are guided by this principle: business success must be aligned with scientifically verified reductions in harm. Without that link, even the most impressive technologies risk accelerating the very crises they aim to solve.
To understand why this matters, we need to look at the physical footprint behind AI’s digital promise. Efficiency gains in software often obscure the mounting energy and resource demands of the infrastructure powering it.
AI’s Energy Footprint: A Closer Look
AI systems, especially large language models (LLMs), are enormous energy users, powered by vast compute infrastructure:
- Data centers now account for 2–4% of total electricity in countries like the U.S., China, and the EU (IEA, 2024). That’s roughly equivalent to the entire electricity use of a mid-sized country like Sweden.
- Hyperscale centers — massive facilities run by cloud providers like AWS, Microsoft, and Google — can draw 100 MW or more in power capacity, enough to consume upwards of 800 GWh per year if run continuously. That’s comparable to the annual electricity use of hundreds of thousands of homes (IEA, 2024).
- Microsoft and Google reported 20% and 34% increases in water consumption in 2024, largely due to the growing cooling demands of their expanding data center footprints. AI workloads are a key driver of this trend, though not the sole contributor (Crawford, 2024).
The IEA estimates that global data centre demand could triple by 2030. Most of that growth? Driven by AI workloads. And this doesn’t even include the full life-cycle emissions: hardware, cooling, logistics.
Efficiency Gains vs. Demand Growth: The AI Paradox
Compute performance has improved rapidly — but so has demand.
Since 2010, the compute used to train state-of-the-art AI models has increased at an average annual rate of 4.6× (Epoch, 2024a). This growth was especially extreme between 2012 and 2018, when compute demands doubled every 3.4 months — equivalent to a 300,000× increase over six years (OpenAI, 2018). While this explosive phase has since slowed, growth remains very high: from 2019 to 2024, training compute still increased by roughly 2.5× per year (Epoch, 2024b).
Meanwhile, AI chip efficiency (measured in FLOPS/Watt) has only doubled roughly every 2.3 years over the past decade (Prieto et al., 2024 and IEA 2024b). And the foundational pace of semiconductor improvement is slowing: Moore’s Law — the historical doubling of transistor density every two years — effectively ended around 2016 (MIT, 2024). Since then, progress in chip miniaturisation has stalled, with the shift from 14 nm to 10 nm nodes taking five years instead of two.
SO WHAT? This means that even with improved hardware efficiency, compute demand is still outpacing efficiency gains by an order of magnitude or more.
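A back-of-envelope check of that gap, using the rates quoted above (2.5× annual compute growth since 2019, efficiency doubling every 2.3 years):

```python
compute_growth = 2.5        # training compute growth, x per year (Epoch, 2024b)
eff_doubling_years = 2.3    # FLOPS/Watt doubles every 2.3 years (Prieto et al., 2024)

# Convert the efficiency doubling time into an annual multiplier
eff_growth = 2 ** (1 / eff_doubling_years)          # ~1.35x per year
# Net energy demand grows by whatever efficiency fails to absorb
net_energy_growth = compute_growth / eff_growth     # ~1.85x per year

print(f"efficiency gain: {eff_growth:.2f}x per year")
print(f"net energy demand growth: {net_energy_growth:.2f}x per year")
print(f"compounded over 5 years: {net_energy_growth ** 5:.0f}x")
```

Under these assumptions, energy demand for frontier training still roughly doubles every year despite better chips — compounding to around a 20-fold increase over five years.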
With Moore’s Law no longer driving exponential gains, the only remaining path to more compute is building larger, more energy-intensive systems — exactly what’s happening in the AI arms race.
Training LLMs is energy-intensive: GPT-3 used an estimated 1.3 GWh (The Verge, 2024), while training GPT-4 is estimated at 52–62 GWh, enough to power 1,000 U.S. homes for 5–6 years (EvoChip, 2024). However, inference — the ongoing energy use once models are deployed — can rival or even exceed training over time. Estimates suggest ChatGPT uses 0.5–1 GWh of electricity per day for inference, comparable to the daily energy use of 20,000–40,000 U.S. households (APPA, 2023).
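Taking these estimates at face value, a quick calculation shows how fast inference can overtake training:

```python
training_gwh = 60                    # upper training estimate for GPT-4 (EvoChip, 2024)
inference_gwh_per_day = (0.5, 1.0)   # estimated ChatGPT inference range (APPA, 2023)

for daily in inference_gwh_per_day:
    days = training_gwh / daily
    print(f"at {daily} GWh/day, inference matches training energy in {days:.0f} days")
```

On these numbers, deployed usage equals the entire training bill within roughly two to four months — and then keeps running.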
And as inference costs fall, usage soars. For some models, query prices dropped roughly 280× in 18 months, from $20 to $0.07 per million tokens (Stanford, 2025).
Result: frictionless, low-cost AI services that stimulate exponential demand.
OUR TAKE: Unless capped or redirected, that growth can push us into backfire territory.
What Can Be Done?
If we want AI to be part of the climate solution, rebound effects must be part of the conversation. Here’s how we can begin to manage them:
Public Sector Levers (Kaack et al., 2022; Freire-González, 2021):
- Carbon pricing, green taxes, emissions-linked subsidies. The EU ETS and the Green Deal Industrial Plan support low-carbon tech — but AI-specific incentives are still missing.
- Mandates for clean energy and computing efficiency. The revised Energy Efficiency Directive requires reporting from large data centers, and the Ecodesign Regulation applies to digital devices — but binding standards for AI systems are still missing.
- Emissions transparency, model usage labelling. The EU AI Act introduces general transparency rules, but does not yet require emissions disclosures or model efficiency labels.
- Funding for AI models targeting low-impact and climate applications. Programs like Horizon Europe support climate-use cases — but few incentives exist for energy-efficient model design or deployment.
Private Sector Levers (Kaack et al., 2022):
- Internal carbon pricing (incl. cloud emissions).
- Usage caps and retraining limits.
- Prioritising low-impact applications (e.g., logistics, climate forecasting). Encourage use-case screening frameworks that favor applications with clear net-positive environmental outcomes.
- Life-cycle assessment of hardware and emissions reporting.
Where We Go From Here
The environmental footprint of AI depends not just on chips and data centres, but on choices:
What do we build? Who benefits? How do we value long-term resilience over short-term growth?
Will AI optimise supply chains — or turbocharge consumption? Will it help bring us back within planetary boundaries — or simply extend profit margins? The choice isn’t between progress and restraint, but between blind acceleration and deliberate direction.
Even optimistic forecasts underline the importance of active steering: PwC (2024) estimates that global energy use in 2035 could range from 1.1% lower to 0.1% higher depending on how — and where — AI is deployed. If adoption improves energy efficiency at just one-tenth the rate of uptake, AI could offset its own footprint.
In other words, efficiency gains are possible, but far from guaranteed. Even widespread, climate-aligned AI adoption may reduce global GHG emissions by just 0.3% to 1.9% by 2035 — a useful contribution, but no substitute for broader structural action. That gap between potential and planetary need is why deliberate prioritisation matters.
At Planet A, we believe in backing AI that drives real-world decoupling: technologies that reduce absolute emissions, enable circular material flows, or replace carbon- and resource-intensive processes. That means looking beyond narrow performance metrics — like accuracy, latency, or training scale — and asking harder, system-level questions: Does this model actually reduce total energy demand? Does it enable avoidance or displacement of emissions? What physical infrastructure or behaviours does it reinforce or reshape?
This isn’t a call to slow down AI innovation. It’s a call to be honest about trade-offs and intentional about impact right from the start.
For founders: Embed environmental impact goals into your product roadmap, and validate them with scientific tools like life-cycle assessment or marginal abatement metrics — not just compute efficiency.
For investors: Require emissions transparency, support climate-positive use cases, and challenge business models that scale resource use behind the scenes.
In Part 2 of this series, we’ll look at where AI is moving the needle for good — from accelerating Europe’s clean energy transition to making our industrial base smarter, faster, and lighter on the planet.
Authors: Aina Cabrero Sinol, Lena Thiede.