The Scale of AI's Energy Appetite
The global conversation about artificial intelligence's environmental impact has intensified as data center electricity consumption has become one of the fastest-growing energy demand sectors. According to the International Energy Agency, global data center electricity consumption reached approximately 415 terawatt-hours in 2024, representing 1.5 percent of global electricity usage. This figure encompasses not only AI-specific operations but all data center activity including traditional cloud computing, storage, and enterprise systems. However, the AI-driven subset of this total has grown dramatically, and projections suggest accelerating energy demands ahead.
The trajectory is steep. The IEA projects that global data center electricity consumption could reach 945 terawatt-hours by 2030, more than doubling from 2024 levels. Goldman Sachs has estimated that data center power demand could increase by 165 percent by 2030, while McKinsey projects even more aggressive scenarios, with some models suggesting total data center electricity demand could reach 1,400 terawatt-hours annually by the end of the decade. These projections reflect not merely speculative forecasts but actual capital commitments being made by major technology companies today.
The geographical distribution of this growth matters significantly for climate outcomes. The United States accounts for over 240 terawatt-hours of projected growth through 2030, while China contributes approximately 175 terawatt-hours. Together, these two nations represent roughly 80 percent of the projected global growth in data center electricity demand. This geographic concentration has policy implications: infrastructure decisions in these two countries will substantially determine whether the grid expansion that supports AI is powered by renewables or fossil fuels.
The scale of capital investment required to meet these projections is equally striking. McKinsey estimates that approximately $1.7 trillion in capex will be required for data center buildout by 2030, a figure that reflects the intensity of infrastructure development needed to support projected AI growth. Understanding this capital commitment is essential for policy consideration, as it represents the opportunity for strategic intervention in how that infrastructure is powered and optimized.
Training and Inference: Where the Energy Goes
Not all AI operations consume equal amounts of energy. The energy footprint of artificial intelligence divides into two distinct categories: the one-time cost of training large models, and the ongoing costs of inference, that is, running those models to generate outputs for users. Understanding both is essential for assessing the true scope of AI's resource demands.
Training foundational AI models represents a concentrated, intense energy expenditure. The training of GPT-4, for instance, consumed an estimated 51.8 to 62.3 million kilowatt-hours, generating between 6,912 and 14,994 metric tons of CO2 equivalent emissions. This is notable for its magnitude but also for its growth trajectory: GPT-4's training emissions are roughly twelve times those estimated for GPT-3. Smaller models show more modest costs: Meta's Llama 3.2 models at the 1 billion and 3 billion parameter scale required a combined 581 megawatt-hours of energy and generated 240 metric tons of CO2 equivalent. Training energy grows steeply with model size, but not strictly in proportion to it, suggesting that efficiency improvements in training procedures can meaningfully reduce total emissions.
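One way to sanity-check figures like these is to ask what grid carbon intensity they imply. The short sketch below derives that intensity from the cited energy and emissions ranges alone; it is a back-of-envelope consistency check, not a reported result.

```python
# Consistency check: the cited energy and emissions ranges imply a grid carbon
# intensity for the training run. All inputs are figures quoted in this section.

energy_kwh = (51.8e6, 62.3e6)   # estimated GPT-4 training energy, kWh
emissions_t = (6_912, 14_994)   # estimated emissions, metric tons CO2e

low_intensity = emissions_t[0] * 1000 / energy_kwh[1]   # kg CO2e per kWh
high_intensity = emissions_t[1] * 1000 / energy_kwh[0]
print(f"Implied grid intensity: {low_intensity:.2f} to {high_intensity:.2f} kg CO2e/kWh")
# Roughly 0.11 to 0.29 kg CO2e/kWh, spanning relatively low-carbon to
# moderately carbon-intensive electricity mixes.
```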
Yet training represents only part of the picture. Inference, the process of running trained models to generate outputs for end users, accounts for the majority of data center energy consumption related to AI, and this is where growth is most dramatic. ChatGPT alone processes approximately 2.5 billion queries per day. The energy cost per query varies significantly depending on query complexity: a typical query requires approximately 0.3 watt-hours, while complex queries can demand up to 40 watt-hours. This variance is itself important for policy discussion, as it suggests that architectural choices about how queries are routed and processed have substantial energy implications.
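To see how these per-query figures aggregate, the sketch below converts the cited query volume and per-query energy into an annual total for a single service. The split between simple and complex queries is a hypothetical assumption introduced purely for illustration; only the headline inputs come from the figures above.

```python
# Rough annual inference energy for one service, from the per-query figures cited above.
# The query volume and Wh-per-query values are from the text; the share of
# complex queries is a hypothetical assumption.

QUERIES_PER_DAY = 2.5e9
WH_SIMPLE, WH_COMPLEX = 0.3, 40.0

def annual_twh(share_complex: float) -> float:
    """Annual energy (TWh) for an assumed share of complex queries."""
    wh_per_query = (1 - share_complex) * WH_SIMPLE + share_complex * WH_COMPLEX
    return QUERIES_PER_DAY * wh_per_query * 365 / 1e12  # Wh -> TWh

for share in (0.0, 0.01, 0.05):
    print(f"{share:4.0%} complex queries -> {annual_twh(share):6.2f} TWh/year")
# 0% -> ~0.27 TWh/year; 5% -> ~2.1 TWh/year. How queries are routed between
# lightweight and heavyweight models moves the total by nearly an order of magnitude.
```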
Extrapolating from current inference patterns yields sobering totals. According to analysis by Schneider Electric, if current generative AI usage patterns continue and adoption broadens as expected, all generative AI queries globally could reach a total energy consumption of 347 terawatt-hours by 2030. This single figure, roughly equivalent to the electricity consumption of a major developed nation, illustrates why inference efficiency has become a focus for technology companies and why software optimization techniques have gained urgency.
The distinction between training and inference also informs technology strategy. Training is episodic: a model is trained once and then deployed for inference potentially billions of times. This means that efficiency improvements in inference compound over time, while improvements in training efficiency pay off only when the next generation of models is developed.
The Nuclear Bet
Recognizing the scale of energy demand they will face, major technology companies have begun making substantial commitments to nuclear power. These are not speculative investments but binding, long-term contracts with significant financial commitments. This pivot deserves attention both as a strategy and as a signal of how technology companies themselves assess the energy requirements ahead.
Microsoft has signed a 20-year power purchase agreement with Constellation Energy to support the restart of Three Mile Island Unit 1, a nuclear facility in Pennsylvania, in a commitment valued at roughly $16 billion to secure 835 megawatts of electric generating capacity. The project targets operational status by 2028. Amazon has committed over $20 billion to develop 1.92 gigawatts of continuous power through 2042 via a partnership with Talen Energy. Google has signed agreements with Kairos Power for a fleet of small modular reactors totaling 500 megawatts, with delivery projected for the early 2030s. Meta has committed to a 1.1 gigawatt power purchase agreement with Constellation Energy spanning two decades.
Taken together with similar deals across the industry, announced commitments now exceed 10 gigawatts of new nuclear capacity specifically earmarked for technology infrastructure. To contextualize this figure: 10 gigawatts is roughly the generating capacity of 10 large coal plants or 3,000 utility-scale wind turbines. These are not aspirational targets but contractual obligations, with substantial financial penalties for failing to deliver or consume the power.
The corporate pivot toward nuclear power reflects several intersecting realities. First, it acknowledges that renewable energy sources, while expanding, cannot reliably meet the baseload power requirements of continuously operating data centers. Second, it represents a bet on nuclear power as a technology capable of decarbonizing energy-intensive industrial processes at scale. Third, and more subtly, it signals that technology companies have internalized expectations of substantial ongoing energy growth and have determined that securing the power supply is a prerequisite for their business plans.
This infrastructure strategy also creates policy externalities worth considering. The success of these nuclear projects will affect not only whether AI companies can meet their own decarbonization commitments but also whether nuclear power demonstrates economic viability as a scalable low-carbon solution for other industrial sectors.
The Other Side of the Ledger: AI for Emissions Reduction
The policy narrative surrounding AI energy consumption often treats the technology as a pure negative: a source of emissions to be minimized and constrained. Yet this incomplete framing obscures a more complex reality: artificial intelligence also offers substantial potential for reducing emissions across the economy. Examining this potential honestly is essential for balanced policy formulation.
DeepMind, operating within Alphabet, achieved a 40 percent reduction in the energy required for Google data center cooling through AI-optimized control algorithms. This is not a theoretical projection but an implemented reduction in an actual operating facility, proof that AI can lower its own footprint and the broader footprint of the infrastructure supporting it.
The International Energy Agency has conducted comprehensive analysis of AI's mitigation potential and estimates that artificial intelligence applications could yield 1.4 billion metric tons of CO2 equivalent reductions by 2035. A 2025 study by researchers spanning multiple institutions projected even more substantial potential: 3.2 to 5.4 billion metric tons of CO2 equivalent annual reductions achievable by 2035 through widespread AI deployment in climate mitigation applications. This scale of potential mitigation, several times larger than the anticipated data center footprint itself, suggests that AI's net climate impact depends fundamentally on deployment decisions.
Where could these reductions occur? The applications are diverse and concrete. AI-optimized smart grids could unlock up to 175 gigawatts of additional transmission capacity, enabling more efficient distribution of renewable energy. Smart grid optimization alone could reduce energy consumption by approximately 15 percent in energy distribution systems. In Copenhagen, an AI-optimized district heating system has achieved 15 to 25 percent energy reduction while saving 10,000 metric tons of CO2 annually.
Agriculture represents another substantial opportunity. CropX, an AI-powered irrigation management system, has demonstrated 25 to 50 percent water reduction in agricultural applications. Water and electricity are distinct resources, but agricultural water pumping is itself energy-intensive, so efficiency gains compound. Building management systems optimized through AI have achieved 21 percent energy reductions and 35 percent CO2 reductions in analyzed U.S. office deployments.
These applications share a common characteristic: they optimize existing infrastructure rather than replacing it. An AI system that reduces building energy consumption by 21 percent requires no new infrastructure, only better management of what already exists. This distinction is important for understanding AI's potential role in rapid emissions reduction within physical constraints.
The Net Impact Calculus
Assessing AI's net climate impact requires honestly weighing the energy costs documented above against the emissions reduction potential. The arithmetic is striking: the International Energy Agency estimates that AI's mitigation potential of 1.4 billion metric tons of CO2 equivalent by 2035 is approximately three times the total direct emissions from global data centers.
This finding does not minimize data center emissions; it contextualizes them. If global data center emissions reach 250 million metric tons of CO2 equivalent by 2030 while AI applications elsewhere eliminate 1.4 billion metric tons annually, the net impact is substantially positive. However, this potential reduction is contingent on deployment, and the IEA notes explicitly that currently there is "no momentum" for widespread adoption of these beneficial applications.
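A simple sensitivity check clarifies what is at stake in that contingency. The sketch below uses the two headline figures cited in this section and varies only the share of the mitigation potential that is actually realized; the realization rates themselves are hypothetical.

```python
# Net-impact sensitivity sketch: how much of the estimated mitigation potential
# must actually be realized to offset projected data center emissions.
# Both headline figures are cited above; the realization rates are hypothetical.

DATA_CENTER_MT = 250             # projected Mt CO2e/year from data centers by 2030
MITIGATION_POTENTIAL_MT = 1400   # IEA estimate of AI mitigation potential by 2035

for realized_share in (0.1, 0.2, 0.5, 1.0):
    net = realized_share * MITIGATION_POTENTIAL_MT - DATA_CENTER_MT
    print(f"{realized_share:4.0%} of potential realized -> net {net:+.0f} Mt CO2e/year")
# Break-even sits near 18 percent realization; below that, AI is a net emitter
# on this ledger even though the theoretical potential is several times larger.
```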
The gap between potential and actuality represents a policy problem. The emissions-reducing applications of AI require either substantial business model changes, regulatory mandates, or both. A building management system that could reduce energy consumption by 21 percent must still be purchased, installed, and maintained. A smart grid optimization system must be integrated with existing utility infrastructure and regulatory frameworks. These are not technical problems alone; they are economic and governance problems.
Goldman Sachs has estimated that fossil fuels will likely supply 60 percent of the incremental data center electricity demand through 2030. This projection reflects both the geographic concentration of growth in regions with less renewable energy availability and the ongoing cost advantages of fossil fuel generation. If accurate, this estimate suggests that the infrastructure buildout supporting AI will, on balance, increase fossil fuel consumption rather than decrease it.
The policy challenge, therefore, is not whether AI's energy consumption is large (it clearly is) but rather how to structure investments, regulations, and incentives such that the technology's significant emissions reduction potential can be realized while its own energy footprint is minimized and decarbonized.
Efficiency Gains: The Countervailing Force
A narrative focused purely on growing energy demand overlooks a crucial countervailing trend: the dramatic efficiency improvements in AI software and hardware that have occurred even as AI capabilities have expanded. These improvements matter not only for their current impact but as evidence that the energy-demand trajectory need not be deterministic.
Software efficiency has improved at a remarkable pace. Between May 2024 and May 2025, researchers achieved a 33-fold reduction in energy consumption per AI prompt through optimization techniques alone. This is not a marginal improvement; a 33x reduction in energy per unit of work fundamentally alters the energy demand equation. To illustrate: if inference efficiency improves by a factor of 33 while the number of queries triples through 2030, total inference energy does not rise at all; it falls to roughly a tenth of its starting level.
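The arithmetic behind that illustration is shown below; both multipliers come directly from the example in the preceding paragraph, and the tripling of query volume is an illustrative assumption rather than a forecast.

```python
# Efficiency gains versus demand growth: energy relative to today's baseline.
efficiency_gain = 33.0   # energy per prompt falls by a factor of 33 (cited above)
query_growth = 3.0       # illustrative assumption: query volume triples by 2030

relative_energy = query_growth / efficiency_gain
print(f"Inference energy relative to baseline: {relative_energy:.2f}x")
# ~0.09x, i.e. roughly a 91 percent reduction despite tripled demand.
```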
Quantization, the process of reducing the precision of numerical representations in AI models, has proven to be an exceptionally promising efficiency mechanism. Converting models from full-precision floating-point (FP32) representation to lower-precision integer representation (INT4) can yield 60 to 80 percent energy savings. More broadly, quantization approaches have achieved up to 44 percent energy reductions across diverse model architectures. These are not emerging techniques but increasingly standard practices in model deployment.
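For readers unfamiliar with the mechanics, the sketch below shows post-training quantization in PyTorch. It uses dynamic INT8 quantization on a toy model as a minimal, runnable stand-in; INT4 quantization of the kind cited above typically relies on specialized libraries such as bitsandbytes or GPTQ, and no energy measurement is attempted here.

```python
# Minimal post-training quantization sketch: swap FP32 linear-layer weights for
# INT8 at deployment time. The toy model is a placeholder, not a production LLM.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 1024))
model.eval()

# Dynamic quantization replaces nn.Linear weights with INT8 and quantizes
# activations on the fly during CPU inference.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    _ = quantized(torch.randn(1, 1024))  # inference now runs quantized kernels

# Rough memory comparison: INT8 weights occupy about a quarter of FP32 storage.
fp32_mb = sum(p.numel() * 4 for p in model.parameters()) / 1e6
print(f"FP32 parameter footprint: {fp32_mb:.1f} MB; INT8 weights are ~4x smaller.")
```

The appeal of this family of techniques for deployment is that it requires no retraining: precision is reduced after the model is built, trading a small amount of accuracy for lower memory traffic and energy per inference.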
Task-specific optimization, tailoring model architectures and parameters to particular applications rather than deploying universal models, has achieved energy reductions up to 90 percent in specialized applications. While this approach sacrifices flexibility for efficiency, in domains where the task is stable and well-defined, it offers substantial gains.
The trajectory of these efficiency improvements suggests that the relationship between AI capability growth and energy growth need not be linear. As models become more capable, they also become more efficient when properly optimized. The historical pattern in computing, the trajectory captured by Moore's Law and subsequent improvements, suggests that algorithmic and hardware efficiency improvements can offset growth in compute demand for extended periods.
Water Consumption: An Often-Overlooked Dimension
While electricity dominates policy discussions around AI infrastructure, water consumption represents another environmental impact dimension worthy of attention. Data center cooling accounts for substantial water consumption in regions where water stress is already significant.
U.S. data centers consume approximately 228 billion gallons of water annually: about 17 billion gallons of direct on-site water use and roughly 211 billion gallons consumed indirectly through the electricity that powers them. Google reported direct water consumption of 6.4 billion gallons in 2023, with data center operations accounting for 95 percent of that total. At the inference level, each ChatGPT query has been estimated to require approximately 519 milliliters of water for cooling.
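As an order-of-magnitude illustration, the sketch below combines the per-query water estimate with the query volume cited earlier in this piece. It is a scale check on two cited numbers, not a validated total, and the per-query figure likely bundles direct and indirect water use.

```python
# Order-of-magnitude scale check: per-query water estimate x daily query volume.
# Both inputs are figures cited in this article; nothing here is a measured total.
ML_PER_QUERY = 519        # estimated milliliters of water per ChatGPT query
QUERIES_PER_DAY = 2.5e9   # estimated ChatGPT queries per day

liters_per_day = ML_PER_QUERY * QUERIES_PER_DAY / 1000
print(f"Implied water use: ~{liters_per_day / 1e9:.1f} billion liters per day")
# Roughly 1.3 billion liters per day at these cited rates.
```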
Water consumption connects to energy consumption through cooling infrastructure design choices. Data centers sited where freshwater is abundant, or in coastal locations, can use evaporative or seawater cooling at lower energy cost and with less strain on freshwater systems. Strategic placement of data center infrastructure therefore has water implications distinct from, but related to, its energy implications.
As data center growth accelerates, water-aware infrastructure planning becomes increasingly important. In water-stressed regions, data center development requires either substantial investment in water-efficient cooling technologies or explicit trade-offs between water security and computing infrastructure needs.
Looking Ahead
The full picture of AI's environmental impact is neither a simple story of technological disaster nor of unambiguous environmental benefit. Instead, it presents a complex policy landscape where significant risks and substantial opportunities coexist.
The energy demands of AI are genuine and growing. The projections for data center electricity consumptionâpotentially doubling by 2030ârepresent a material environmental commitment. Technology companies' substantial investments in nuclear power reflect the seriousness with which they view this challenge, but also underscore that renewable energy alone, at current deployment rates, cannot meet the demand being created.
Yet the potential of AI to reduce emissions across the economy is equally genuine. A threefold difference between potential mitigation and data center emissions suggests that the net climate impact of AI is fundamentally determined by deployment decisions and policy frameworks, not by the technology itself.
The work ahead, therefore, requires parallel efforts on multiple fronts. First, accelerating the decarbonization of energy infrastructure globally, particularly in regions experiencing data center growth. Second, maximizing efficiency in AI systems through continued software and hardware innovation. Third, systematically deploying AI applications for emissions reduction across energy systems, agriculture, buildings, and other sectors where potential has been demonstrated. Fourth, establishing policy frameworks and market mechanisms that incentivize beneficial AI applications while creating accountability for energy consumption.
At Economics & AI for Earth, we believe that navigating this landscape requires moving beyond both techno-optimism and categorical pessimism. The technology itself is neutral; its environmental impact depends on how it is powered, how it is optimized, and to what purposes it is deployed. Our mission is to develop and communicate the evidence and analysis that policymakers need to make these decisions wisely: to pursue smarter policies for a sustainable world where artificial intelligence serves as a genuine tool for emissions reduction rather than merely another source of energy demand.