The Governance Landscape in Flux
The past eighteen months have witnessed a remarkable acceleration in AI governance, creating a patchwork of regulatory approaches across the world's largest economies. The question now is not whether AI will be governed, but how, and whether that governance architecture will serve the dual imperatives of technological progress and climate stability.
The European Union's response has been the most comprehensive to date. The EU AI Act, published in the Official Journal on July 12, 2024, and in force since August 1, 2024, represents the world's first major legal framework explicitly designed to manage AI risks through a risk-based classification system. The Act divides AI applications into prohibited, high-risk, and general-purpose categories, with proportionate regulatory requirements for each. Notably, the legislation includes environmental provisions requiring developers of high-risk systems to document energy consumption and conduct lifecycle assessments. Violations can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher, creating powerful incentives for compliance. However, critics, including the Heinrich Böll Foundation, the German political foundation affiliated with the Green Party, have characterized the EU AI Act as a "missed opportunity" for more comprehensive climate protection mechanisms. While the environmental provisions exist, they remain secondary to the Act's primary focus on managing AI-related risks to human autonomy and safety.
The United States, by contrast, has shifted dramatically in its approach to AI governance over the past fifteen months. In October 2023, President Biden issued Executive Order 14110, a comprehensive governance directive encompassing over 100 specific actions across 50 federal entities and establishing a baseline for interagency AI oversight and safety research. This represented the most aggressive federal AI governance posture in American history. On January 20, 2025, however, the new administration rescinded the order, and on January 23, 2025, it issued Executive Order 14179, "Removing Barriers to American Leadership in AI." The new directive explicitly prioritizes deregulation and the acceleration of AI innovation, signaling a fundamental recalibration of U.S. policy away from precautionary governance and toward market-driven development. The reversal underscores a deeper tension in the American political economy: the balance between managing systemic risks and maintaining competitive advantage in a transformative technology.
The OECD has sought a middle ground through its updated AI Principles, endorsed in May 2024 and now adhered to by 47 jurisdictions. The principles are non-binding but have proven remarkably influential: by May 2023, the OECD had already tracked over 930 AI policy initiatives across 71 jurisdictions. The OECD approach emphasizes transparency, accountability, and human-centered AI, but like most governance frameworks, it treats climate considerations as a secondary dimension rather than a primary policy objective. This de facto separation of AI governance and climate policy reflects an organizational and conceptual siloing that must be addressed.
The Climate Urgency Case for AI
The climate case for artificial intelligence rests on increasingly solid empirical ground. According to the International Energy Agency, AI applications could yield approximately 1,400 megatonnes of CO2 reductions by 2035 if deployed strategically across energy systems, industrial processes, and carbon management. This estimate, while significant, may understate the actual potential. A 2025 peer-reviewed study published in a leading journal on environmental sustainability estimates annual CO2-equivalent reductions between 3.2 and 5.4 billion tonnes by 2035 when accounting for AI's role in optimization, materials science, renewable energy deployment, and advanced monitoring systems. To contextualize this figure: global energy-related CO2 emissions in 2024 totaled approximately 37 gigatonnes, meaning AI could theoretically address roughly 9 to 15 percent of current annual emissions through efficiency gains and accelerated decarbonization.
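The arithmetic behind that 9 to 15 percent range is worth making explicit. The short Python sketch below simply reproduces it from the figures cited above; the inputs are the estimates quoted in this section, not new data.

```python
# Reproduce the 9-15% range from the estimates cited above.
baseline_gt = 37.0          # approx. global energy-related CO2 emissions, 2024 (Gt)
low_gt, high_gt = 3.2, 5.4  # study's estimated annual reductions by 2035 (Gt CO2e)

low_share = low_gt / baseline_gt
high_share = high_gt / baseline_gt
print(f"AI-addressable share of current emissions: {low_share:.0%} to {high_share:.0%}")
# -> AI-addressable share of current emissions: 9% to 15%
```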
The mechanisms are well understood and increasingly demonstrated in practice. Machine learning algorithms optimize renewable energy grids, reducing curtailment and maximizing output from wind and solar installations. AI-driven materials discovery accelerates the development of battery chemistries and carbon capture technologies. Predictive analytics enable precision agriculture, reducing water consumption and fertilizer use while maintaining crop yields. Advanced monitoring systems track forest degradation, illegal fishing, and methane leaks at unprecedented scales. In the financial sector, AI models can price climate risks more accurately, redirecting capital toward sustainable investments. The technological potential is genuine and substantial.
Yet this potential is shadowed by a significant liability: the computational intensity of AI itself. Data centers consumed approximately 415 terawatt-hours of electricity globally in 2024, accounting for roughly 1.5 percent of worldwide electricity generation. This figure is projected to grow substantially, with estimates suggesting data center electricity consumption could reach 945 terawatt-hours by 2030, more than doubling in six years. Training large language models consumes enormous quantities of energy; a single large model can generate several hundred metric tonnes of carbon emissions during its development. This creates a paradox: the technology that might solve climate change is itself becoming a significant energy consumer and emissions source if powered by fossil fuels or if its deployment is not carefully managed.
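To make that trajectory concrete, the sketch below derives the growth rate implied by the two consumption figures above; the calculation is ours, not the IEA's.

```python
# Implied growth in data-center electricity demand, 2024-2030,
# using the consumption figures cited above.
twh_2024, twh_2030 = 415.0, 945.0

factor = twh_2030 / twh_2024   # ~2.28x over six years
cagr = factor ** (1 / 6) - 1   # compound annual growth rate, ~14.7%
print(f"{factor:.2f}x overall, roughly {cagr:.1%} per year")
```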
The resolution to this paradox is not to abandon AI development but to ensure that governance frameworks explicitly internalize the climate implications of both AI's potential and its costs. Currently, they do not.
Where Governance and Climate Collide
The separation of AI governance and climate policy creates three distinct categories of misalignment, each with serious implications for the effectiveness of both policy domains.
The first is temporal misalignment. Artificial intelligence operates on cycles of months and years: new models are released constantly, training methodologies evolve rapidly, and competitive pressures drive acceleration. Climate policy, by contrast, operates on decades-long timescales. The Paris Agreement targets are set for 2030 and 2050; carbon budgets are calculated across generations. AI governance frameworks, designed for near-term risk management, largely ignore long-term climate trajectories. The EU AI Act's environmental provisions, for instance, require energy documentation but do not establish mandatory emissions reduction targets or tie AI deployment decisions to climate commitments under the Paris Agreement. Conversely, most national climate policies adopted to date contain little explicit guidance on how AI should be developed, deployed, and constrained in service of climate goals.
The second is functional misalignment. AI infrastructure (data centers, computing clusters, supply chains for semiconductors) operates largely independently from formal climate policy mechanisms. A data center in Virginia is subject to state environmental regulations and potentially federal EPA oversight, but these frameworks were designed without reference to the data center's role in training climate-solution AI systems. This functional separation means that decisions about where data centers are located, how they are powered, and how aggressively they operate are made through market mechanisms and local regulation, not through coherent climate policy. A company might develop a highly effective AI system for optimizing renewable energy grids while powering its training on coal-generated electricity in a jurisdiction with weak grid decarbonization targets, a scenario that is entirely permissible under current governance structures but arguably counterproductive for climate outcomes.
The third is democratic governance misalignment. There is an emerging risk that narrow AI governance frameworks, focused on a limited set of risks to human autonomy or labor market disruption, could inadvertently narrow the policy options available for climate action. If AI governance becomes overly prescriptive or restrictive before the climate applications of AI have been fully demonstrated and scaled, policymakers may face pressure to exempt climate-focused AI systems from certain requirements, creating regulatory inconsistency. Alternatively, if climate considerations become afterthoughts in AI governance design, we risk entrenching decisions about AI architecture, infrastructure, and deployment that are suboptimal from a climate perspective. The risk is not hypothetical: several major technology companies have already argued that certain environmental reporting requirements under the EU AI Act are excessively burdensome, potentially signaling future attempts to weaken climate-related provisions in AI regulation.
A Framework for Alignment
The path forward requires integrating climate considerations systematically into AI governance while maintaining the innovation momentum necessary to realize AI's climate potential. We propose a framework built on six principles that should guide policymakers as they design and implement AI governance mechanisms.
First: Mandatory Climate Impact Assessments. Following UNESCO's recommendation, all AI systems deployed at significant scale should be subject to climate impact assessments before commercialization. These assessments should quantify the lifecycle emissions associated with model development, deployment, and operation, and should be made publicly available in a standardized format. The assessments should also estimate the potential climate benefits of the system, creating a transparent accounting that allows investors, regulators, and the public to evaluate whether an AI system's climate benefits outweigh its costs. This is not a call for eliminating any AI system that consumes electricity; rather, it is a call for transparency and informed decision-making. CSIS has advocated for similar approaches in grid optimization, and the precedent of environmental impact assessments in other industries suggests that standardized climate impact assessments are administratively feasible.
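To make this concrete, the sketch below shows one hypothetical shape such a standardized assessment could take. Every field name and number is illustrative; none is drawn from an existing regulation or filing.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ClimateImpactAssessment:
    """Hypothetical fields for a standardized, publicly filed disclosure."""
    system_name: str
    training_energy_mwh: float             # measured energy for model development
    grid_carbon_intensity: float           # kg CO2e per kWh at the training site
    annual_inference_energy_mwh: float     # projected operational energy
    estimated_annual_benefit_tco2e: float  # claimed mitigation, method cited

    def lifecycle_emissions_tco2e(self, years: int = 5) -> float:
        # MWh x (kg CO2e / kWh) = tonnes CO2e, since 1 MWh = 1,000 kWh.
        total_mwh = self.training_energy_mwh + years * self.annual_inference_energy_mwh
        return total_mwh * self.grid_carbon_intensity

# Illustrative filing for a fictional system.
report = ClimateImpactAssessment(
    system_name="grid-optimizer-v1",
    training_energy_mwh=1_200,
    grid_carbon_intensity=0.25,
    annual_inference_energy_mwh=400,
    estimated_annual_benefit_tco2e=50_000,
)
print(json.dumps(asdict(report), indent=2))   # standardized public format
print(report.lifecycle_emissions_tco2e())     # -> 800.0 t CO2e over 5 years
```

Even a schema this simple turns the benefit-versus-cost comparison the principle calls for (here, 50,000 tonnes of claimed annual benefit against roughly 800 tonnes of five-year lifecycle emissions) into a matter of public record rather than proprietary analysis.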
Second: Alignment of Incentive Structures. Governance frameworks should explicitly reward AI development that serves climate mitigation and penalize AI applications that undermine climate goals. This can be achieved through carbon pricing mechanisms that apply to data center electricity, investment tax credits for AI systems meeting climate benchmarks, and procurement preferences for government agencies buying AI tools. The signal should be unambiguous: climate-aligned AI development is not merely permitted but actively encouraged. Conversely, AI systems that demonstrably increase emissions without compensating climate benefits should face higher regulatory burdens. This principle does not require identifying winners and losers ex ante; rather, it requires constructing incentives such that the market rewards climate-aligned innovation.
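A stylized example shows how such a price signal would bite. The figures below, including the carbon price and the grid intensities, are illustrative assumptions, not proposed rates.

```python
# Stylized carbon levy on data-center electricity (illustrative figures only).
annual_energy_mwh = 10_000
carbon_price_usd_per_tonne = 100.0

# Clean vs fossil-heavy grid, in kg CO2e per kWh (numerically, tonnes per MWh).
for grid, intensity in [("clean grid", 0.05), ("fossil-heavy grid", 0.40)]:
    levy = annual_energy_mwh * intensity * carbon_price_usd_per_tonne
    print(f"{grid}: ${levy:,.0f} per year")
# clean grid: $50,000 per year; fossil-heavy grid: $400,000 per year
```

The eightfold cost wedge between the two grids is exactly the unambiguous signal this principle describes: it rewards siting and power-purchasing decisions that favor clean electricity without regulators having to pick winners ex ante.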
Third: Standard Methodologies for Emissions Accounting. The Brookings Institution has argued persuasively that the lack of standardized methodologies for calculating AI model emissions has created confusion and enabled strategic underreporting. Different organizations use different baselines, different accounting periods, and different assumptions about grid carbon intensity. A global standard, perhaps developed through the International Organization for Standardization (ISO) or the OECD, would enable meaningful comparison across systems and organizations. This standard should align with the Greenhouse Gas Protocol, the widely accepted framework for corporate emissions accounting, ensuring that AI emissions are integrated into existing corporate sustainability reporting rather than treated as a separate category.
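The scale of the problem is easy to demonstrate. In the sketch below, a single hypothetical training run is reported under three different grid-intensity assumptions; the intensities are indicative round numbers chosen for illustration, not measurements.

```python
# The same training run, reported under different grid-intensity assumptions.
TRAINING_ENERGY_MWH = 5_000  # hypothetical training run

assumed_intensity = {        # indicative kg CO2e per kWh, illustrative only
    "hydro-heavy grid": 0.03,
    "average grid": 0.25,
    "coal-heavy grid": 0.80,
}

for grid, kg_per_kwh in assumed_intensity.items():
    tonnes = TRAINING_ENERGY_MWH * kg_per_kwh   # MWh x kg/kWh = t CO2e
    print(f"{grid:>16}: {tonnes:>5,.0f} t CO2e")
# Reported emissions span 150 to 4,000 tonnes for the identical workload.
```

A single workload can thus be reported at anywhere from 150 to 4,000 tonnes depending on the baseline chosen, which is precisely the ambiguity a common standard would eliminate.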
Fourth: Integration with Climate Policy Architecture. AI governance should be explicitly nested within national and international climate policy frameworks. This means that AI development decisions should be informed by Nationally Determined Contributions (NDCs) under the Paris Agreement, and conversely, that climate policies should account for the role of AI in achieving their targets. The UNFCCC's AI for Climate Action Initiative, launching its 2024-2027 workplan, provides an institutional platform for this integration. National AI strategies should include explicit climate chapters, and climate action plans should include explicit AI chapters. This integration will require coordination across traditionally siloed government ministries and international organizations, but the administrative machinery already exists.
Fifth: Transparent Governance of Frontier AI Development. The most capable AI systems, those with the greatest potential for both climate benefit and systemic risk, should be subject to transparent, participatory governance processes that include climate experts alongside technologists and safety researchers. The WEF's Frontier MINDS Platform, launched to scale AI climate use cases, models one approach to this kind of inclusive governance. Other models might include mandatory climate working groups within AI research organizations, or international committees analogous to the IPCC that assess AI's role in climate mitigation and adaptation. Crucially, these governance processes should be transparent enough that civil society, indigenous communities, and Global South nations can meaningfully participate in decisions about how AI systems are developed and deployed.
Sixth: Just Transition and Equity Considerations. As AI increasingly drives climate solutions, governance frameworks must ensure that the benefits are distributed equitably and that workers and communities displaced by AI-enabled climate transitions have access to support and opportunity. This principle extends beyond labor market concerns to include ensuring that AI-driven climate monitoring and adaptation systems serve the needs of vulnerable populations, that AI development capacity is not concentrated in wealthy nations, and that intellectual property frameworks do not prevent Global South countries from accessing AI climate solutions. The economics of AI governance should not perpetuate the historical pattern in which wealthy nations capture the benefits of new technologies while externalizing their costs.
What This Means
For Economics & AI for Earth, this framework represents both an enormous opportunity and an urgent imperative. Our foundation was established on the conviction that smarter policies could align economic incentives with environmental sustainability, and that artificial intelligence, properly governed, is a lever for achieving that alignment at scale. The current fragmentation of AI governance and climate policy is the opposite of smart governance; it is governance by institutional accident rather than deliberate design.
The framework we have outlined is ambitious but achievable. It does not require inventing new international institutions or abandoning existing regulatory approaches; rather, it requires recognizing that AI governance and climate policy are not separate domains but deeply interconnected challenges. Policymakers across the EU, the United States, and the OECD are actively writing the rules that will govern AI development for the next decade. They have both the opportunity and the responsibility to write those rules in a way that aligns AI's immense potential with humanity's most urgent need: a rapid transition to a zero-carbon economy.
Economics & AI for Earth is committed to contributing rigorous research and policy analysis to this challenge. Our Climate Economics Modeling and Policy & Markets research tracks are actively investigating how AI systems can be incentivized toward climate-aligned outcomes, how emissions from AI should be accurately accounted for, and how governance structures can evolve to maintain policy coherence as technology rapidly advances. We are working with policymakers in the United States, the European Union, and emerging economies to implement elements of this framework. The work is urgent, the stakes are high, and the window for getting the governance architecture right is rapidly closing. We invite researchers, policymakers, and practitioners to engage with this agenda.