The Models That Shape Our Future
Every major climate policy decision made in the last thirty years rests on a foundation of mathematical models. When the EPA calculates the social cost of carbon (the dollar value of the damage caused by each ton of CO2 emitted), it relies on integrated assessment models, or IAMs. When the Paris Agreement was negotiated, when central banks stress-tested their exposure to climate risk, when corporations set net-zero targets, the evidence informing these decisions flowed through complex computer simulations that attempt to map the relationship between emissions, atmospheric physics, economic systems, and human welfare.
These models are remarkable achievements. The most widely used among them, including DICE (developed by Nobel laureate William Nordhaus), REMIND, GCAM, PAGE, and FUND, represent decades of collaborative scientific effort. They integrate climate science, economics, and demographics into frameworks that attempt to answer the most consequential question of our time: How much should we spend today to reduce emissions and avoid climate damages tomorrow?
The problem is that these models are constrained by the computational realities of the 1990s and 2000s, when they were designed. They make enormous simplifying assumptions. They treat the world as a handful of regions. They omit extreme weather events. They represent damage functions, the relationships between temperature change and economic harm, using oversimplified equations that flatten the profound complexities of real-world climate impacts. And most critically, they often exclude or drastically underestimate the effects of planetary tipping points: irreversible threshold events like Amazon dieback, ice sheet collapse, or Atlantic Meridional Overturning Circulation shutdown that could transform the entire calculus of climate economics.
This is not a failure of the modelers. It is a limitation imposed by computational feasibility. To run thousands of scenarios across different emissions pathways, policy assumptions, and technological futures, IAMs must be relatively simple: accuracy is sacrificed for speed. Yet as climate science has evolved and our understanding of cascading risks has deepened, this tradeoff has become increasingly untenable.
Where Current Models Fall Short
The specific ways that today's IAMs fall short tell us why we need a fundamental shift in how we approach climate-economic modeling.
First, tipping points and their economic consequences are drastically underrepresented. A 2023 PNAS study noted that environmental tipping points can "profoundly alter cost-benefit analysis, justifying much more stringent climate policy" than traditional models suggest. Yet most integrated assessment models treat the world as a relatively linear system: more emissions lead to more warming, which leads to proportionally larger damages. Real climate dynamics do not work this way. Once critical thresholds are crossed, damages accelerate nonlinearly. Systems flip. Feedback loops amplify. But capturing these dynamics computationally has remained prohibitively expensive for most IAMs.
Second, damage functions are oversimplified. The relationship between a given degree of warming and economic impact is assumed to follow a smooth curve, often estimated from historical data that extends only moderately above baseline temperatures. Real climate impacts are not distributed evenly. They devastate some regions while leaving others relatively unscathed. They destroy agricultural productivity in some areas and reshape labor markets in others. They generate cascading failures in supply chains, financial systems, and insurance markets. Yet representing this heterogeneity and these nonlinearities in a computational framework has meant accepting enormous simplifications or running models that take months to produce a single scenario.
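To make the contrast concrete, here is a minimal sketch in Python comparing a smooth quadratic damage function of the kind DICE uses with one that adds a tipping-point jump past a temperature threshold. The quadratic coefficient loosely follows DICE-2016; the threshold, jump size, and steepness are our own illustrative assumptions, not calibrated values.

```python
import numpy as np

# Two illustrative damage functions, expressed as the fraction of global
# GDP lost at a given level of warming. The quadratic coefficient is
# roughly DICE-2016's; threshold, jump, and steepness are assumptions.

def smooth_damages(temp_c, a=0.00236):
    """DICE-style quadratic: damages grow smoothly with warming."""
    return a * temp_c ** 2

def tipping_damages(temp_c, a=0.00236, threshold=2.5, extra=0.05, steepness=4.0):
    """Same quadratic baseline plus a sigmoid jump near a tipping threshold."""
    jump = extra / (1.0 + np.exp(-steepness * (temp_c - threshold)))
    return a * temp_c ** 2 + jump

for t in [1.0, 2.0, 3.0, 4.0]:
    print(f"{t:.0f} C warming: smooth {smooth_damages(t):.2%}, "
          f"with tipping point {tipping_damages(t):.2%}")
```

Well below the threshold the two curves nearly coincide, which is why smooth functions fitted to historical data can look adequate; past it, the tipping-point version reports damages roughly three times larger.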
Third, spatial heterogeneity is severely limited. Global IAMs use highly aggregated regional representations, sometimes just a handful of regions representing entire continents. This aggregation obscures the profound differences in climate vulnerability and adaptive capacity across countries, subnational regions, and communities. A model that treats South Asia as a single unit cannot capture the differential impacts on agricultural production in different river basins, the varying exposure to monsoon intensification, or the disparate economic consequences for urban centers versus rural areas. Yet running such detailed simulations across thousands of climate and policy scenarios has been computationally prohibitive.
The practical consequence of these limitations is visible in policy. When the EPA estimated the social cost of carbon at $190 per ton in 2023 (a figure projected to rise to an estimated $255 per ton in 2025 and potentially $370 per ton by 2050), the number carried tremendous uncertainty. The range of estimates in the literature spans from under $50 per ton to over $400 per ton, reflecting fundamental disagreements about damage functions, discount rates, and future productivity growth. This uncertainty has real policy consequences. A difference of a few percentage points in the discount rate, the assumption about how much we should value harms to future generations relative to present-day costs, can produce a tenfold difference in the social cost of carbon. The famous debate between economist Nicholas Stern (who favored near-zero discount rates) and William Nordhaus (who favored higher ones) essentially produced two entirely different answers to the same question.
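The discount-rate sensitivity is easy to verify. The toy calculation below (our own round numbers, not the EPA's methodology) discounts $1,000 of climate damage occurring a century from now at three different rates:

```python
# Present value today of $1,000 of climate damage occurring in 100 years,
# at three discount rates. A toy illustration, not the EPA's methodology.
damage, years = 1000.0, 100
for rate in [0.015, 0.03, 0.045]:
    pv = damage / (1.0 + rate) ** years
    print(f"discount rate {rate:.1%}: present value ${pv:,.2f}")
```

Moving from 1.5% to 4.5% shrinks the present value from about $226 to about $12: a three-point change in one assumption swings the answer by more than an order of magnitude.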
These models are not wrong. They are simply constrained by the computational paradigm within which they were built. And as climate impacts accelerate, as tipping points draw closer, and as policy decisions become more urgent, their constraints have begun to feel not merely inconvenient but potentially dangerous.
The AI Revolution in Climate Science
Artificial intelligence is beginning to dissolve these constraints. Not by replacing traditional climate models, but by augmenting them, accelerating them, and extending their reach in ways that were computationally infeasible before.
Machine learning excels at precisely the problems that have stymied traditional IAMs: detecting nonlinear relationships, identifying tipping points in high-dimensional data, and emulating complex systems at a fraction of their computational cost. A 2024 Nature Climate Change study from researchers across leading institutions demonstrated the power of this approach by generating 30,000 synthetic climate mitigation scenarios using AI-accelerated methods, a volume of analysis that would have taken years using traditional IAM frameworks.
Deep learning has also proven valuable for early warning systems. PNAS-published research has shown that neural networks can identify precursors to tipping points by detecting characteristic slowing and increased variance in climate data, signals that a system is approaching a critical threshold. These early warning systems could provide the signal resolution necessary to update damage functions in real time, adjusting policy frameworks as new information about tipping point proximity emerges.
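The statistical signature these systems look for can be shown in miniature. The sketch below (a toy autoregressive system of our own devising, far simpler than real Earth-system data) weakens a system's restoring force over time and watches lag-1 autocorrelation and variance climb as the tipping point approaches:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a system whose restoring force weakens over time, as happens
# when a system approaches a bifurcation: x[t+1] = phi[t]*x[t] + noise.
n = 2000
phi = np.linspace(0.5, 0.99, n)  # mean reversion weakens toward the tipping point
x = np.zeros(n)
for t in range(n - 1):
    x[t + 1] = phi[t] * x[t] + rng.normal(scale=0.1)

def rolling_stats(series, window=400):
    """Lag-1 autocorrelation and variance over a sliding window."""
    ac, var = [], []
    for i in range(len(series) - window):
        w = series[i:i + window]
        ac.append(np.corrcoef(w[:-1], w[1:])[0, 1])
        var.append(w.var())
    return np.array(ac), np.array(var)

ac, var = rolling_stats(x)
print(f"early window: autocorrelation {ac[0]:.2f}, variance {var[0]:.4f}")
print(f"late window:  autocorrelation {ac[-1]:.2f}, variance {var[-1]:.4f}")
# Both statistics rise as resilience is lost: the "critical slowing down"
# signature that neural networks are trained to detect in noisier data.
```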
Perhaps most transformatively, machine learning emulators can replicate the behavior of complex computational components at orders of magnitude lower cost. Rather than running a full climate model to project future temperatures under a specific emissions pathway, an AI emulator trained on thousands of previous simulations can generate the same output in milliseconds. This unlocks new possibilities: regional damage estimates that previously would have demanded thousands of hours of supercomputing time can now be computed locally. Uncertainty quantification (testing how sensitive policy conclusions are to different assumptions) becomes tractable. Optimization algorithms can search across millions of policy combinations to identify those that maximize welfare while meeting constraints.
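Here is a minimal sketch of the emulator pattern, with a toy stand-in for the expensive simulator; the training setup, slope, and choice of surrogate are all assumptions for illustration rather than any production system:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for an expensive simulator mapping cumulative emissions (GtC)
# to warming (C). In practice this would be an archive of full climate
# model runs; the slope and noise here are assumptions for illustration.
def expensive_simulator(cumulative_gtc):
    return 0.0018 * cumulative_gtc + rng.normal(scale=0.05, size=cumulative_gtc.shape)

# "Training set": a few hundred precomputed simulations.
X = rng.uniform(0, 2000, size=500)
y = expensive_simulator(X)

# Emulator: fit a cheap surrogate. Real emulators are typically neural
# networks or Gaussian processes; a cubic polynomial shows the pattern.
emulate = np.poly1d(np.polyfit(X, y, deg=3))

# Once trained, a million queries cost essentially nothing.
scenarios = rng.uniform(0, 2000, size=1_000_000)
warming = emulate(scenarios)  # no simulator calls, runs in milliseconds
print(f"mean projected warming across 1,000,000 scenarios: {warming.mean():.2f} C")
```

The economics are the point: the simulation cost is paid once, up front, and then amortized across millions of downstream queries.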
Organizations around the world are already advancing these capabilities. Google DeepMind and Google Research have released GenCast, NeuralGCM, and WeatherNext 2, neural network weather forecasting systems that rival or outperform traditional physics-based models. Microsoft's Aurora performs weather simulations 5,000 times faster than traditional forecasting methods. Google has improved solar forecasting accuracy by 40% in India using machine learning. The Defense Advanced Research Projects Agency (DARPA) launched the AI-Assisted Climate Tipping-Point Modeling (ACTM) program, specifically designed to develop AI systems for detecting and characterizing critical transitions in Earth systems. In September 2024, the University of Chicago established its AI for Climate Initiative, bringing together climate scientists and machine learning researchers to accelerate exactly this convergence.
The momentum is undeniable. These are not isolated experiments. They represent a fundamental shift in how climate science generates its most essential outputs.
Reimagining the Social Cost of Carbon
The social cost of carbon is not merely an economic statistic. It is the policy hinge on which trillions of dollars of climate investment swing. It determines whether a particular climate policy passes cost-benefit analysis. It shapes corporate net-zero targets. It influences how regulators weigh climate considerations against other priorities. And it is built almost entirely on the outputs of integrated assessment models.
Yet the social cost of carbon is also profoundly uncertain, and much of that uncertainty flows directly from the limitations of the models that estimate it. Resources for the Future estimated the SCC at $185 per ton as of 2023, while the EPA's central estimate reached $190 per ton the same year. These numbers agree superficially but mask vast disagreements about the underlying damage functions, assumptions about technological change, and, most consequentially, assumptions about how to value the welfare of future generations relative to today's.
AI can help resolve some of this uncertainty. Better damage functions, estimated from richer climate model outputs and validated against historical data at higher resolution, would narrow the range of plausible SCC estimates. Early warning systems for tipping points would allow economists to update their estimates of tail risks as new information emerges. Machine learning optimization could help identify the portfolio of climate policies that maximizes welfare given our current best understanding of climate dynamics and economic impacts, rather than relying on simplified assumptions about cost curves.
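The optimization step looks like this in miniature. The welfare function below is a deliberately crude assumption of ours, not a calibrated model; what matters is that with a cheap emulated objective in the loop, searching a million candidate policies takes a fraction of a second:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy welfare objective: choose an economy-wide abatement fraction that
# balances avoided damages against convex mitigation costs. Every
# coefficient below is an illustrative assumption, not a calibrated value.
def welfare(abatement):                    # abatement fraction in [0, 1]
    avoided_damages = 4.0 * abatement      # % of GDP saved
    mitigation_cost = 2.5 * abatement ** 2.6
    return avoided_damages - mitigation_cost

# Random search over a million candidate policies: trivial when the
# objective is cheap, infeasible with a full IAM inside the loop.
candidates = rng.uniform(0.0, 1.0, size=1_000_000)
best = candidates[np.argmax(welfare(candidates))]
print(f"welfare-maximizing abatement fraction: {best:.3f}")
```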
The stakes are enormous. A social cost of carbon that understates true damages could lead to underinvestment in climate mitigation by trillions of dollars. One that overstates damages could justify economically inefficient policies. Getting this number right (or, more precisely, managing uncertainty about this number in a more sophisticated way) is one of the highest-leverage applications of AI to climate economics.
From Models to Policy: Bridging the Gap
Smarter models alone will not save the world. The critical challenge is translating improved climate-economic models into better policy.
This translation is harder than it appears. Scientists often assume that better evidence automatically produces better decisions. But policy is shaped by institutions, politics, uncertainty tolerance, and distributional concerns. A more sophisticated estimate of the social cost of carbon will not automatically shift policy unless policymakers understand it, trust it, and see a path to implementing it politically.
This is where the convergence of AI and economics becomes especially powerful. AI can help bridge the evidentiary gap between climate science and policy in several ways. First, it can make complex analyses more transparent and auditable. Rather than trusting a black-box model run by specialists, policymakers can interact with AI systems that explain their outputs in human-readable terms, decompose uncertainty into its constituent sources, and allow for stress-testing under different assumptions. Second, AI can accelerate the feedback loop between modeling and real-world outcomes. As climate impacts occur and are recorded, they can be fed back into ML systems that continuously update their estimates of damage functions and tipping point risks. This reduces the lag between new evidence and policy adjustment.
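That second feedback loop is, at its core, an old statistical idea given new data streams. Here is a minimal Bayesian sketch that refines a single damage coefficient as yearly observations of warming and losses arrive; the true coefficient, noise level, and prior are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Refine a single damage coefficient "a" in damages = a * T^2 as yearly
# (warming, observed loss) pairs arrive. The true value, noise level, and
# prior below are assumptions chosen purely for illustration.
true_a, noise = 0.006, 0.002
prior_mean, prior_var = 0.00236, 1e-5     # start from a DICE-like prior

for year in range(30):
    temp = rng.uniform(1.0, 3.0)                         # observed warming
    loss = true_a * temp ** 2 + rng.normal(scale=noise)  # observed damages
    x = temp ** 2                                        # regressor for "a"
    # Conjugate Gaussian update for one observation of loss = a*x + noise:
    post_var = 1.0 / (1.0 / prior_var + x * x / noise ** 2)
    post_mean = post_var * (prior_mean / prior_var + x * loss / noise ** 2)
    prior_mean, prior_var = post_mean, post_var

print(f"posterior damage coefficient: {prior_mean:.5f} "
      f"(+/- {prior_var ** 0.5:.5f}); true value used here: {true_a}")
```

Real systems would update far richer, nonlinear damage models, but the logic is the same: the estimate moves as soon as the evidence is recorded, not years later when models are next recalibrated.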
Third, and perhaps most importantly, AI can democratize access to sophisticated climate-economic modeling. Rather than requiring teams of specialized modelers and access to supercomputers, institutions in developing countries can run sophisticated analyses locally using emulator-based approaches. This is not merely a matter of equity, though it is that. It is a matter of policy realism. The countries most vulnerable to climate impacts must be able to conduct their own climate-economic analyses rather than relying on modeling frameworks developed in wealthy nations that may not adequately represent their specific risks and adaptive capacities.
Why We Built Economics & AI for Earth
These insights crystallized for us during months of research and conversations with leading climate scientists, economists, and AI researchers. We saw the frontier of climate modeling advancing rapidly, but we noticed something troubling: there was almost no institutional attention at the convergence of all three domains: AI, economic policy, and climate sustainability. Climate scientists were building better models but often without deep engagement with economists. AI researchers were publishing breakthroughs in machine learning but without deep understanding of the policy implications. Economists were building frameworks for climate policy but without access to the latest advances in AI-driven climate modeling.
The gap between these three domains is where the most consequential work remains undone. How should AI-improved damage functions reshape our understanding of climate tipping point risk? What economic policies best incentivize the development and deployment of AI for climate solutions? How can machine learning help us allocate limited climate finance to maximize global welfare? These questions sit at the intersection of all three domains, and no single organization has been asking them with sufficient depth of expertise across all three.
That realization led us to found Economics & AI for Earth. We are a research foundation built on the conviction that the world does not need more isolated advances in climate science, more theoretical papers on climate economics, or more AI breakthroughs divorced from real-world impact. What the world needs is institutional capacity to integrate these three domains: to build research programs that sit at their convergence and generate insights that none of them could produce alone.
We have organized our work around five research tracks that span this convergence. Our Climate Economics Modeling program is developing next-generation IAMs that leverage AI to represent tipping points, heterogeneous damages, and extreme uncertainty with previously impossible fidelity. Our Energy & Infrastructure track is analyzing how AI can optimize the deployment of renewable energy, grid flexibility, and adaptation infrastructure across regions with vastly different economic and climatic conditions. Our Biodiversity Valuation program is building AI systems to understand the economic value of ecosystem services and the damages that flow from biodiversity loss. Our Labor & Workforce track is examining how climate impacts and climate policy will reshape labor markets and how AI can help manage these transitions equitably. And our Policy & Markets track is working with governments and financial institutions to translate research insights into institutional practice.
We chose New York as our home because this is where policy, capital, and innovation intersect. We are building partnerships with leading universities, collaborating with government agencies, and engaging with the financial institutions that deploy trillions of dollars of capital in climate-related decisions. We are a 501(c)(3) research foundation precisely because we want to operate with the independence to follow the research wherever it leads, uncompromised by commercial incentives or narrow institutional agendas.
The urgency is real. In 2024, Swiss Re estimated $310 billion in economic losses from natural disasters; Munich Re estimated $320 billion. It was the fifth consecutive year in which insured losses exceeded $100 billion. By mid-century, the World Bank estimates, the world stands to lose around 10% of total economic value to climate impacts. Over 1.2 billion people are at high risk. These are not abstract future risks. They are present realities with trajectories that compound decade by decade.
What gives us hope is that the tools to respond are advancing faster than the crises themselves. AI-driven climate modeling, properly designed and rigorously validated, can help policymakers make far better decisions about climate investment, adaptation, and risk management. But only if we build the institutional capacity to connect these advances to the economic policy frameworks that actually determine global allocation of resources.
Economics & AI for Earth exists because this convergence is too important to leave to chance. We are building it because the world's most pressing problem, securing a stable climate for generations to come, demands the integration of the world's most powerful analytical tools. We are building it because "smarter policies for a sustainable world" is not a slogan but an achievable reality, if we have the courage to pursue it systematically.
The path forward is clear. The technology is ready. The policy windows are closing. And the institutions that sit at the intersection of AI, economics, and climate are the ones that will shape whether humanity responds to this crisis with clarity or chaos.