When the Fed Meets in Silicon
How AI Is Turning Monetary Policy into a Macroeconomic Experiment
What if Macroeconomists Could Finally Run Experiments?
That is the startling implication of a new paper by Tara Sinclair and Sophia Kazinnik, titled “FOMC in Silico: A Multi-Agent System for Monetary Policy Decision Modeling.” Using large language models (LLMs), they built a synthetic FOMC—an ensemble of AI-generated versions of Jerome Powell, Michelle Bowman, Chris Waller, and others—capable of debating, dissenting, engaging in water-cooler talk, and ultimately voting on monetary policy.
To be clear, we already have DSGEs and VARs for simulating macroeconomic outcomes under different shocks or policy rules. What Sinclair and Kazinnik are experimenting with is the policymaking process itself: how a committee of imperfect humans, armed with noisy data and facing political pressure, reaches decisions. The simulated decisions, in turn, tell us which shocks and policy rules to feed into our macroeconomic models, and that pairing is what makes true macroeconomic experimentation possible.
This new possibility of experimental macroeconomics was the focus of this week’s Macro Musings podcast with Tara Sinclair. Below, I outline some of the possibilities this new tool opens up. But first, check out this video clip from the episode, where Tara explains how the simulated Fed officials interact.
Simulating the FOMC: What We Learned and What Comes Next
To understand the possible uses of this new tool, we first need to see what Sinclair and Kazinnik actually built. In “FOMC in Silico,” they design a synthetic FOMC composed of AI versions of Powell, Waller, Bowman, and others, each trained on recent speeches and public data to capture their real-world personalities and views.
These “Sim” policymakers deliberate in meetings that mimic the real FOMC process—complete with agenda-setting, district perspectives, and even pre-meeting chatter—before voting on interest rates. Tested on the July 2025 meeting, the simulated committee produced outcomes within the actual policy range and behaved plausibly under stress. Weakening the chair’s authority and introducing political pressure led to more dissent without changing the median decision, while revised jobs data shifted the group slightly more dovish. The result is a working proof-of-concept that deliberation, data, and institutional design can all be modeled in silico.
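To fix ideas, the deliberate-then-vote loop can be caricatured in a few lines of Python, with simple rule-based agents standing in for the paper's LLM personas. This is a toy sketch, not the authors' implementation: every name, parameter, and number below is illustrative.

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class SimPolicymaker:
    """Rule-based stand-in for an LLM persona agent (illustrative only)."""
    name: str
    bias_bps: int           # dovish (negative) or hawkish (positive) lean
    chair_deference: float  # 0..1, willingness to converge toward the chair

    def vote(self, chair_proposal: int, chair_weight: float, pressure_bps: int) -> int:
        """Blend own lean with the chair's proposal; political pressure shifts the lean."""
        own_view = chair_proposal + self.bias_bps + pressure_bps
        w = self.chair_deference * chair_weight
        blended = (1 - w) * own_view + w * chair_proposal
        return int(round(blended / 25) * 25)  # rates move in 25 bp steps

def run_meeting(members, chair_proposal, chair_weight=1.0, pressure_bps=0):
    """Return the committee's median decision and the number of dissents."""
    votes = [m.vote(chair_proposal, chair_weight, pressure_bps) for m in members]
    decision = median(votes)
    dissents = sum(v != decision for v in votes)
    return decision, dissents

committee = [
    SimPolicymaker("Chair", 0, 1.0),
    SimPolicymaker("Dove A", -25, 0.8),
    SimPolicymaker("Hawk B", 25, 0.8),
    SimPolicymaker("Centrist C", 0, 0.9),
    SimPolicymaker("Dove D", -25, 0.5),
]

# Baseline: strong chair, no outside pressure (rate target in basis points).
baseline = run_meeting(committee, chair_proposal=450)
# Stress scenario: weakened chair authority plus dovish political pressure.
stressed = run_meeting(committee, chair_proposal=450, chair_weight=0.3, pressure_bps=-25)
```

Even this crude version reproduces the qualitative pattern the authors report: weakening the chair's pull and adding pressure produces more dissenting votes, while the committee's decision still lands on a plausible 25-basis-point increment.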
The potential applications are vast. Researchers could use this framework to experiment with leadership styles—asking, for instance, how a different chair might steer discussion or shape consensus—or to test how the timing of meetings relative to key data releases affects decisions. It could also explore institutional design questions, such as whether the rotation of regional presidents matters, or how meeting structure (who speaks first, how agenda items are framed) influences dissent and convergence.
Another rich line of inquiry is historical and counterfactual analysis. By training bespoke models without access to post-event information, researchers could reconstruct past committees—say, the Nixon–Burns or Volcker FOMCs—and simulate how they might have behaved under different data, shocks, or political pressures. Similarly, an “all-star” virtual FOMC of past policymakers could be tested against today’s economy, producing new insights into how institutional memory and leadership philosophies matter for stabilization policy.
Beyond governance, the model could serve as a sandbox for communication and transparency. Simulated versions of the Fed could help policymakers evaluate whether too much or too little public signaling enhances consensus or breeds confusion. It could even test how markets might react to different communication regimes, by comparing simulations that rely solely on public data with those augmented by internal forecasts or confidential information.
A Future for Macroeconomic Experiments
All of this points toward the possibility of a true macroeconomic experiment—one that links human-like decisionmaking to formal modeling. Step one is to use large language models to create a synthetic FOMC, capable of simulating the deliberative process, testing alternative institutional setups, and generating counterfactual policy paths under different information or political conditions. Step two is to plug those simulated policy paths into our macro models—DSGE, VAR, or HANK frameworks—to see how the economy would respond. In doing so, we close the loop between behavior and outcome, between the messy psychology of policymaking and the structured dynamics of macroeconomics. For the first time, we can experiment not just with the shocks that hit the economy, but with the decision processes that determine how those shocks are managed.
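Step two of that loop can be sketched with a deliberately simple backward-looking model: take a policy-rate path (which, in the full pipeline, the synthetic committee would generate) and trace out the output gap and inflation. The IS/Phillips-curve structure and all parameter values here are illustrative assumptions, not taken from the paper or any estimated model.

```python
def simulate_economy(rate_path, r_star=2.0, pi_star=2.0,
                     sigma=0.5, kappa=0.3, rho=0.8):
    """Toy backward-looking IS/Phillips-curve model (illustrative parameters).

    Each period, the output gap falls with the real-rate gap (IS curve),
    and inflation drifts with the output gap (Phillips curve).
    """
    gap, inflation = 0.0, pi_star
    path = []
    for rate in rate_path:
        real_rate_gap = (rate - inflation) - r_star   # policy stance vs. neutral
        gap = rho * gap - sigma * real_rate_gap       # IS curve with persistence
        inflation = inflation + kappa * gap           # accelerationist Phillips curve
        path.append((round(gap, 2), round(inflation, 2)))
    return path

# Step one (not shown): a synthetic committee produces a policy-rate path.
# Step two: feed alternative paths into the macro model and compare outcomes.
tight_path = simulate_economy([5.0, 5.0, 5.0, 5.0])  # restrictive policy
loose_path = simulate_economy([2.0, 2.0, 2.0, 2.0])  # accommodative policy
```

The point is not the model, which any DSGE, VAR, or HANK framework would replace, but the interface: once the committee's deliberations produce a rate path, comparing counterfactual paths through the same macro block is mechanical.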
P.S. Yes, another implication of this work is that AI could eventually take over much of the FOMC’s work. I considered this possibility in an earlier post and on a separate podcast, but it’s not what Sinclair and Kazinnik are arguing here. Still, AI-driven structural change is coming for the Fed and other central banks, just as it is for many other industries. It is worth thinking through the implications of that possibility, too.


