The MATS Strategy & Forecasting Track supports research on the long-horizon questions advanced AI raises, from AI timelines and geopolitical competition to institutional futures in a post-AGI world. As capabilities accelerate, key decisions about advanced AI are being made under deeper uncertainty, often before strong empirical evidence or broad consensus has emerged. Addressing these challenges requires structured forecasting, scenario analysis, geopolitical modeling, and macro-strategic thinking.
This track is focused on macro-strategy: understanding how the transition to advanced AI will unfold, how institutions and societies may adapt to highly capable AI systems, and what actions taken today can most improve outcomes in the long run. Some streams center on structured forecasting, particularly quantitative work on capabilities, timelines, compute, and economic impact. Others emphasize scenario analysis and modeling, including frontier lab dynamics, geopolitical competition, state behavior, transition scenarios, and tabletop exercises.
These forecasts and models may be used to support analysis of what policy options are available to steer the path to AGI or the post-AGI future, to describe the costs and benefits of these options, and to raise awareness of how choices being made today could expand or narrow the range of options available to future policymakers. Research in this track might explore how, why, when, and where advanced AI will prompt rapid changes in industrial production, military tactics, and general scientific research, as well as the second-order effects of these changes on geopolitics, democracy, and capitalism.
Fellows in this track need to be comfortable with uncertain inference, probabilistic claims, and writing clearly about questions where the evidence base is limited. Experience with or interest in interdisciplinary research is helpful, as many of the research questions in this track ask how changes in one area of society will affect behavior in other fields. Strong candidates have come from forecasting, economics, history, philosophy, political science, international relations, computer science, security studies, and quantitative social science, among other backgrounds.
Fellows are matched to mentors based on fit, and projects are scoped to produce concrete artifacts by program end, e.g., forecasting reports, scenarios, policy memos, strategic analyses, and peer-reviewed research. Target audiences include lab strategy and policy teams, AISI staff, national security analysts, the funders and policymakers making long-horizon decisions about advanced AI, and the broader forecasting, governance, and AI safety communities.
We are interested in mentoring projects in AI forecasting and governance. This work would build on the AI 2027 report to either do more scenario forecasting or explore how to positively affect key decision points, informed by our scenario.
We will have meetings each week to check in and discuss next steps. We will be consistently available on Slack in between meetings to discuss your research, project TODOs, etc.
The most important characteristics include:
Important but not required characteristics include:
We will talk through project ideas with the scholar.
This mentor also has a stream in the Biosecurity track.
This stream focuses on how advanced AI could enable new and dangerous physical technologies, and on assessing when risks become tractable or urgent as those capabilities arrive.
Half-hour one-on-one weekly meetings by default, with the option to extend or add ad-hoc calls when useful. I'm active on Slack and typically respond within a day for quick questions. I'm happy to read drafts and leave written feedback asynchronously between meetings.
Essential:
Preferred:
I'll talk with the fellow about what they're interested in, and we'll pick a broad area together from a few directions I'd want to pitch. From there we'll work together to scope something sharp and well-defined, with me leaning on my sense of what's tractable and high-value. The fellow then runs with the project, and we adjust as it develops.
AI macrostrategy: strategic questions about how the transition to advanced AI will happen, and what we can do now to prepare for it.
Topics of interest include better futures, power concentration, takeoff speeds, deals with AIs, space governance, and acausal trade.
Each scholar will be assigned a primary mentor who will meet with them once a week. The specifics will depend on the candidate and project.
We’re looking for people who:
It’s a bonus if you already have research experience or domain knowledge in a relevant field like philosophy or economics.
For project ideas, see here
The MATS Program is a 10-week research fellowship designed to train and support emerging researchers working on AI alignment, transparency, and security. Fellows collaborate with world-class mentors, receive dedicated research management support, and join a vibrant community in Berkeley focused on advancing safe and reliable AI. The program provides the structure, resources, and mentorship needed to produce impactful research and launch long-term careers in AI safety.
MATS mentors are leading researchers from a broad range of AI safety, alignment, governance, field-building, and security domains. They include academics, industry researchers, and independent experts who guide scholars through research projects, provide feedback, and help shape each scholar’s growth as a researcher. The mentors represent expertise in areas such as:
Key dates
Application:
The main program will then run from September 28th to December 4th, with the extension phase for accepted fellows beginning in December.
MATS accepts applicants from diverse academic and professional backgrounds, from machine learning, mathematics, and computer science to policy, economics, physics, cognitive science, biology, and public health, as well as founders, operators, and field-builders without traditional research backgrounds. The primary requirements are strong motivation to contribute to AI safety and evidence of technical aptitude, research potential, or relevant operational experience. Prior AI safety experience is helpful but not required.
Applicants submit a general application, selecting the tracks they wish to apply to (Empirical, Theory, Strategy & Forecasting, Policy & Governance, Systems Security, Biosecurity, Founding & Field-Building).
In stage 2, applicants apply to streams within those tracks and complete track-specific evaluations.
After a centralized review period, applicants who advance undergo additional evaluations, depending on the preferences of the streams they've applied to, before completing final interviews and receiving offers.
For more information on how to get into MATS, please see this page.