The MATS Policy & Governance Track supports research on how advanced AI is governed and how it should be governed. As AI capabilities accelerate, many of the hardest problems are no longer purely technical. They involve international coordination, governance under uncertainty, state capacity, regulatory design, and translating safety goals into real-world policies and institutions. Decisions made in the next 6–12 months will shape what labs build, what governments require, and what oversight looks like for years to come.
The track covers a wide range of research areas. Some streams focus on concrete governance mechanisms, including evaluations, standards, safeguards, monitoring systems, and enforcement structures. Others conduct policy and institutional analysis, such as comparative reviews of governance regimes, regulatory frameworks, and international coordination challenges. A third set engages broader questions, like how advanced AI may reshape global power dynamics or what governance approaches could meaningfully reduce catastrophic risk.
We are looking for fellows who can reason rigorously and write clearly about these topics. Strong candidates have come from policy, economics, law, political science, public administration, security studies, philosophy, computer science, forecasting, sociology, history, journalism, and science and technology studies, among other backgrounds.
Fellows are matched to mentors based on fit, and projects are scoped to produce concrete artifacts by program end, e.g., policy memos, regulatory comments, technical specifications, comparative analyses, and peer-reviewed research. Target audiences span AISI staff, lab governance teams, regulators, standards bodies, and the broader research and policy communities shaping frontier AI governance.
In the face of disaster, I suspect the government will be forced to play insurer of last resort, whether for a particular lab or for society at large. (I'm not the only one to suspect this – see e.g. here.) Designed well, I believe a federal insurance backstop could internalize catastrophic negative externalities; designed poorly, it will simply be a subsidy for AI companies. I want to design the good version, so we have it ready.
I encourage people with mechanism design (a.k.a. reverse game theory) expertise to apply, but don't be deterred if you don't have this expertise.
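To make the design question concrete, here is a minimal toy sketch in Python of why premium design matters. Everything in it is an illustrative assumption rather than part of this stream's proposal: a stylized hazard model in which catastrophe risk falls exponentially with a lab's safety spending, an assumed external-harm figure, and a grid search for the lab's privately optimal spend.

```python
# Toy sketch, not an actual proposal: compare a lab's privately optimal
# safety spending under (a) a backstop premium priced at expected external
# harm and (b) a flat, subsidized premium. All functional forms and numbers
# are illustrative assumptions.
import math

P0 = 0.10      # assumed catastrophe probability with zero safety spending
K = 0.5        # assumed effectiveness of safety spending at reducing risk
HARM = 1000.0  # assumed external harm of a catastrophe (same units as spending)

def catastrophe_prob(s: float) -> float:
    """Stylized hazard: risk falls exponentially in safety spending s."""
    return P0 * math.exp(-K * s)

def cost_risk_priced(s: float) -> float:
    """Lab's cost when the premium equals expected external harm.
    The lab internalizes the externality, so its private optimum
    coincides with the social optimum."""
    return s + catastrophe_prob(s) * HARM

def cost_flat_subsidy(s: float, flat_premium: float = 5.0) -> float:
    """Lab's cost under a flat premium that ignores its risk level.
    Safety spending is then a pure cost, so the private optimum is s = 0."""
    return s + flat_premium

def argmin_over_grid(cost, lo=0.0, hi=50.0, steps=5001):
    """Brute-force search for the spending level that minimizes cost."""
    grid = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    return min(grid, key=cost)

if __name__ == "__main__":
    print(f"risk-priced backstop: optimal spend ≈ {argmin_over_grid(cost_risk_priced):.2f}")
    print(f"flat subsidized premium: optimal spend ≈ {argmin_over_grid(cost_flat_subsidy):.2f}")
```

With these illustrative numbers, the risk-priced premium induces a safety spend of about 7.8 units (the minimizer of s + 100e^(-0.5s)), while the flat premium induces zero; the mechanism-design challenge is getting the risk pricing right when the insurer can't observe a lab's risk directly.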
1-hour weekly meetings by default for high-level guidance. I'm active on Slack and typically respond within a day for quick questions or conceptual (not code) debugging. Between meetings, expect async back-and-forth on paper structure or on experiment design and results. Scholars can also schedule ad-hoc calls if they're stuck or want to brainstorm – just ping me on Slack.
Depending on the project, I may help with writing.
If interested in the technical paper, applicants must:
For all applicants:
Preferred:
Nice-to-haves:
Not a good fit:
For technical versions of this project, I suspect the scope will naturally be fairly tight, based on the scholar's expertise. I will pose the core challenge, and over the first week the scholar and I will hammer out exactly which theoretical questions need answering and which empirical surveys need running.
For non-technical versions of this project, I will pitch a few different projects and scholars will try ones they find interesting for a week. In week 2 we'll settle on one together.
Janet Egan will mentor scholars working on policy-relevant questions at the intersection of AI compute, geopolitics, and infrastructure. Potential projects include analyzing remote access to AI chips (e.g., via cloud providers in China), mapping and interpreting the global buildout of AI data centers and energy infrastructure, and developing politically informed strategies for US–China cooperation on AI risk. The mentee will lead their research project with weekly guidance, feedback, and optional career and policy insights.
After discussing and agreeing on a topic, the mentee will play a leading role in driving the research forward, with weekly check-ins, advice, and written feedback. Optional support includes introductions to others in the field, insights into policymaking, and career advice.
Proactive, motivated individuals with experience getting deep on technical issues. Excellent attention to detail and a curious mindset. Strong communication skills and an interest in conveying technical concepts to policy and generalist audiences. An interest in data centers, geopolitics, and/or energy infrastructure is welcome.
Mentor will talk through project ideas with scholar
Escalation risks from state perceptions of AI capability, AI-enabled targeting, AI-enabled decision manipulation, and the impact of AI integration into nuclear command and control.
Mentorship will mostly consist of calls: sorting through research ideas and providing feedback. I'm also happy to review papers, and potentially to meet in person depending on timing.
Looking for intellectually curious and honest scholars, with some background on topics related to national security, game theory, or AI-enabled military and influence capabilities.
I'll talk through project ideas with scholar, or the scholar can pick from a list of projects
This stream focuses on AI policy, especially technical governance topics. Tentative project options include: technical projects for verifying AI treaties, metascience for AI safety and governance, and proposals for tracking AI-caused job loss. Scholars can also propose their own projects.
We'll meet once or twice a week (~1 hr/wk total, as a team if it's a team project). I'm based in DC, so we'll meet remotely. I (Mauricio) will also be available for async discussion, career advising, and detailed feedback on research plans and drafts.
No hard requirements. Bonus points for research experience, AI safety and governance knowledge, writing and analytical reasoning skills, and experience relevant to specific projects.
I'll talk through project ideas with scholar
Research papers (technical governance or ML) related to evaluating and mitigating dangerous AI capabilities, with a focus on what's actionable and relevant for AGI companies
I like to get daily standup messages about the progress that's been made on the project, and I'm happy to provide quick async feedback on new outputs. I'll also hold weekly meetings. I'm based at Constellation in Berkeley.
Good writers/researchers who can work independently and autonomously! I'm looking for scholars who can ship a meaningful research output end-to-end, ideally with prior experience writing relevant papers.
I may assign a project, have you pick from a list of projects, or talk through project ideas with you.
Making society safe from AI doesn't just mean making safe AI: we're figuring out how to uplift human collective intelligence, manage a highly multiagent world, and improve foresight and institutional competence, ideally learning how to make the best positive use of frontier AI systems as we go. FLF has a small, sharp team of researchers with a wide network, and we're looking to nurture new and missing approaches to minimising large-scale risks while steering toward a flourishing future.
Willing to devote a few hours per week to this: I'll keep a 30-minute or 1-hour slot available weekly and interact on Slack roughly daily. Some closer projects might be much more interactive.
Depends a lot on direction. Ideally you can make proposals and dig into things somewhat independently. Be good at explaining your thinking, and able and willing to teach me things!
For collective intelligence/human reasoning, I'd usually want someone very familiar with software production, at least skilled in software development or in product management and prototyping. Other candidates with great vision can succeed here if they're able to work with complementary talent to get things going.
For foresight, any of: polymathic/multi-STEM/futurism background, deep expertise in bio and/or AI, natsec experience or connections, unusual writer/game dev talent, safety engineering background, other background that you think I might want to hear about.
For multiagent accountability: law, economics, politics, history, or a combination, plus some familiarity with AI and agents.
I'll ask for interests and (if you have them) a proposal or two right away. We'll spend the first week or two iterating that, discussing other options, and maybe trying out little experiments. Likely we'll pick a direction then, but it's also fine if we pivot later.
Peter Henderson’s stream focuses on developing safe, aligned AI agents, with projects on scalable oversight rules informed by law and game theory, safe long-horizon exploration, and measuring “jagged” capability/safety frontiers. Scholars will join an independently driven, engineering-heavy research environment, collaborating with other MATS scholars and PhD students, with weekly 1:1s and active async mentorship.
45-minute weekly meetings by default for high-level guidance. I'm active on Slack for quick questions or conceptual (not code) debugging. Expect async back-and-forth on experiment design and results between meetings. Scholars can also schedule ad-hoc calls if they're stuck or want to brainstorm – just ping me on Slack. Other team members (PhD students) will also be around to help with brainstorming and getting unstuck.
Essential:
Nice to have, but not necessary:
Not a good fit:
Mentors in the group will pitch projects, and scholars will try ones they find interesting for a week. We'll iterate together at the end of week 1 and pick final assignments in week 2.
International coordination to reduce frontier AI risks, with a focus on China and the West.
1 hour weekly meetings by default for high-level guidance. We are active on Slack and typically respond within a day for quick questions.
Good understanding of international AI governance developments that are relevant to frontier AI safety (e.g., the Summit series, AISI network)
Good understanding of Chinese AI governance and safety (key players, key trends and institutional structures)
Good understanding of key frontier risk domains (CBRN, cyber, loss of control)
Some understanding of broader US-China relations and how they frame US-China engagement on AI/AGI specifically
We will provide a shortlist of projects that we are keen for the scholar to work on in Week 1. We'll ask scholars to scope these during that week and decide which project to focus on in Week 2.
This stream will focus on impact-oriented technical AI governance research work, potentially including research on open-weight models, applied AI safeguards research, AI incidents, technically rigorous AI policy, etc.
By default, we should expect to meet 2-3 times per week as a full group, plus ad hoc project-specific meetings.
Green flags include:
I will work with MATS scholars to iteratively refine project ideas in whatever area our interests and skills overlap. Above all, project selection will hinge on having a clear (and good) theory of impact.
I (Cas) work on a range of projects from technical safeguards to technical governance. This stream follows an academic collaboration model, and work will likely focus on technical topics in AI governance.
2-3 meetings per week plus regular messaging and collaborative writing.
Green flags include:
Mentor(s) will talk through project ideas with scholar.
The MATS Program is a 10-week research fellowship designed to train and support emerging researchers working on AI alignment, transparency and security. Fellows collaborate with world-class mentors, receive dedicated research management support, and join a vibrant community in Berkeley focused on advancing safe and reliable AI. The program provides the structure, resources, and mentorship needed to produce impactful research and launch long-term careers in AI safety.
MATS mentors are leading researchers from a broad range of AI safety, alignment, governance, field-building and security domains. They include academics, industry researchers, and independent experts who guide scholars through research projects, provide feedback, and help shape each scholar’s growth as a researcher. The mentors represent expertise in areas such as:
Key dates
Application:
The main program will then run from September 28th to December 4th, with the extension phase for accepted fellows beginning in December.
MATS accepts applicants from diverse academic and professional backgrounds - from machine learning, mathematics, and computer science to policy, economics, physics, cognitive science, biology, and public health, as well as founders, operators, and field-builders without traditional research backgrounds. The primary requirements are strong motivation to contribute to AI safety and evidence of technical aptitude, research potential, or relevant operational experience. Prior AI safety experience is helpful but not required.
Applicants submit a general application, applying to various tracks (Empirical, Theory, Strategy & Forecasting, Policy & Governance, Systems Security, Biosecurity, Founding & Field-Building).
In stage 2, applicants apply to streams within those tracks and complete track-specific evaluations.
After a centralized review period, applicants who advance will undergo additional evaluations, depending on the preferences of the streams they've applied to, before final interviews and offers.
For more information on how to get into MATS, please look at this page.