MATS Summer 2026

The Summer 2026 program will run from June through August. It will be the largest MATS program to date, with 120 scholars and 100 mentors. Fellows will be connected with mentors or organizational research groups, such as Anthropic's Alignment Science team, UK AISI, Redwood Research, ARC, and LawZero, to collaborate on a research project over the summer. Some fellows will be offered a 6+ month extension to continue this collaboration.

Program phases

Key dates for the application and admissions timeline

1. Applications

General Application (December 16th to January 18th) 

Applicants fill out a general application, which should take 1-2 hours. Applications are due by January 18th.

Additional Evaluations (Late January through March)

Applicants who advance in the application process go through additional evaluations, including reference checks, coding tests, work tests, and interviews. Which evaluations you undergo depends on the mentors and streams you apply to.

Admissions Decisions (Early April)

Selected applicants are notified of their acceptance and anticipated mentor later in the application cycle.

2. Main Program
3. Extension Phase
4. Post-program

Summer 2026 Streams

MATS supports researchers across a variety of research tracks, including technical governance, empirical, policy & strategy, theory, and compute governance. MATS fellows participate in a research stream consisting of their mentor(s) and other mentees. You can specify which tracks and streams to apply to in the general application. Each stream provides its own research agenda, methodology, and mentorship focus.

London
Empirical
Interpretability

Neel takes a pragmatic approach to interpretability: identify what stands between where we are now and where we want to be by AGI, and then focus on the subset of resulting research problems that can be tractably studied on today's models. This can look like diving deep into the internals of the model, or using simpler black-box methods like reading and carefully intervening on the chain of thought - whatever is the right tool for the job. Projects could involve studying how to detect deception, understanding why a model took a seemingly concerning action, or fixing weak points in other areas of safety, e.g. using interpretability to stop models realising they are being tested. You can learn more about Neel's approach in this podcast.

He has spent far too much of his time mentoring MATS scholars and has worked with ~60 so far - he's excited to take on even more!

Mentorship structure
Desired scholar characteristics
Fellow project selection process
SF Bay Area
Policy & Strategy
Strategy & Forecasting, Policy & Governance

We are interested in mentoring projects in AI forecasting and governance. This work would build on the AI 2027 report to either do more scenario forecasting or explore how to positively affect key decision points, informed by our scenario.

Mentorship structure
Desired scholar characteristics
Fellow project selection process
Grand Rapids
Theory
Agent Foundations

This stream focuses on Agent Foundations research aimed at clarifying the conditions under which humans can justifiably trust artificial intelligence systems.

Mentorship structure
Desired scholar characteristics
Fellow project selection process
SF Bay Area
Empirical
AI Welfare, Scalable Oversight, Compute Infrastructure

Alignment is solved for models in the current paradigm. This shifts the threat model to good old human conflict, so I'm excited about coordination tech (AI cooperation, datacenter workload verification). For aligning future models, we have to forecast what future AGIs will look like and solve issues before they come up. I’m excited about models that maintain their goodness under self-directed learning and can align their successor.

Mentorship structure
Desired scholar characteristics
Fellow project selection process
London
Empirical
Control, Monitoring

This stream focuses on empirical AI control research, including defending against AI-driven data poisoning, evaluating and attacking chain-of-thought monitorability, and related monitoring/red-teaming projects. It is well suited to applicants already interested in AI safety who have solid Python skills and, ideally, prior research experience or familiarity with the control literature and tools (e.g. Inspect/ControlArena).

Mentorship structure
Desired scholar characteristics
Fellow project selection process
SF Bay Area
Empirical
Security, Dangerous Capability Evals

Building realistic defensive cybersecurity benchmarks. Asymmetric Security responds to real cyber incidents and therefore holds data not available in the public domain. We would like to work with MATS scholars to build realistic benchmarks grounded in these real cyber incidents. 

Mentorship structure
Desired scholar characteristics
Fellow project selection process
SF Bay Area
Theory
Interpretability

The Alignment Research Center is a small non-profit research group based in Berkeley, California, that is working on a systematic and theoretically grounded approach to mechanistically explaining neural network behavior. We are interested in scholars with a strong math background and mathematical maturity. If you'd be excited to work on the research direction described in this blog post, we'd encourage you to apply!

Mentorship structure
Desired scholar characteristics
Fellow project selection process
SF Bay Area
Empirical
Control, Model Organisms, Red-Teaming, Scheming & Deception

This coalition of mentors makes up the “megastream”. This stream spans a range of empirical research areas in AI safety on LLMs, including AI control, scalable oversight, model organisms, model internals, model welfare, security, and more. You'll be pitched, and have the option to pitch, a variety of safety research projects, and then be matched to projects and mentors based on your research interests and preferences and what you'd like to get out of MATS. Scholars in this stream frequently receive funding and continued mentorship after MATS to complete their research project, usually leading to a (co-)first-author paper. People in this stream often end up in long-term homes for safety research after MATS (e.g. Anthropic, Redwood Research, OpenAI).

Megastream mentors share an application, tend to collaborate and co-mentor projects together, and generally share infrastructure to streamline the scholar experience. By applying to this stream, you are being considered for all of the megastream mentors. In the application process, you can indicate particular mentors you are interested in working with.

Mentorship structure
Desired scholar characteristics
Fellow project selection process
London
Empirical
Interpretability, Red-Teaming, Monitoring

Arthur Conmy's MATS Stream focuses on evaluating interpretability techniques on current and future AI Safety problems.

This can involve creating new safety techniques, as well as creating benchmarks and measuring performance against baseline techniques.

Mentorship structure
Desired scholar characteristics
Fellow project selection process
SF Bay Area
Technical Governance
Strategy & Forecasting, Policy & Governance

In the face of disaster, I predict the government will be forced to play insurer of last resort, whether for a particular lab or for society at large. (See this, for example.) If designed well, I believe a federal insurance backstop could internalize catastrophic negative externalities; if designed poorly, it will simply be a subsidy for AI companies. I want to design the good version, so we have it ready.

I encourage people with inverse game theory (mechanism design) expertise to apply, but don't be deterred if you don't have this expertise.

Mentorship structure
Desired scholar characteristics
Fellow project selection process
SF Bay Area
Empirical
Interpretability

This stream focuses on representations that underlie how language models generalize, for example representations of personas, goals, or training data components.

Mentorship structure
Desired scholar characteristics
Fellow project selection process
SF Bay Area
Empirical
Interpretability, Model Organisms, Red-Teaming, Safeguards, Scheming & Deception

We study applications of singular learning theory (SLT) to AI safety, with a focus on interpretability and alignment. Ideal candidates have a strong technical background in mathematics, physics, computer science, or biology, and aren't afraid to get their hands dirty with ML experiments. We don't expect you to have deep expertise in SLT, but a shallow familiarity will help.

Mentorship structure
Desired scholar characteristics
Fellow project selection process
New York City
Empirical
Biorisk, Security, Dangerous Capability Evals

I have two broad areas of interest.

Security:

I am interested in building demonstrations for hacking real-world AI deployments to show that they are not secure. The goal is to force companies to invest in alignment techniques that can solve the underlying security issues.

Benchmarks:

I am interested in building benchmarks to determine how generalizable modern LLM techniques actually are, now that we are no longer in the pre-training scaling era.

Mentorship structure
Desired scholar characteristics
Fellow project selection process
London
Empirical
Control, Monitoring, Safeguards, Dangerous Capability Evals, Scheming & Deception

This stream will focus on monitoring, stress-testing safety methods, and evals, with a focus on risks from scheming AIs. Examples include (black-box) AI control techniques, white-box monitors (probes etc.), chain-of-thought monitoring/faithfulness, building evaluation environments, and stress-testing mitigations.

Mentorship structure
Desired scholar characteristics
Fellow project selection process
London
Policy & Strategy
AI Welfare, Strategy & Forecasting, Policy & Governance

AI macrostrategy: strategic questions about how the transition to advanced AI will happen, and what we can do now to prepare for it.

Topics of interest include better futures, power concentration, takeoff speeds, deals with AIs, space governance, and acausal trade.

Mentorship structure
Desired scholar characteristics
Fellow project selection process
SF Bay Area
Policy & Strategy
Policy & Governance

We're mostly interested in supervising governance research focused on international AI governance, and particularly inter-state collaboration and/or avoidance of misunderstandings with respect to very advanced AI system development and deployment. 

Mentorship structure
Desired scholar characteristics
Fellow project selection process
Washington, D.C.
Compute Infrastructure
Compute Infrastructure, Security

In this project, we will explore GPU side-channel attacks to extract information about model usage. A simple example is to observe (via radio, power fluctuations, acoustics, etc.) which experts were used in each forward pass of an MoE model, then use those observations to guess which tokens were produced.
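As a rough illustration of that last idea (a hypothetical sketch, not part of the stream description), the toy Python snippet below assumes the attacker has already profiled which experts each vocabulary token tends to route to, observes the set of experts activated on each forward pass via the side channel, and matches the two to guess tokens. All names, data, and routing behavior here are illustrative.

# Hypothetical toy sketch of the MoE side-channel idea described above:
# observe which experts fire on each forward pass, then use a pre-built
# token -> typical-expert-set table to guess which tokens were produced.
# All data below is made up for illustration; it is not a real attack.

from typing import Dict, FrozenSet, List

# Assumed attacker knowledge: which experts each token usually routes to.
# In practice this would have to be estimated by profiling the target model.
ROUTING_TABLE: Dict[str, FrozenSet[int]] = {
    "the": frozenset({0, 3}),
    "cat": frozenset({1, 5}),
    "sat": frozenset({2, 3}),
    "password": frozenset({4, 7}),
}

def candidate_tokens(observed_experts: FrozenSet[int]) -> List[str]:
    """Return tokens whose typical expert set matches one observation."""
    return [tok for tok, experts in ROUTING_TABLE.items()
            if experts == observed_experts]

def guess_sequence(observations: List[FrozenSet[int]]) -> List[List[str]]:
    """For each forward pass, list the tokens consistent with the leaked
    expert-activation pattern (e.g. recovered from power or RF traces)."""
    return [candidate_tokens(obs) for obs in observations]

if __name__ == "__main__":
    # Pretend the side channel revealed these expert sets per forward pass.
    leaked = [frozenset({0, 3}), frozenset({1, 5}), frozenset({2, 3})]
    print(guess_sequence(leaked))  # [['the'], ['cat'], ['sat']]

A real attack would of course contend with noisy observations and tokens sharing expert patterns, but the structure (profile, observe, match) is the same.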

Mentorship structure
Desired scholar characteristics
Fellow project selection process
New York City
Empirical
Monitoring, Dangerous Capability Evals, Scalable Oversight, Safeguards

I'm interested in mentoring projects related to reward hacking and monitoring (agentic) models that produce long and complex trajectories. Scholars will have the freedom to propose projects within this scope. Expect 30-60 minutes of 1:1 time on Zoom.

Mentorship structure
Desired scholar characteristics
Fellow project selection process
London
Empirical
Control, Monitoring, Red-Teaming, Scalable Oversight, Scheming & Deception
Mentorship structure
Desired scholar characteristics
Fellow project selection process
Washington, D.C.
Technical Governance
Policy & Governance, Compute Infrastructure

Janet Egan will mentor scholars working on policy-relevant questions at the intersection of AI compute, geopolitics, and infrastructure. Potential projects include analyzing remote access to AI chips (e.g., via cloud providers in China), mapping and interpreting the global buildout of AI data centers and energy infrastructure, and developing politically informed strategies for US–China cooperation on AI risk. The mentee will lead their research project with weekly guidance, feedback, and optional career and policy insights.

Mentorship structure
Desired scholar characteristics
Fellow project selection process

Community at MATS

The MATS Research phase provides scholars with a community of peers.

Scholars work out of a shared office and are supported by the Community Team.

MATS alumni report that the connections with peers they made during MATS have had the largest impact on them years later. Our full-time Community Team works to facilitate these connections and also provides general well-being support. Weekly lightning talks, scholar-led discussion groups, game nights, and outings to SF are some examples of MATS events.