MATS Autumn 2026

The Autumn 2026 program will run for 10 weeks in Berkeley, CA and London, UK from September 28th to December 4th. Fellows will receive mentorship from world-class researchers at organizations like Anthropic, Google DeepMind, OpenAI, Redwood Research, and ARC, with the option to apply for a 6–12 month funded extension beyond the main program. For the first time, we are running Founding & Field-Building and Biosecurity tracks.

Applications are now open. Apply by June 7th.

Program phases

Key dates for the application and admissions timeline

1. Applications

General Application (May 12th to June 7th) 

Applicants fill out a general application, selecting one or more tracks, which should take 1–2 hours. Applications are due by June 7th, end of day, anywhere on Earth (AoE).

Additional Evaluations (June 7th to late July)

After an initial evaluation, applicants will apply to individual streams listed below. Applicants then undergo a variety of track-specific evaluations, including coding tests, writing reviews, work tests, and interviews. Which evaluations you will undergo depends on the tracks, streams, and mentors you apply to.

Admissions Decisions (Late July to early August)
Selected applicants are notified of their acceptance and anticipated mentor later in the application cycle.

Autumn 2026 Timeline:

2. Main Program
3. Extension Phase
4. Post-program

Autumn 2026 Streams

In stage one, you apply to one or more tracks (broad research areas): Empirical, Theory, Strategy & Forecasting, Policy & Governance, System Security, Biosecurity, and Founding & Field-Building. In stage two, advancing applicants choose specific streams within those tracks, each led by one or more mentors with their own research agenda. You can view this list as a grid here.

Additional streams will be added over the course of May.


Projects in this stream cluster into a few broad areas from the empirical track: scalable oversight, AI control, monitorability and interpretability, adversarial robustness, and security.

Most fellows will work closely with one or two mentors on something that fits into the mentors' ongoing research. The above list of mentors is tentative.

Empirical

The Redwood Research stream is looking for fast empirical iterators and strategists to work on control research.

Empirical

This stream will focus on monitoring, stress-testing safety methods, and evaluations, with an emphasis on risks from scheming AIs. Examples include (black-box) AI control techniques, white-box monitors (probes, etc.), chain-of-thought monitoring/faithfulness, building evaluation environments, and stress-testing mitigations.

Founding and Field-Building

Eleos AI Research is a nonprofit working to ensure the interests of AI systems are appropriately taken into account as we navigate transformative AI. We are looking for competent generalists to help scale up our AI welfare field-building by running high-impact events, or to otherwise amplify our research, communications, and operational capacity.

Biosecurity

Therapeutics may have durable advantages over pathogens even in the limit of technological progress. How can therapeutic development and manufacturing be made resilient under biorisk scenarios? How can AI progress be maximally leveraged for defense?

Biosecurity

This stream will work on projects that empirically assess national security threats of AI misuse (CBRN terrorism and cyberattacks) and improve dangerous capability evaluations. Threat modeling applicants should have a skeptical mindset, enjoy case study work, and be strong written communicators. Evals applicants should be able and excited to help demonstrate concepts like sandbagging and elicitation gaps in an AI misuse context.

Policy and Governance

This stream will focus on impact-oriented technical AI governance research, potentially including research on open-weight models, applied AI safeguards, AI incidents, and technically rigorous AI policy.

Empirical

In the shard theory stream, we create qualitatively new methods and fields of inquiry, from steering vectors to gradient routing to unsupervised capability elicitation to robust unlearning. If you're theory-minded, maybe you'll help us formalize shard theory itself.

Founding and Field-Building

This stream covers forecasting, real-world applications of world modeling with LLMs, formal and semiformal verification, capability evals design, coordination mechanisms, collective intelligence applications, and gradual disempowerment.

Empirical

This stream focuses on loss-of-control research.


Community at MATS

The MATS research phase provides scholars with a community of peers.

Scholars work out of a shared office and are supported by the Community Team.

MATS alumni report that the connections with peers that they made during MATS have had the largest impact on them years later. Our full-time Community Team works to facilitate these connections and also provide general well-being support. Weekly lightning talks, scholar-led discussion groups, game nights, and outings to SF are some examples of MATS events.