The Autumn 2026 program will run for 10 weeks in Berkeley, CA and London, UK, from September 28th to December 4th. Fellows will receive mentorship from world-class researchers at organizations like Anthropic, Google DeepMind, OpenAI, Redwood Research, and ARC, with the option to apply for a 6–12 month funded extension beyond the main program. For the first time, we are running Founding & Field-Building and Biosecurity tracks.
Applications are now open. Apply by June 7th.

Key dates for the application and admissions timeline
General Application (May 12th to June 7th)
Applicants fill out a general application to one or more tracks, which should take 1-2 hours. Applications are due by end of day on June 7th, anywhere on Earth (AoE).
Additional Evaluations (June 7th to late July)
After an initial evaluation, applicants apply to the individual streams listed below. Applicants also undergo a variety of track-specific evaluations, including coding tests, writing reviews, work tests, and interviews. Which evaluations you undergo depends on the tracks, streams, and mentors you apply to.
Admissions Decisions (Late July to early August)
Selected applicants are notified of their acceptance and anticipated mentor later in the application cycle.

The main program takes place from September 28th to December 4th, 2026. It is an intensive research phase in which fellows work full-time on a research project in AI alignment, security, field-building, or governance. Fellows' research directions will typically be chosen through a collaborative process with their mentors, and fellows are expected to develop their own independent research directions as the program continues.
While mentor support will vary depending on the project and mentor, mentors are expected to spend at least 1 hour/week working with each of their fellows, and some spend much more time. Fellows will also receive support from MATS’s Research Management team, who help to scope out and structure research directions. Depending on which stream you participate in, you may collaborate with other fellows in your stream.
By the middle of the program, fellows will be expected to write a report on their project’s threat model, theory of change, and deliverables. At the end of the program, fellows will be expected to have a tangible research output. In past cohorts, this has involved presenting at a fellows’ symposium on work conducted over the course of MATS.
Educational seminars and workshops will be held 2-3 times per week. Previously, speakers have included Buck Shlegeris from Redwood Research, Adam Gleave from FAR AI, Neel Nanda from Google DeepMind, William Saunders from OpenAI, Andrew Critch from CHAI, Lennart Heim from GovAI, Ajeya Cotra from Open Philanthropy, and more.
The extension phase starts in December of 2026, soon after the end of the main program. Fellows who demonstrate promise as independent researchers during the main program can apply for the MATS extension phase. Acceptance into the extension is based on mentor evaluation and MATS review of proposed research.
In recent cohorts, ~80% of fellows who applied have been accepted. The extension phase offers an additional 6 months of funding by default, with the ability to later apply for a 6-month continuation.
Extension fellows primarily work from the MATS London or Berkeley offices, with the possibility of working from other AI safety hubs or fully remotely. For accepted extension fellows, MATS arranges funding for stipends and housing ($7,680/month), as well as for compute ($8,000/month), creating a seamless transition into this advanced phase of the program.
MATS aims to accelerate researchers who will:
MATS alumni have gone on to publish safety research, join alignment organizations, including Anthropic and MIRI, and found an alignment research lab. You can read more about MATS alumni here.
In stage one, you apply to one or more tracks (broad research areas): Empirical, Theory, Strategy & Forecasting, Policy & Governance, System Security, Biosecurity, and Founding & Field-Building. In stage two, advancing applicants choose specific streams within those tracks, each led by one or more mentors with their own research agenda. You can view this list as a grid here.
Additional streams will be added over the course of May.
Projects in this stream cluster into a few broad areas from the empirical track: scalable oversight, AI control, monitorability and interpretability, adversarial robustness, and security.
Most fellows will work closely with one or two mentors on something that fits into the mentors' ongoing research. The above list of mentors is tentative.
Essential:
Preferred (at least one of):
The Redwood Research stream is looking for fast empirical iterators and strategists to work on control research.
Depending on the mentor:
We are looking for people who are:
We will assign projects by default, but are open to being pitched on projects.
This stream will work on monitoring, stress-testing safety methods, and evals, with a focus on risks from scheming AIs. Examples include (black-box) AI control techniques, white-box monitors (probes, etc.), chain-of-thought monitoring/faithfulness, building evaluation environments, and stress-testing mitigations.
We are looking for scholars with strong machine learning engineering skills, as well as a background in technical research. While we’ll provide weekly guidance on research, we expect scholars to be able to run experiments and decide on low-level details fairly independently most of the time. We’ll propose concrete projects to choose from, so you should not expect to work on your own research idea during MATS. We strongly encourage collaboration within the stream, so you should expect to work in teams of 2-3 scholars on a project; good communication and teamwork skills are therefore important.
Eleos AI Research is a nonprofit working to ensure the interests of AI systems are appropriately taken into account as we navigate transformative AI. We are looking for competent generalists to help scale up our AI welfare field-building by running high-impact events, or to otherwise amplify our research, communications, and operational capacity.
By default, I expect to meet with each fellow for at least 60 minutes per week (possibly split into two 30-minute meetings). For projects that require a lot of input (e.g. event organizing), we can scale that up as needed. I’ll also be available for ad hoc meetings, and can be reached asynchronously on Slack or by email. At Eleos, we each post a daily update in Slack; I would like fellows to do the same.
The ideal candidate:
We'll meet at the start of the program to discuss ideas in the areas listed above, as well as ideas that fellows would like to pitch. We’ll jointly decide on a project that aligns with Eleos’s priorities as well as the goals and skills of the fellow.
Therapeutics may have durable advantages over pathogens even in the limit of technological progress. How can therapeutic development and manufacturing be made resilient under biorisk scenarios? How can AI progress be maximally leveraged for defense?
I expect we will spend some time at the beginning scoping out a project that is a good fit for the fellow's background and interests. Then, project supervision will depend strongly on the nature of the project. Generally, I expect the fellow to take ownership of the work, with regular mentorship and feedback to keep the project on track and help resolve challenges as they arise.
Essential:
Preferred:
Not a good fit:
We will jointly define the exact project with the fellow, based on their background, interests, and comparative strengths, as well as our current priorities. I expect strong fellows may have their own questions and ideas, but we will provide substantial guidance early on to help turn those ideas into a clear, useful, and realistically scoped project.
In the first phase, we will discuss several possible directions, identify where the fellow can make the strongest contribution, and agree on concrete outputs. Once the project is scoped, I expect the fellow to take ownership of the work, with regular mentorship and feedback to keep it aligned and help overcome any difficulties.
This stream will work on projects that empirically assess national security threats from AI misuse (CBRN terrorism and cyberattacks) and improve dangerous capability evaluations. Threat modeling applicants should have a skeptical mindset, enjoy case study work, and be strong written communicators. Eval applicants should be able and excited to help demonstrate concepts like sandbagging and elicitation gaps in an AI misuse context.
This stream will focus on impact-oriented technical AI governance research, potentially including work on open-weight models, applied AI safeguards, AI incidents, technically rigorous AI policy, etc.
By default, expect 2-3 full-group meetings per week, plus ad hoc project-specific meetings.
Green flags include:
I will work with MATS scholars to iteratively refine project ideas in whatever area our interests and skills overlap. Above all, project selection will hinge on having a clear (and good) theory of impact.
In the shard theory stream, we create qualitatively new methods and fields of inquiry, from steering vectors to gradient routing to unsupervised capability elicitation to robust unlearning. If you're theory-minded, maybe you'll help us formalize shard theory itself.
We will have weekly 1:1s and a weekly team lunch, as well as asynchronous communication over Slack. Mentees are always welcome to reach out whenever guidance is needed outside of the usual meeting times.
Scholars should mostly figure things out on their own outside of meetings.
Ideal candidates would have:
Mentor(s) will talk through project ideas with the scholar.
Forecasting, real-world applications of world modeling with LLMs, formal & semiformal verification, capability evals design, coordination mechanisms, collective intelligence applications, or gradual disempowerment.
A weekly 30-minute to 2-hour call, depending on need, meeting in person if in the Bay Area or London. Very responsive over Slack.
Interest in moving from theory to applied work, especially on coordination, negotiation, and forecasting. Projects that could draw on resources from Metaculus and/or the AI Objectives Institute for real-world applications would be a great fit. Formal verification and specification knowledge would be a great match, but is not required. I am not personally interested in pursuing paper publications, but if the fellow is interested in this, I am happy to support them.
I am more interested in the fellow being passionate and excited about the project direction, and in believing that I am the right mentor to support them with my knowledge base.
This stream focuses on loss-of-control research.
Working with me doc: https://docs.google.com/document/d/18sRsk2e_JZUehm70OT9sFWGNG1zs7WLhDa-k-xAToSk/edit?tab=t.0#heading=h.tcuhf2n9v6we
The GDM stream will propose and distribute projects.
The MATS research phase provides fellows with a community of peers.

Fellows work out of a shared office and are supported by the Community Team.
MATS alumni report that the peer connections they made during MATS have had the largest impact on them years later. Our full-time Community Team works to facilitate these connections and provide general well-being support. Weekly lightning talks, fellow-led discussion groups, game nights, and outings to SF are some examples of MATS events.