MATS is an independent research and educational seminar program that connects talented scholars with top mentors in the fields of AI alignment, interpretability, and governance. The main goal of MATS is to help scholars develop as AI alignment researchers.

Updated: December 10, 2025
Location: Berkeley
Job description
Applications will be reviewed on a rolling basis. Apply here by October 17 for priority.
MATS Research aims to find and train talented individuals for what we see as the world's most urgent and talent-constrained problem: reducing risks from advanced artificial intelligence. Our mission is to maximize the impact and development of emerging researchers through mentorship, guidance, and resources. We believe that ambitious researchers from a variety of backgrounds have the potential to contribute to the fields of AI alignment, control, security, and governance research. Through our research fellowship, we aim to provide the mentorship, financial support, and community needed to help them transition into these fields. Please see our website for more details.
We are generally looking for candidates who:
As a Program Manager, you will design and run the systems that identify, select, and support high-potential AI safety and security researchers. You'll lead our application and selection processes; build scalable assessments; coordinate mentors, reviewers, and partners; and maintain the infrastructure that keeps the program running smoothly. You'll analyze program outcomes and ecosystem talent needs to make recommendations and help shape organizational strategy. Depending on your strengths, you may specialize; for example, you could take point as Program Talent Manager, focused on sourcing and selection.
Your day-to-day work will change depending on the phase of the program, so at any given time you may be focused on running the MATS application process, managing scholar assessments, or analyzing program outcomes. You will coordinate with other MATS team members, mentors, and external stakeholders to see projects through to successful completion.
We welcome applications from individuals with diverse backgrounds, and we strongly encourage you to apply if you fit into at least one of these profiles:
If you do not fit into one of these profiles but think you could be a good fit for this role, we are still excited for you to apply!
Depending on the comparative strength of applicants, we may hire candidates to focus on a subset of these responsibilities. In particular, a candidate with strong fit for running the MATS application process may focus on that as their sole responsibility with the title Program Talent Manager.
We expect especially strong applicants to have deep experience in at least one of the following areas:
Visa sponsorship may be possible but is not guaranteed; we encourage all interested candidates to apply.
Compensation will be $130,000-$200,000 annually, depending on experience and location.
40 hours per week. Successful candidates can expect to spend most of their time working in-person from our main office in Berkeley, California. We are open to alternative working arrangements for exceptional candidates.
Please fill out the form here. If you use an LLM chatbot or other AI tools in this application, please follow these norms. Applications will be reviewed on a rolling basis, with priority given to candidates who apply by October 17. The anticipated start date for this role is December 1.
MATS is committed to fostering a diverse and inclusive work environment at the forefront of our field. We encourage applications from individuals of all backgrounds and experiences.
Join us in shaping the future of AI safety and security research!
The MATS Program is an independent research and educational initiative connecting emerging researchers with mentors in AI alignment, governance, and security.
Each MATS cohort runs for 12 weeks in Berkeley, California, followed by an optional 6–12 month extension in London for selected scholars.