MATS is an independent research and educational seminar program that connects talented scholars with top mentors in the fields of AI alignment, interpretability, and governance. The main goal of MATS is to help scholars develop as AI alignment researchers.

Updated
December 10, 2025
Location
Berkeley, London
Job description
MATS Research aims to find and train talented individuals for what we see as the world's most urgent and talent-constrained problem: reducing risks from advanced artificial intelligence. Our mission is to maximize the impact and development of emerging researchers through mentorship, guidance, and resources. We believe that ambitious researchers from a variety of backgrounds have the potential to contribute to the fields of AI alignment, control, security, and governance research. Through our research fellowship, we aim to provide the mentorship, financial support, and community necessary to aid this transition. Please see our website for more details.
We are generally looking for candidates who:
As a Research Manager, you will play a crucial role in accelerating and guiding AI safety and security researchers, facilitating high-impact projects, and contributing to the overall success of our program. This role offers a unique opportunity to develop your skills, make a significant contribution to the field of AI safety, and work with top researchers from around the world.
Your day-to-day work will involve talking with both scholars and mentors to understand the needs and direction of their projects. This may involve providing feedback on papers, becoming integrated into the research team in a supporting role, and/or ensuring that there is a robust plan to get from where the project is now to where it needs to be. Longer-term, MATS Research Managers have grown to lead their own teams of Research Managers and spearhead major initiatives at MATS.
We are excited about candidates who can augment their work as a Research Manager by drawing on pre-existing expertise in one or more of the following domains:
We welcome applications from individuals with diverse backgrounds, and we strongly encourage you to apply if you fit into at least one of these profiles:
If you do not fit one of these profiles but think you could be a good match for the role, we still encourage you to apply!
We expect especially strong applicants to have deep experience in at least one of the following areas:
Visa sponsorship may be possible but is not guaranteed; we encourage all interested candidates to apply.
Compensation will be $130,000-$200,000 annually for Berkeley roles and £85,000-£130,000 for London roles, depending on experience and location.
40 hours per week. Successful candidates can expect to spend most of their time working in-person from our main offices. Research Managers will be based either in person at our office in Berkeley, California, or in person at our office in Old Street, London. We are open to hybrid working arrangements for exceptional candidates.
Please fill out the form here. In your application, please indicate your preference for being based in Berkeley or in London. If you use an LLM chatbot or other AI tools in this application, please follow these norms. Applications will be reviewed on a rolling basis, with priority given to candidates who apply by October 17th. The anticipated start date for this role is December 1st.
MATS is committed to fostering a diverse and inclusive work environment at the forefront of our field. We encourage applications from individuals of all backgrounds and experiences.
Join us in shaping the future of AI safety and security research!
Each MATS cohort runs for 12 weeks in Berkeley, California, followed by an optional 6–12 month extension in London for selected scholars.