“This place feels like a home to me - a home for the whole AI Safety community, really. I joined MATS… as a research manager and it's been an extremely rewarding professional experience. My learning, personal growth, and positive impact per unit of time have been way higher than I anticipated. I'm so proud to have been part of what MATS is building - it really is a force for good in the world!”
Avery Griffin, Research Manager, 2024–2025
Join our team
The ML Alignment & Theory Scholars (MATS) program aims to find and train talented individuals for what we see as the world’s most urgent and talent-constrained problem: reducing risks from unaligned artificial intelligence. We believe that ambitious young researchers from a variety of backgrounds have the potential to contribute to the field of AI alignment research.
We aim to provide the mentorship, curriculum, financial support, and community necessary to facilitate this contribution. Please see our theory of change for more details.
Don’t see an open role that’s the right fit? If you’re excited about joining us, we’d still love to hear from you—fill out our Expression of Interest form.
Applications are open for the following roles: Research Manager, Program Manager, and Community Manager.
Research Manager
Applications will be reviewed on a rolling basis. Apply [here] by October 17 for priority.
About The Organization
MATS Research aims to find and train talented individuals for what we see as the world’s most urgent and talent-constrained problem: reducing risks from advanced artificial intelligence. Our mission is to maximize the impact and development of emerging researchers through mentorship, guidance, and resources. We believe that ambitious researchers from a variety of backgrounds have the potential to contribute to the fields of AI alignment, control, security, and governance research. Through our research fellowship, we aim to provide the mentorship, financial support, and community necessary to aid this transition. Please see our website for more details.
We are generally looking for candidates who:
Are excited to work in a fast-paced environment and are comfortable switching responsibilities and projects as the needs of MATS change
Are self-motivated and can take on new responsibilities within MATS over time
The Role
As a Research Manager, you will play a crucial role in accelerating and guiding AI safety & security researchers, facilitating high-impact projects, and contributing to the overall success of our program. This role offers a unique opportunity to develop your skills, play a significant role in the field of AI safety, and work with top researchers from around the world.
Your day-to-day will involve talking to both scholars and mentors to understand the needs and direction of their projects. This may involve providing feedback on papers, becoming integrated into the research team in a supporting role, and/or ensuring that there is a robust plan to get from where the project is now to where it needs to be. Longer-term, MATS Research Managers have gone on to lead their own teams of Research Managers and spearhead major initiatives at MATS.
We are excited about candidates who can augment their work as a Research Manager by drawing on pre-existing expertise in one or more of the following domains:
Theory - providing informed feedback to scholars on research directions and theories of impact, and helping MATS assess new program offerings.
Engineering - helping scholars to become stronger research engineers, and building out the internal tooling of MATS.
Management - providing scholars with structure and accountability for their research, supporting mentors in people management, and helping pioneer improvements to MATS systems and infrastructure.
Communication - helping scholars to present their research in more compelling ways to influential audiences, contributing to research publications, and improving how MATS communicates its work.
Who We're Looking For
We welcome applications from individuals with diverse backgrounds and strongly encourage you to apply if you fit into at least one of these profiles:
Experienced professionals in the fields of technical research, governance research, and/or research management
Experienced team managers, especially those with technical or governance backgrounds
AI safety & security professionals looking to apply and further develop their management skills, diversify their experience, and/or broaden their impact across a spectrum of projects
Professionals experienced in coaching and/or managing people in a supportive way, especially in the context of supporting technical researchers and/or scientists
Engineering product/project managers from tech-focused organizations or a STEM industry
If you do not fit into one of these profiles but think you could be a good fit, we are still excited for you to apply!
Key Responsibilities
Work with world-class academic & industry mentors to:
Help their AI safety mentees (MATS scholars) stay productive and grow in their research skills
Drive AI safety research projects from conception to completion
Facilitate coordination and collaboration between scholars, mentors, and other collaborators
Organize and lead efficient research meetings
Work with developing AI safety researchers to:
Provide guidance and feedback on research directions, writeups, and career planning
Connect them with relevant resources and domain experts to support their research
Contribute to the strategic planning and development of MATS:
Spearhead internal projects - past projects have included the design of program deliverables and development of upskilling resources
Build, improve, and maintain the systems and infrastructure that MATS requires to run efficiently
Provide input into strategy discussions
Essential Qualifications and Skills
2-5 years' experience across a combination of the following:
Project management
Research management (not necessarily technical)
People management
Technical research
Governance or policy work
Mentoring
Excellent communication skills, both verbal and written
Strong listening skills and empathy
Strong critical thinking and problem-solving abilities
Ability to explain complex concepts clearly and concisely
Proactive and self-driven work ethic
Capacity to surface and resolve blockers quickly and efficiently
Enthusiasm for working with others, including stakeholder management
Strong alignment with our mission, and familiarity with the basic ideas behind AGI safety and catastrophic AI risk
Desirable Qualifications and Skills
We expect especially strong applicants to have deep experience in at least one of the following areas:
Demonstrated excellence in managing teams of people and/or multi-stakeholder projects
Deep familiarity with some subset of AI safety concepts and research
Experience in ML engineering, software engineering, or related technical fields
Experience in technical AI safety research, AI policy, security, or governance
Background in coaching or professional development
PhD or similarly extensive academic research experience
What We Offer
Opportunity to help accelerate and launch the careers of highly talented early-career AI Safety researchers
Opportunity to be intimately involved in impactful research projects: some Research Managers have been acknowledged as co-authors on the conference papers they supported
Chance to work with and learn from top researchers in the field
Professional development and skill enhancement opportunities, e.g. deep-dives into bleeding-edge AI tools for research acceleration
Collaborative and intellectually stimulating work environment
Access to our office spaces in Berkeley, London, and other partnered office spaces
Medical insurance, including dental and vision
Visa support
Meals provided Monday-Friday
Compensation
Compensation will be $135,000-$200,000 annually, depending on experience and location.
Working hours and location
40 hours per week. Successful candidates can expect to spend most of their time working in person from our main offices. Research Managers will be based in person either at our office in Berkeley, California, or at our office in Old Street, London. We are open to hybrid working arrangements for exceptional candidates.
How to Apply
Please fill out the form [here]. If you use an LLM chatbot or other AI tools in this application, please follow these norms. Applications will be reviewed on a rolling basis, with priority given to candidates who apply by October 17th. The anticipated start date for this role is December 1st.
MATS is committed to fostering a diverse and inclusive work environment at the forefront of our field. We encourage applications from individuals of all backgrounds and experiences.
Join us in shaping the future of AI safety & security research!
Program Manager
Applications will be reviewed on a rolling basis. Apply [here] by October 17 for priority.
About The Organization
MATS Research aims to find and train talented individuals for what we see as the world’s most urgent and talent-constrained problem: reducing risks from advanced artificial intelligence. Our mission is to maximize the impact and development of emerging researchers through mentorship, guidance, and resources. We believe that ambitious researchers from a variety of backgrounds have the potential to contribute to the fields of AI alignment, control, security, and governance research. Through our research fellowship, we aim to provide the mentorship, financial support, and community necessary to aid this transition. Please see our website for more details.
We are generally looking for candidates who:
Are excited to work in a fast-paced environment and are comfortable switching responsibilities and projects as the needs of MATS change
Are self-motivated and can take on new responsibilities within MATS over time
The Role
As a Program Manager, you will design and run the systems that identify, select, and support high-potential AI safety & security researchers. You’ll lead our application and selection processes; build scalable assessments; coordinate mentors, reviewers, and partners; and maintain the infrastructure that keeps the program running smoothly. You’ll analyze program outcomes and ecosystem talent needs to make recommendations and help shape organizational strategy. Depending on your strengths, you may specialize—e.g., take point as Program Talent Manager focused on sourcing and selection.
Your day-to-day will change depending on the phase of the program, so at any given time you may be focusing on running the MATS application process, managing scholar assessments, or analyzing program outcomes. You will coordinate with other MATS team members, mentors, and external stakeholders to see projects through to successful completion.
Who We're Looking For
We welcome applications from individuals with diverse backgrounds, and we strongly encourage you to apply if you fit into at least one of these profiles:
Professionals with previous experience in AI safety & security research (e.g., research program staff, researchers, project managers, university group leaders, etc.)
Project or hiring managers from tech-focused organizations or a STEM industry
Entrepreneurs with experience working on small, fast-scaling teams
If you do not fit into one of these profiles but think you could be a good fit for this role, we are still excited for you to apply!
Key Responsibilities
Lead the MATS application process to identify and attract strong researchers for our program
Coordinate with mentors, reviewers, and other team members to establish bespoke selection processes for different groups of mentors
Manage our applications infrastructure, either building on our pre-existing Airtable + Fillout stack, or proposing a migration to a different system
Build workflows for scalable applicant assessments, like code screenings, work tests, or other evaluations
Develop a talent sourcing strategy to inform our outreach and advertising, ensuring we reach the best potential applicants
Perform impact analyses to evaluate the outcomes of the MATS program and identify opportunities for improvement
Determine what information should be collected for impact analysis and coordinate with team members to collect it
Analyze the collected data to draw conclusions about program outcomes
Communicate results and recommendations to internal and external stakeholders
Gather and analyze information about the talent needs of the AI safety & security ecosystem
Make connections with hiring managers, grantmakers, or others who can share context on current talent needs
Communicate insights to MATS leadership to inform the direction of the organization
Design and coordinate programming, such as seminars and workshops, during the MATS program
Ensure effective communication with mentors and scholars about program logistics and responsibilities
Contribute to the strategic planning and development of MATS
Spearhead other miscellaneous internal projects
Build, improve, and maintain the systems and infrastructure that MATS requires to run efficiently
Provide input into strategy discussions
Depending on the comparative strength of applicants, we may hire candidates to focus on a subset of these responsibilities. In particular, a candidate with strong fit for running the MATS application process may focus on that as their sole responsibility with the title Program Talent Manager.
Essential Qualifications and Skills
2-5 years' experience across a subset of the following:
Project management
Program management
Event management
Hiring or other talent search
Broad familiarity with AI safety concepts and the landscape of actors in the AI ecosystem, especially as necessary for strategic decisions
Excellent communication skills, both verbal and written
Strong critical thinking and problem-solving abilities
Ability to analyze data to draw conclusions
Ability to explain complex concepts clearly and concisely
Proactive and self-driven work ethic
Strong alignment with our mission of reducing risk from the development of advanced AI
Desirable Qualifications and Skills
We expect especially strong applicants to have deep experience in at least one of the following areas:
Demonstrated excellence in project or people management
Experience in technical AI safety research, AI policy, security, or governance
Experience in hiring for technical or research roles
Leadership experience
Proficiency with database and form software like Airtable and Fillout
Familiarity with Squarespace or web design
Proficiency with AI tools, like Claude Code or Cursor, for hacking together tools or automating workflows
What We Offer
High-leverage opportunity to identify and help launch the careers of talented AI safety & security researchers
Professional development and skill enhancement opportunities
Collaborative and intellectually stimulating work environment
Competitive salary (see below)
Access to our office spaces in Berkeley, London, and other partnered office spaces
Medical, dental, vision, and life insurance
Traditional and Roth 401(k) options
Visa support
Meals provided Monday-Friday
Compensation
Compensation will be $130,000-$200,000 annually, depending on experience and location.
Working hours and location
40 hours per week. Successful candidates can expect to spend most of their time working in person from our main office in Berkeley, California. We are open to alternative working arrangements for exceptional candidates.
How to Apply
Please fill out the form [here]. If you use an LLM chatbot or other AI tools in this application, please follow these norms. Applications will be reviewed on a rolling basis, with priority given to candidates who apply by October 17th. The anticipated start date for this role is December 1st.
MATS is committed to fostering a diverse and inclusive work environment at the forefront of our field. We encourage applications from individuals of all backgrounds and experiences.
Join us in shaping the future of AI safety & security research!
Community Manager
Applications will be reviewed on a rolling basis. Apply [here] by October 17 for priority.
About The Organization
MATS Research aims to find and train talented individuals for what we see as the world’s most urgent and talent-constrained problem: reducing risks from advanced artificial intelligence. Our mission is to maximize the impact and development of emerging researchers through mentorship, guidance, and resources. We believe that ambitious researchers from a variety of backgrounds have the potential to contribute to the fields of AI alignment, control, security, and governance research. Through our research fellowship, we aim to provide the mentorship, financial support, and community necessary to aid this transition. Please see our website for more details.
We are generally looking for candidates who:
Are excited to work in a fast-paced environment and are comfortable switching responsibilities and projects as the needs of MATS change
Are self-motivated and can take on new responsibilities within MATS over time
We are seeking compassionate individuals looking to apply their interpersonal skills in the field of AI alignment. As a Community Manager on the MATS team, you will help build a safe, supportive, and nurturing environment where researchers can flourish. You will work with the Community Lead to support scholar welfare, foster community connections, and enhance the MATS experience. Ideal candidates have experience providing emotional support, resolving conflicts, and creating inclusive environments.
About The Role
This role involves supporting scholar wellbeing, organizing community events, implementing feedback systems, and addressing community health concerns. A great Community Manager builds strong relationships with scholars, anticipates community needs, and creates support systems that promote both individual wellbeing and collective flourishing.
MATS is committed to fostering an inclusive and welcoming work environment. We welcome applications from individuals with varied backgrounds, and we know that resumes and careers don’t always fit a particular mold. We encourage you to apply even if your experience does not fully match the description.
Job Responsibilities
A Community Manager's primary role is to support the needs of scholars and improve the MATS community environment. This role reports to the Community Lead. Specific responsibilities include:
Building meaningful relationships with scholars through one-on-one meetings to learn about their needs, provide emotional support, talk through scholar challenges, and receive community feedback;
Coordinating with the Community Lead about scholar needs;
Collaborating with Research Management and Operations teams, as needed, to support scholars;
Supporting scholar orientation and offboarding;
Facilitating MATS community building by organizing and supervising socials, networking events, and workshops (for example, hosting board game nights, hikes, 1-on-1 speed networking, remote scholar events, lightning talks, and more);
Helping improve the program based on scholar feedback;
Proactively identifying community health concerns and coordinating with appropriate team members to address them, connecting scholars with appropriate resources;
Supporting our engagement with the broader AI alignment and security community, deepening relationships with other organizations and facilitating collaboration with MATS scholars.
Qualifications:
2-5 years' experience across a combination of the following:
Community building (with focus on relationship building)
People management
Events management
Counseling
Strong interpersonal skills (e.g., patience, active listening);
Strong communication skills;
High cognitive empathy;
Knowledge of emotional regulation tools;
Autonomy and proactivity;
Comfortable working with private and confidential information;
Experience providing emotional support and mediating conflicts;
Able to function in urgent and/or high-stakes situations, if necessary;
Some data analysis skills a plus, but not required;
Mission alignment (no pun intended!): We're working to ensure AI systems remain beneficial and aligned with human values. You don’t need to be an expert, but we’d love it if you have a desire to learn more.
Nice to haves:
We expect the most competitive applicants to have one or more of the following:
Experience and/or training in coaching, counseling, and mentoring;
Experience with settings similar to MATS (e.g., university counseling);
Ombuds training;
Experience setting community policies, for example in an HR function;
Some knowledge of (or willingness to learn about) AI safety-specific concerns that sometimes bear on community health (e.g. potential consequences of transformative AI as described in this video).
What We Offer
High-leverage opportunity to identify and help launch the careers of talented AI alignment, security, and governance researchers
Professional development and skill enhancement opportunities, including training and attending relevant industry and professional conferences
Collaborative and intellectually stimulating work environment
Competitive salary (see below)
Access to our office spaces in Berkeley, London, and other partnered office spaces
Medical, dental, vision, and life insurance
Traditional and Roth 401(k) options
Visa support
Meals provided Monday-Friday
Compensation
Compensation will be $83,000-$135,000 annually, depending on qualifications and experience.
Working hours and location
40 hours per week. Successful candidates can expect to spend most of their time working in person from our main office in Berkeley, California. This role is in person for the duration of the 12-week MATS cohort and the two weeks surrounding it, and otherwise hybrid.
How to Apply
Please fill out the form [here]. If you use an LLM chatbot or other AI tools in this application, please follow these norms. Applications will be reviewed on a rolling basis, with priority given to candidates who apply by October 17th. The anticipated start date for this role is December 1st.
MATS is committed to fostering a diverse and inclusive work environment at the forefront of our field. We encourage applications from individuals of all backgrounds and experiences.
Join us in shaping the future of AI safety & security research!