Program Manager

MATS is an independent research and educational seminar program that connects talented scholars with top mentors in the fields of AI alignment, interpretability, and governance. The main goal of MATS is to help scholars develop as AI alignment researchers.

Updated

December 10, 2025

Location

Berkeley

Job description

Applications will be reviewed on a rolling basis. Apply here by October 17 for priority.

About The Organization

MATS Research aims to find and train talented individuals for what we see as the world's most urgent and talent-constrained problem: reducing risks from advanced artificial intelligence. Our mission is to maximize the impact and development of emerging researchers through mentorship, guidance, and resources. We believe that ambitious researchers from a variety of backgrounds have the potential to contribute to the fields of AI alignment, control, security, and governance research. Through our research fellowship, we aim to provide the mentorship, financial support, and community necessary to support researchers making this transition. Please see our website for more details.

We are generally looking for candidates who:

  • Are excited to work in a fast-paced environment and are comfortable switching responsibilities and projects as the needs of MATS change;
  • Are self-motivated and can take on new responsibilities within MATS over time.

The Role

As a Program Manager, you will design and run the systems that identify, select, and support high-potential AI safety and security researchers. You'll lead our application and selection processes; build scalable assessments; coordinate mentors, reviewers, and partners; and maintain the infrastructure that keeps the program running smoothly. You'll analyze program outcomes and ecosystem talent needs to make recommendations and help shape organizational strategy. Depending on your strengths, you may specialize; for example, you might take point as Program Talent Manager, focused on sourcing and selection.

Your day-to-day work will change depending on the phase of the program, so at any given time you may be focused on running the MATS application process, managing scholar assessments, or analyzing program outcomes. You will coordinate with other MATS team members, mentors, and external stakeholders to see projects through to successful completion.

Who We're Looking For

We welcome applications from individuals with diverse backgrounds, and we strongly encourage you to apply if you fit into at least one of these profiles:

  • Professionals with previous experience in AI safety and security research (e.g., research program staff, researchers, project managers, university group leaders, etc.)
  • Project or hiring managers from tech-focused organizations or a STEM industry
  • Entrepreneurs with experience working on small, fast-scaling teams

If you do not fit into one of these profiles but think you could be a good fit for this role, we are still excited for you to apply!

Key Responsibilities

  • Lead the MATS application process to identify and attract strong researchers for our program
    • Coordinate with mentors, reviewers, and other team members to establish bespoke selection processes for different groups of mentors
    • Manage our applications infrastructure, either building on our pre-existing Airtable + Fillout stack, or proposing a migration to a different system
    • Build workflows for scalable applicant assessments, like code screenings, work tests, or other evaluations
    • Develop a talent-sourcing strategy to inform our outreach and advertising, ensuring we reach the best potential applicants
  • Perform impact analyses to assess the MATS program's outcomes and identify opportunities for improvement
    • Determine what information should be collected for impact analysis and coordinate with team members to collect it
    • Analyze the collected data to draw conclusions about program outcomes
    • Communicate results and recommendations to internal and external stakeholders
  • Gather and analyze information about the talent needs of the AI safety and security ecosystem
    • Build relationships with hiring managers, grantmakers, and others who can share context on current talent needs
    • Communicate insights to MATS leadership to inform the direction of the organization
  • Design and coordinate programming, such as seminars and workshops, that takes place during the MATS program
  • Ensure effective communication with mentors and scholars about program logistics and responsibilities
  • Contribute to the strategic planning and development of MATS
    • Spearhead other miscellaneous internal projects
    • Build, improve, and maintain the systems and infrastructure that MATS requires to run efficiently
    • Provide input into strategy discussions

Depending on the comparative strength of applicants, we may hire candidates to focus on a subset of these responsibilities. In particular, a candidate with strong fit for running the MATS application process may focus on that as their sole responsibility with the title Program Talent Manager.

Essential Qualifications and Skills

  • 2-5 years of experience across a subset of the following:
    • Project management
    • Program management
    • Event management
    • Hiring or other talent search
  • Broad familiarity with AI safety concepts and the landscape of actors in the AI ecosystem, especially as necessary for strategic decisions
  • Excellent communication skills, both verbal and written
  • Strong critical thinking and problem-solving abilities
  • Ability to analyze data to draw conclusions
  • Ability to explain complex concepts clearly and concisely
  • Proactive and self-driven work ethic
  • Strong alignment with our mission of reducing risk from the development of advanced AI

Desirable Qualifications and Skills

We expect especially strong applicants to have deep experience in at least one of the following areas:

  • Demonstrated excellence in project or people management
  • Experience in technical AI safety research, AI policy, security, or governance
  • Experience in hiring for technical or research roles
  • Leadership experience
  • Proficiency with database and form software like Airtable and Fillout
  • Familiarity with Squarespace or web design
  • Proficiency with AI tools, like Claude Code or Cursor, for hacking together tools or automating workflows

What We Offer

  • High-leverage opportunity to identify and help launch the careers of talented AI safety and security researchers
  • Professional development and skill-enhancement opportunities, including funding and support for attending industry conferences and other professionally relevant events
  • Flexibility for remote work in between cohorts
  • Collaborative and intellectually stimulating work environment
  • Competitive salary (see below)
  • Access to our office spaces in Berkeley, London, and other partnered office spaces
  • Medical, dental, vision, and life insurance
  • Traditional and Roth 401(k) options
  • Meals provided Monday-Friday

Visa sponsorship may be possible but is not guaranteed; we encourage all interested candidates to apply.

Compensation

Compensation will be $130,000-$200,000 annually, depending on experience and location.

Working hours and location

40 hours per week. Successful candidates can expect to spend most of their time working in-person from our main office in Berkeley, California. We are open to alternative working arrangements for exceptional candidates.

How to apply

Please fill out the form here. If you use an LLM chatbot or other AI tools in this application, please follow these norms. Applications will be reviewed on a rolling basis, with priority given to candidates who apply by October 17. The anticipated start date for this role is December 1.

MATS is committed to fostering a diverse and inclusive work environment at the forefront of our field. We encourage applications from individuals of all backgrounds and experiences.

Join us in shaping the future of AI safety and security research!
