Research Manager (Berkeley, London)

Applications will be reviewed on a rolling basis. Apply by October 17 for priority.

About The Organization

MATS Research aims to find and train talented individuals for what we see as the world’s most urgent and talent-constrained problem: reducing risks from advanced artificial intelligence. Our mission is to maximize the impact and development of emerging researchers through mentorship, guidance, and resources. We believe that ambitious researchers from a variety of backgrounds have the potential to contribute to the fields of AI alignment, control, security, and governance research. Through our research fellowship, we aim to provide the mentorship, financial support, and community necessary to aid this transition. Please see our website for more details.

We are generally looking for candidates who:

  • Are excited to work in a fast-paced environment and are comfortable switching responsibilities and projects as the needs of MATS change;

  • Are self-motivated and can take on new responsibilities within MATS over time.

The Role

As a Research Manager, you will play a crucial role in accelerating and guiding AI safety & security researchers, facilitating high-impact projects, and contributing to the overall success of our program. This role offers a unique opportunity to develop your skills, play a significant role in the field of AI safety, and work with top researchers from around the world.

Your day-to-day work will involve talking to both scholars and mentors to understand the needs and direction of their projects. This may involve providing feedback on papers, becoming integrated into the research team in a supporting role, and/or ensuring that there is a robust plan to get from where the project is now to where it needs to be. Longer-term, MATS Research Managers have grown to lead their own teams of Research Managers and spearhead major initiatives at MATS.

We are excited about candidates who can augment their work as a Research Manager by drawing on pre-existing expertise in one or more of the following domains:

  • Theory - providing informed feedback to scholars on research direction and theory of impact, and helping MATS assess new program offerings.

  • Engineering - helping scholars to become stronger research engineers, and building out the internal tooling of MATS.

  • Management - providing scholars with structure and accountability for their research, supporting mentors in people management, and helping pioneer improvements to MATS systems and infrastructure.

  • Communication - helping scholars to present their research in more compelling ways to influential audiences, contributing to research publications, and improving how MATS communicates its work.

Who We're Looking For

We welcome applications from individuals with diverse backgrounds and strongly encourage you to apply if you fit into at least one of these profiles:

  • Experienced professionals in the fields of technical research, governance research, and/or research management

  • Experienced team managers, especially those with technical or governance backgrounds

  • AI safety & security professionals looking to apply and further develop their management skills, diversify their experience, and/or broaden their impact across a spectrum of projects

  • Individuals with experience coaching and/or managing people in a supportive way, especially in the context of supporting technical researchers and/or scientists

  • Engineering product/project managers from tech-focused organizations or a STEM industry

If you do not fit any of these profiles but think you could be a good fit, we still encourage you to apply!

Key Responsibilities

  • Work with world-class academic & industry mentors to:

    • Help their AI safety mentees (MATS scholars) stay productive and grow in their research skills

    • Drive AI safety research projects from conception to completion

    • Facilitate coordination and collaboration between scholars, mentors, and other collaborators

    • Organize and lead efficient research meetings

  • Work with developing AI safety researchers to:

    • Provide guidance and feedback on research directions, writeups, and career planning

    • Connect them with relevant resources and domain experts to support their research

  • Contribute to the strategic planning and development of MATS:

    • Spearhead internal projects - past projects have included the design of program deliverables and development of upskilling resources

    • Build, improve, and maintain the systems and infrastructure that MATS requires to run efficiently

    • Provide input into strategy discussions

Essential Qualifications and Skills

  • 2-5 years of experience across a combination of the following:

    • Project management

    • Research management (not necessarily technical)

    • People management

    • Technical research

    • Governance or policy work

    • Mentoring

  • Excellent communication skills, both verbal and written

  • Strong listening skills and empathy

  • Strong critical thinking and problem-solving abilities

  • Ability to explain complex concepts clearly and concisely

  • Proactive and self-driven work ethic

  • Ability to surface and resolve blockers quickly and efficiently

  • Genuine enjoyment of collaborative work and stakeholder management

  • Strong alignment with our mission, and familiarity with the basic ideas behind AGI safety and catastrophic AI risk

Desirable Qualifications and Skills

We expect especially strong applicants to have deep experience in at least one of the following areas:

  • Demonstrated excellence in managing teams of people and/or multi-stakeholder projects

  • Deep familiarity with some subset of AI safety concepts and research

  • Experience in ML engineering, software engineering, or related technical fields

  • Experience in technical AI safety research, AI policy, security, or governance

  • Background in coaching or professional development

  • PhD or similarly extensive academic research experience

What We Offer

  • Opportunity to help accelerate and launch the careers of highly talented early-career AI Safety researchers

  • Opportunity to be intimately involved in impactful research projects: some Research Managers have been acknowledged as co-authors on the conference papers they supported

  • Chance to work with and learn from top researchers in the field

  • Professional development and skill enhancement opportunities, e.g. deep-dives into bleeding-edge AI tools for research acceleration

  • Collaborative and intellectually stimulating work environment

  • Access to our office spaces in Berkeley, London, and other partnered office spaces

  • Private medical insurance, including dental and vision

  • Visa support for US roles

  • Meals provided Monday-Friday

Compensation

Compensation will be $130,000-$200,000 annually for Berkeley roles and £85,000-£130,000 for London roles, depending on experience and location. 

Working hours and location

40 hours per week. Successful candidates can expect to spend most of their time working in person from our main offices: Research Managers will be based at either our office in Berkeley, California, or our office in Old Street, London. We are open to hybrid working arrangements for exceptional candidates.

How to Apply

Please fill out the form here. If you use an LLM chatbot or other AI tools in this application, please follow these norms. Applications will be reviewed on a rolling basis, with priority given to candidates who apply by October 17th. The anticipated start date for this role is December 1st.

MATS is committed to fostering a diverse and inclusive work environment at the forefront of our field. We encourage applications from individuals of all backgrounds and experiences.

Join us in shaping the future of AI safety & security research!