How to get into MATS

Interested in applying to MATS and want to know more about the expectations? Want to learn ways of improving your odds of being selected? Below, we've compiled some advice to help you put your best application forward.

Who are we looking for?

The MATS program seeks individuals who are deeply motivated by AI security, alignment, and governance and are equipped with the skills and mindset to contribute to cutting-edge research.

With only about 4-7% of applicants ultimately selected, the program is highly competitive. Successful applicants come from diverse backgrounds—ranging from undergraduates to industry professionals—but the strongest candidates demonstrate clear mission alignment, a track record of research or technical work relevant to their chosen stream, and credible references. The application process involves both general and stream-specific components, often including assessments like coding tests or research work samples. If you’re aiming to apply, focus on building visible evidence of your skills, pursuing relevant research experiences, and connecting with the AI alignment community. With proactive preparation, you can significantly boost your chances of joining MATS and contributing meaningfully to the field.

Tips for Prospective Applicants

1. Understand the Mission & Mindset

MATS is for people committed to AI alignment, security, and responsible governance, not merely pursuing a general career in AI. You should be familiar with AI risk models and the broader goals of reducing catastrophic AI risks.

2. Know the Application Process

1. General Application

Submit one unified application covering your background, interests, and the tracks and streams you'd like to apply to. Expect 1-2 hours. You'll also provide two references—we'll reach out to them in later stages. Streams are collections of mentors focused on a research agenda. You can find the streams for Summer 2026 here.

2. Additional Evaluations

Depending on your tracks and streams, you may be asked to complete further assessments:

  • Empirical track applicants complete a code screening in the second stage.
  • Applicants to other tracks may be asked to complete work tests, project proposals, or intermediate interviews.

3. Mentor Interviews

If you advance, you'll interview directly with mentors who are interested in working with you.

4. Offers

Mentors select fellows based on their own criteria, and we send out offers.

3. What MATS Mentors Look For

MATS scholars come from a variety of backgrounds, including:

  • Industry professionals transitioning into AI alignment/security/governance.
  • Undergraduates with exceptional research potential.
  • PhD students or recent PhD graduates in relevant fields.
  • Governance or policy professionals pivoting into technical AI security work.

Key qualities:

  • Alignment with the mission of reducing catastrophic AI risks.
  • Evidence of research ability—ideally in relevant areas.
  • Outputs such as publications, blog posts, open-source projects, or substantial research contributions.
  • Ability to learn quickly and produce meaningful results.

4. How to Strengthen Your Profile

Gain Research Experience:

  • Join research programs like SPAR or PIBBSS, or explore the other AI alignment programs and resources listed below.
  • Complete a Master's or PhD in AI safety with a strong supervisor.

Develop Technical Skills:

  • Improve your ML engineering and software engineering abilities.
  • Join technical bootcamps like ARENA or ML4Good.
  • Practice coding challenges if you're applying to technical streams that require a code screening.

Make Your Skills Visible:

  • Publish blog posts.
  • Share research findings publicly.
  • Cultivate strong references.

Use AI Tools to Boost Your Productivity:

  • Leverage modern AI tools (like large language models) to understand research papers, brainstorm, and accelerate learning.

Explore Educational Resources:

  • See the resource lists in the section below.

Other AI Alignment, Security and Governance Resources

Programs:

Programs such as SPAR, PIBBSS, ARENA, and ML4Good (mentioned above) offer research experience and technical upskilling.

Careers:

We recommend 80,000 Hours’ career guide and podcast series. The 80,000 Hours Job Board and the EA Opportunities Board list jobs relevant to AI risk mitigation. Emerging Tech Policy Careers offers expert career advice and resources for those looking to work in US AI policy. Also, we recommend this post on building aptitudes.

Funding:

The Long Term Future Fund, Open Philanthropy, Manifund, Founders Pledge, Cooperative AI Foundation, and Foresight Institute can fund you to pursue independent research projects or develop your skills. Y Combinator has a request for startups in explainable AI. Entrepreneur First has a def/acc startup accelerator program.

Academia:

If you plan to write a thesis, or are doing other independent research, Effective Thesis can help you find a high-impact topic and mentorship. We recommend reaching out to these academics for research projects in AI alignment.

Community:

  • LessWrong is a great platform for sharing ideas and getting feedback from the community, and the AI Alignment Slack is good for finding collaborators.
  • For office space, you can apply to FAR Labs in Berkeley and LISA in London.

Resources:

See Arkose’s, BlueDot Impact’s, and AISafety.com’s lists. For staying informed about the latest developments in AI alignment, we recommend the Don’t Worry About the Vase, Transformer, ACX, and Import AI blogs and the 80,000 Hours, Dwarkesh, AXRP, and Inside View podcasts.