Getting into the MATS Program
Are you interested in applying to MATS and want to know more about the expectations? Are you interested in improving your odds of being selected?
The MATS program seeks individuals deeply motivated by AI safety and equipped with the skills and mindset to contribute to cutting-edge research. With only about 4-7% of applicants ultimately selected, the program is highly competitive. Successful applicants come from diverse backgrounds—ranging from undergraduates to industry professionals—but the strongest candidates demonstrate clear mission alignment, a track record of research or technical work relevant to their chosen stream, and credible references. The application process involves both general and stream-specific components, often including assessments like coding tests or research work samples. If you’re aiming to apply, focus on building visible evidence of your skills, pursuing relevant research experiences, and connecting with the AI safety community. With proactive preparation, you can significantly boost your chances of joining MATS and contributing meaningfully to the field.
Tips for MATS Applicants
1. Understand the Mission & Mindset
MATS is for people committed to AI alignment, security, and responsible governance, not those simply pursuing a general career in AI.
You should be familiar with AI risk models and the broader goals of reducing catastrophic AI risks.
2. Know the Application Process
Pre-Application: General questions about your motivation, background, and interest in AI safety.
Stream-Specific Applications: Applications to work with specific groups of mentors in research areas such as interpretability, governance, or empirical alignment. Note: both the pre-application and a stream-specific application are required.
Possible Assessments (varies by stream):
Many technical streams require coding tests (e.g. CodeSignal).
Some streams require work tests. For example, Neel Nanda’s stream application includes a work test of ~12 hours of research effort.
Some streams use selection questions testing reasoning or technical skills.
Most streams conduct interviews at the final stages.
References Matter:
Strong, specific references from known researchers, professors, or reputable professionals are significant signals of fit.
3. What MATS Mentors Look For
MATS scholars come from a variety of backgrounds, including:
Industry professionals transitioning into AI safety.
Undergraduates with exceptional research potential.
PhD students or recent PhD graduates in relevant fields.
Governance or policy professionals pivoting into technical AI safety work.
Key qualities:
Mission alignment with AI safety goals.
Evidence of research ability—ideally in relevant areas.
Outputs such as publications, blog posts, open-source projects, or substantial research contributions.
Ability to learn quickly and produce meaningful results.
4. How to Strengthen Your Profile
Gain Research Experience:
Join research programs such as SPAR or the Cambridge/Boston AI safety initiatives, or explore the other AI safety programs listed in the resources below.
Develop Technical Skills:
Improve your ML engineering and software engineering abilities.
Practice coding challenges if you are applying to technical streams that require a coding screening.
Make Your Skills Visible:
Publish blog posts.
Share research findings publicly.
Cultivate strong references.
Use AI Tools to Boost Your Productivity:
Leverage modern AI tools (like large language models) to understand research papers, brainstorm, and accelerate learning.
Explore Educational Resources:
BlueDot Impact AI Safety Fundamentals for newcomers or cross-disciplinary professionals who want to build high-level familiarity with AI safety.
ARENA’s Curriculum for ML engineering skill development.
Explore Other Resources:
See the Other AI Safety Resources section below for additional programs, courses, career guides, funding options, and community spaces.
Focus on building clear signals of research ability, mission alignment, and proactive engagement with the AI safety community—and start that journey today!
Other AI Safety Resources
Programs:
Apart Research runs monthly online hackathons.
The following programs accept applications:
RAND TASP (Governance; rolling applications)
CHAI Research Fellowship (Rolling applications)
SPAR
ML4Good
Vista Institute for AI Policy
ERA (Governance)
GovAI Fellowship (Governance)
Horizon Fellowship (Governance)
BASIS
LASR Labs
ARENA
Pivotal Research Fellowship
Impact Academy
IAPS Fellowship (Governance)
PIBBSS
MARS
CBAI
Constellation Visiting Fellows
Courses:
BlueDot Impact offers the AI Safety Fundamentals Governance Course. The Global Challenges Project runs regular AI safety workshops with rolling applications. We recommend checking AISafety.com’s courses for self-study resources and to stay up to date with new course offerings.
Self-study: We recommend AISF and CAIS’ Introduction to ML Safety Course and textbook. For building technical skills, we recommend ARENA.
Careers:
We recommend 80,000 Hours’ career guide and podcast series. The 80,000 Hours Job Board and the EA Opportunities Board list jobs relevant to AI risk mitigation. We also recommend this post on building aptitudes.
Funding:
The Long Term Future Fund, Open Philanthropy, Manifund, Founders Pledge, Cooperative AI Foundation, and Foresight Institute can fund you to pursue independent research projects or develop your skills. Y Combinator has a request for startups in explainable AI. Entrepreneur First has a def/acc startup accelerator program.
Academia:
If you plan to write a thesis, or are doing other independent research, Effective Thesis can help you find a high-impact topic along with mentorship. We recommend reaching out to these academics for research projects in AI safety.
Community:
LessWrong is a great platform for sharing ideas and getting feedback from the community, and the AI Alignment Slack is good for finding collaborators. For office space, you can apply to FAR Labs in Berkeley and LISA in London.
Resources:
See Arkose’s, BlueDot Impact’s, and AISafety.com’s lists. To stay informed about the latest developments in AI safety, we recommend the Don’t Worry About the Vase, Transformer, ACX, and Import AI blogs, and the 80,000 Hours, Dwarkesh, AXRP, and Inside View podcasts.