MATS mentors are advancing the frontiers of AI alignment, transparency, and security

Mia Taylor joined Forethought after three years as a Researcher and then Interim Research Director at the Center on Long-Term Risk. She received a bachelor’s degree in Mathematics and Computer Science from Harvey Mudd College.

Focus:
Policy and Strategy
AI Welfare, Strategy and Forecasting, Policy and Governance

Shi Feng leads a research group working on oversight and control. He is an assistant professor at George Washington University. Prior to that, he was a postdoc in the NYU Alignment Research Group under Sam Bowman. He currently focuses on deception and collusion, with an emphasis on propensity and evaluation realism.

Focus:
Empirical
Control, Scalable Oversight, Red-Teaming, Model Organisms, Monitoring
Adam Shai
Simplex, Research Lead

Adam Shai has extensive research experience in experimental and computational neuroscience. He earned his PhD from Caltech and has over a decade of experience investigating the neural basis of intelligent behavior, most recently as a researcher at Stanford. Driven by the pressing need for AI safety, he has now turned his expertise to neural networks, aiming to develop principled methods for controlling and aligning increasingly advanced AI systems.

Adam co-founded and now leads research at Simplex, an organization dedicated to building a science of representations in AI systems.

Focus:
Empirical
Interpretability

George Robinson is a researcher at the Alignment Research Center (ARC), which is working on a systematic and theoretically grounded approach to mechanistic interpretability. Before joining ARC, he was a PhD student at Oxford University specialising in Algebraic Number Theory. He lives in London, and is a member of the London Initiative for Safe AI (LISA).

Focus:
Theory
Interpretability
Daniel Murfet (Dan)
Timaeus, Director of Research

I was until recently a professional mathematician at the University of Melbourne, where I worked on algebraic geometry, mathematical logic, some aspects of mathematical physics, and most recently statistical learning theory. In early 2025 I left academia to direct research on AI safety at Timaeus.

Focus:
Empirical
Interpretability, Model Organisms, Red-Teaming, Safeguards, Scheming and Deception
Matthew Gentzel
Longview Philanthropy, Nuclear Weapons Policy Program Officer

Matthew Gentzel is a Nuclear Weapons Policy Program Officer at Longview Philanthropy, where he works on grantmaking and priorities research related to reducing the risk of large-scale nuclear war. Roughly half of his grantmaking budget is concentrated on AI and emerging-tech-related nuclear risk issues, where he investigates risks and opportunities related to AI-enabled targeting, information manipulation, and how perceptions of future AI capabilities affect near-term escalation control.

His prior work spanned emerging technology threat and policy assessment, with a particular focus on how advancements in AI may shape the future of influence operations, nuclear strategy, and cyber attacks. He has worked as a policy researcher with OpenAI, as an analyst in the US Department of Defense’s Innovation Steering Group, and as a director of research and analysis at the US National Security Commission on Artificial Intelligence. 

Mr. Gentzel holds an MA in strategic studies and international economics from the Johns Hopkins School of Advanced International Studies and a BS in fire protection engineering from the University of Maryland, College Park.

Focus:
Policy and Strategy
Policy and Governance, Strategy and Forecasting
Saad Siddiqui
Safe AI Forum, Senior AI Policy Researcher
Focus:
Policy and Strategy
Policy and Governance

Oly works at the Future of Life Foundation on sourcing and developing ambitious ideas to build a flourishing future, grounded in realistic scenarios for AI and other technological development. Priorities include human collective intelligence uplift, gentle and manageable multiagent transitions, and defensive tech.

Oly previously worked on loss of control risk modelling and evaluation at the UK AI Safety/Security Institute and continues to engage with the OECD, UK FCDO, DSIT, and parliamentarians on AI governance.

He researched language model (LM) agent oversight and multiagent safety at Oxford and was one of the first beneficiaries of the MATS program in 2021-22. Before his AI safety work, he was a senior data scientist and software engineer.

Focus:
Empirical
Dangerous Capability Evals, Compute and Hardware, Policy and Governance, Strategy and Forecasting
Wilson Wu
ARC, Researcher

Wilson Wu is a researcher at the Alignment Research Center (ARC), which is working on a systematic and theoretically grounded approach to mechanistic interpretability. He has previously worked on alternative approaches to interpretability, including compact proofs and applications of singular learning theory.

Focus:
Theory
Interpretability
Focus:
Empirical
Monitoring, Adversarial Robustness, Control, Model Organisms, Red-Teaming, Dangerous Capability Evals, Safeguards

Victor Lecomte is a researcher at the Alignment Research Center (ARC), which is working on a systematic and theoretically grounded approach to mechanistic interpretability. He holds a PhD from Stanford University, where he did research in computational complexity and other areas of theoretical computer science before pivoting to AI safety research.

Focus:
Theory
Interpretability

Zainab is the co-founder of Asymmetric Security. She was previously a cybersecurity analyst at Stroz Friedberg, where she investigated some of the largest cybersecurity breaches of the past decade (e.g., Cambridge Analytica). She has also published at NeurIPS on AI cybersecurity evaluations. Zainab holds a master’s degree in Physics from Oxford University.

Focus:
Empirical
Security, Dangerous Capability Evals
Jesse Hoogland
Timaeus, Executive Director

Jesse is the co-founder and executive director of Timaeus, an AI safety non-profit researching applications of singular learning theory (SLT), particularly for interpretability and alignment. Jesse comes from a background in physics and leads several research projects at Timaeus, in addition to being involved in outreach and operations.

Focus:
Empirical
Interpretability, Model Organisms, Red-Teaming, Safeguards, Scheming and Deception
Michael Winer (Mike)
ARC, Research Collaborator

Mike Winer is a researcher at the Alignment Research Center (ARC), where he studies how mechanistic estimates can beat black-box techniques in toy setups. His background is in statistical physics, where he studied how large numbers of objects obeying simple rules can exhibit complex behaviors like magnetism, glassiness, or scoring 87% on GPQA.

Focus:
Theory
Interpretability
Pierre-Luc St-Charles
LawZero, Senior ML Research Scientist

Pierre-Luc St-Charles is a researcher and developer specializing in applied machine learning, with over a decade of experience across different non-profit institutes. He has held research roles at the Computer Research Institute of Montréal and senior research roles at Mila, collaborating with industrial partners and multidisciplinary academic teams on innovative projects in natural resources, transportation, digital media, document intelligence, and earth observation. Pierre-Luc earned his PhD in Computer Vision from Polytechnique Montréal in 2018, receiving the departmental Best Thesis Award. In 2024, he joined LawZero, a Mila-incubated organization focused on developing safe AI technologies. He is currently focused on building benchmarks and evaluation methodologies for frontier AI systems.

Focus:
Empirical
Agent Foundations, Dangerous Capability Evals, Monitoring, Control, Red-Teaming, Scalable Oversight
Damiano Fornasiere
LawZero, Senior AI Safety Research Scientist

Damiano is a research scientist at LawZero, where he works on (i) the maths behind the Scientist AI, (ii) model organisms to study elicitation, and (iii) interpretability and evaluation techniques for situational awareness and introspection.

Focus:
Empirical
Agent Foundations, Dangerous Capability Evals, Monitoring, Control, Red-Teaming, Scalable Oversight
Isabella Duan
Safe AI Forum, Senior AI Policy Researcher

Isabella Duan is an AI policy researcher at Safe AI Forum. She studied at University College London and the University of Chicago, and has previously done research on existential risk and AI governance.

Focus:
Policy and Strategy
Policy and Governance
Oliver Richardson (Oli)
LawZero, Senior ML Research Scientist / Postdoctoral Fellow (UdeM)

Oli(ver) is a computer scientist (a staff member at LawZero and a postdoc under Yoshua Bengio) with unusually broad scientific and mathematical expertise.

He is a sucker for pretty demos and grand unifying theories, sometimes unfortunately losing sight of what is practical. Over the last few years (i.e., during his PhD at Cornell), Oli discovered a beautiful theory describing how a great deal of artificial intelligence, classical and modern, can be fruitfully understood as resolving a natural information-theoretic measure of epistemic inconsistency. Many questions remain unanswered, but the hope is that this much clearer view can lead to powerful generalist AI systems that are safer because they do not meaningfully have goals or desires.

Focus:
Empirical
Agent Foundations, Dangerous Capability Evals, Monitoring, Control, Red-Teaming, Scalable Oversight
Marc-Antoine Rondeau
LawZero, Senior Machine Learning Research Scientist

Marc-Antoine is a Research Scientist at LawZero. His main areas of expertise are NLP and applied ML, which he is currently applying to AI safety projects.

His research areas include interpretability and evaluation.

Focus:
Empirical
Agent Foundations, Dangerous Capability Evals, Monitoring, Control, Red-Teaming, Scalable Oversight
Jean-Pierre Falet
LawZero, Machine Learning Research Scientist

Jean-Pierre is a machine learning research scientist at LawZero, focused on designing model-based AI systems with quantitative safety guarantees. His primary interests are in probabilistic inference in graphical models, and he draws inspiration from his multidisciplinary background in neurology and neuroscience, which informs his understanding of human cognition. Jean-Pierre studied at McGill University, obtaining a medical degree in 2017, completing a neurology residency in 2022, and earning a master's degree in neuroscience in 2023. During his master’s, he developed causal machine learning methods for precision medicine. Concurrently with his work at LawZero, Jean-Pierre is completing a PhD in computer science at Mila and Université de Montréal, supervised by Yoshua Bengio. In addition to contributing to the foundations of guaranteed-safe AI, Jean-Pierre is passionate about translating advances in AI into clinically meaningful, safety-critical applications.

Focus:
Empirical
Agent Foundations, Dangerous Capability Evals, Monitoring, Control, Red-Teaming, Scalable Oversight

Frequently asked questions

What is the MATS Program?
Who are the MATS Mentors?
What are the key dates of the MATS Program?
Who is eligible to apply?
How does the application and mentor selection process work?