Meet the MATS team

MATS is an independent research and educational seminar program that connects talented scholars with top mentors in the fields of AI alignment, interpretability, governance, and security. The main goal of MATS is to grow the AI safety & security research fields.

Leadership

Ryan is Co-Executive Director of MATS, a Co-Founder and Board Member of the London Initiative for Safe AI (LISA), a Manifund Regrantor, and an advisor to Halcyon Futures, Catalyze Impact, AI Safety ANZ, and Pivotal Research. Previously, he completed a PhD in Physics at the University of Queensland (UQ) and conducted independent research in AI alignment for the Stanford Existential Risks Initiative.

Christian is Co-Executive Director of MATS and Co-Founder of the London Initiative for Safe AI (LISA). Previously, he studied particle physics and pedagogy at Stanford University, worked in operations at multiple organizations, performed research at CERN, and organized educational programs like the Uncommon Sense Seminar.

Research Team

Laura leads the Research team at MATS. She joined at the end of 2022 and is responsible for ensuring high-impact scholar outcomes and for distributing research resources. Her team's mission is to accelerate the research impact of the program by facilitating every part of the research process, including ideation and exploration, research operations, and collaboration.

A Thiel Fellow alum from 2017, Laura studied physics at the University of Waterloo and co-founded a successful stem cell cryogenics startup before pivoting into research, consultation, and software engineering. She brings experience in ML model dataset creation and training, management, entrepreneurship, full-stack software engineering, and biomedical research.

Bryce is a Research Manager at MATS where he works to empower scholars and mentors to achieve better program outcomes. He particularly emphasizes strategic clarity and the building of foundational habits and skills. Prior to MATS, Bryce was a Software Engineer and a productivity coach, and has started EA and AI Safety groups in San Diego and Mountain View. He holds an M.S. in Computer Science and Engineering from UC San Diego.

Jeffrey is a Research Manager at MATS. He has worked on AI policy research, studying the historical development of other technologies and how society reacted to them in order to better understand and shape potential AI policies. Before this, Jeffrey worked on fusion power. He has a background in physics, particularly plasma physics, turbulence, and chaos theory.

Jason is a research manager at MATS, where he helps scholars pursue research that suits their skills and interests. He particularly enjoys helping scholars improve the impact of their policy and governance research. Before joining MATS, Jason led the Center for AI Policy (CAIP), which lobbied Congress to pass strong AI safety legislation. He has also worked as a legal compliance counselor and as a product safety litigator, giving him a firsthand view of how laws are (and are not) applied in everyday situations. Jason holds a JD from Harvard Law and a BA in Political Science from Yale University.

Jonathan is a research manager at MATS, where he supports scholars in achieving strong research outputs and outcomes. Previously, he earned his PhD in mathematics from the University of Southern California. His motivation for AI safety originates from his time at Duke's Effective Altruism Club; more recently, he has participated in the AISF alignment course, SPAR, and USC's AI Safety Club.

Elise brings dual expertise in AI safety/governance and biosecurity, having conducted in-depth research across both fields. A Clarendon Scholar and PhD candidate at Oxford University’s Institute for Ethics in AI, she was previously a Teaching Fellow with BlueDot Impact and has advised decision-makers (e.g., the UN, US agencies, startups) on AI risks, policy, adoption, and oversight. Her career spans multiple sectors, including founder and think tank experience. Elise holds an Executive MPA from the Hertie School, an MSc from the London School of Economics, and a BA from Stanford University.

Claire has taken part in multiple research and artist residencies, academic research in neuroscience, work in data science, research project work in MATS 3.0 and 7.0, and time as a research lead in AI Safety Camp. She is also the founder of Athena, a mentorship program for women in AI safety research. She holds a Masters in Cognitive Science from Columbia University.

Maria is a research manager specialising in AI governance. She is pursuing a PhD in political science at the University of Oxford. Before joining MATS, she worked at the Centre for European Policy Studies, focusing on EU AI policy and deliberative democracy. She has also co-organised a number of technical alignment programs and retreats, including an iteration of MLAB at Oxford and the Schelling residency.

Cory is a Research Manager at MATS, where she helps scholars and mentors do great research, and become better researchers. She brings 15 years of experience in industry. Most recently she spent a decade at Uber building and leading science and research teams, in the areas of policy research, algorithmic fairness, and marketing measurement. Before that, she worked as a data scientist at Beats Music and Google. Cory holds a BA in Cognitive Science from Dartmouth College.

Chris is a Research Manager at MATS. Prior to joining MATS, he worked for Google as a Quantitative UX Researcher for 12 years and earned a PhD in Neuroscience from Johns Hopkins and an MS in CS from the University of Southern California. He's been in the AI safety space since early 2024, conducting independent AI safety research and serving as a teaching fellow for BlueDot courses, a SPAR mentor, and a research manager.

Skylar is a research manager at MATS. Prior to joining MATS he worked for 7 years as a data analyst for the Seattle Mariners, particularly focusing on in-game strategy and staff education. He transitioned careers in 2025 after participating in AI Safety courses from BlueDot Impact and AI Safety Camp. He holds a B.A. in statistics from Yale University. Outside of work Skylar loves playing all sports, including most recently golf, pickleball, and bowling.

Jinghua is a Research Manager at MATS. She completed her PhD at the University of Hong Kong and later continued her research training at the University of Chicago, working at the intersection of neuroscience, linguistics, and speech perception. Before joining MATS, she worked at a nonprofit research funder, where she led a portfolio of AI applications in clinical and health research—an experience that deepened her interest in responsible AI deployment and governance. Conceptually, she sees strong parallels between mechanistic interpretability and the questions of neural representation she explored in research. Together, these experiences motivate her commitment to AI safety.

Nathan is a Research Manager at MATS. He has a background in neuroscience, data science, and ML engineering. He has worked on AI Safety research in AI Control, Sandbagging Detection, and Biorisk Evals (WMDP).

Srija is a Research Manager at MATS. She received her PhD from Arizona State University focusing on machine learning for satellite data. She then worked at NASA Goddard as a postdoc on ML for earth science and later as a research scientist (AI/ML PI) at USRA/NASA, contributing to NASA’s AI-driven science, fine-tuning and red-teaming efforts. More recently, she has been interested in the broader impact of AI and in assessing vulnerabilities in the AI lifecycle to improve AI safety and governance. At MATS she contributes to that by supporting AI safety research scholars and mentors.

Program Team

Juan leads the Program Team at MATS. The team is responsible for fellow and mentor selection, impact analysis, and other program strategy. Prior to MATS, Juan worked as a professional community builder for the Boston AI safety and Effective Altruism communities. He holds a B.S. in Mathematics with Computer Science from MIT (2020).

Sanyu runs the MATS application process. He has conducted technical research in AI safety and biosecurity, focusing on evaluations and threat modeling of large language models and biological design tools. He works part-time at SecureBio on AI-Bio evals, and worked on situational awareness evals during the UChicago XLab fellowship. He also co-founded and led his university's AI safety group. He holds a degree in computational biology from Brown University.

Compute Team

Iftekhar is the Research Compute Lead at MATS, where he manages the compute infrastructure supporting fellows' research. His work spans GPU cluster operations, cloud compute platforms, research engineering support, and enabling technically demanding AI safety projects. A physician by training, Iftekhar spent four years at Johns Hopkins conducting cardiology research focused on heart disease epidemiology and statistics (H-index: 23). Prior to MATS, he worked on vision-language models and as a robotics engineer specializing in industrial collaborative robots. His interests center on building reliable, secure systems that enable high-impact AI safety research and engineering.

Pratiksha is the Compute Administrator at MATS, where she ensures scholars have seamless access to the computational resources needed for AI alignment and safety research. With a Master's in Data Science from George Washington University and five years of experience in technical operations, Pratiksha brings expertise in system administration, data-driven decision making, resource optimization, and automation.

Before joining MATS, she led technical teams at Accenture and managed enterprise systems at Colgate-Palmolive, developing scalable solutions and data-driven workflows. Pratiksha is passionate about enabling cutting-edge research through robust infrastructure and is driven by a commitment to AI safety and MATS' mission of reducing risks from advanced AI.

Ganesh is a Compute Administrator at MATS, where he supports fellows by setting up research environments, tuning workflows, and reducing infrastructure friction.

Before MATS, he worked in enterprise cluster administration and at a startup doing backend and cloud engineering.

Outside work, he enjoys hiking, badminton, cooking, and aquarium setups.

Community Team

Jana leads the Community Team at MATS, which supports the well-being of the MATS community. The team works to build a safe, supportive, and nurturing environment where scholars can flourish as individuals. They also help facilitate connection with the broader AI Alignment community. Jana studied biology at Harvard and biostatistics at UNC, and she brings experience in project management, data analysis, writing, and community building. 

John is a seasoned program and product manager with decades of experience working on technology projects at Intel and Microsoft. In 2023, seeing that AI capability was growing at an alarming rate, John decided to make a career transition to AI safety, first by working as an affiliate for Stanford AI Alignment (SAIA), and then by participating in CAIS' AI Safety, Ethics, and Society course and BlueDot Impact's AI Safety Fundamentals courses. John aims to apply his skills and experience to advance field building in AI safety and alignment as part of the Community team at MATS. He holds a BA in Economics from WCU and an MBA with a concentration in international finance from the Shidler School of Business at the University of Hawaii.

Nate is a Community Manager at MATS, where he works to strengthen community connection and support scholar well-being. With a background in startups and people operations, he brings a passion for building communities where people feel valued, understood, and empowered to do their best work. Nate studied government at Harvard and holds a BA in Business Administration from Morehouse College.

Vivian is a Community Manager at MATS where she provides well-being support for scholars and organizes events. Prior to MATS, she worked at Google within staffing operations and is involved in South Bay grassroots organizing. She holds a BS in Public Health from San Jose State University.

London Team

Matthew runs the London team of MATS, which coordinates and plans the extension phase of the programme.

Perusha is a research manager at MATS London, where she provides technical guidance to scholars across multiple alignment research areas. She holds a PhD in Deep Reinforcement Learning, which sparked her interest in mechanistic interpretability. She subsequently specialized in AI safety through the AISF alignment course, AISC, and various hackathons and deep dives. Perusha has a BSc and MSc in Mechanical Engineering and 18 years of experience as a senior software developer.

Zohreh is a research manager at MATS London. She is also an affiliated lecturer at the University of Cambridge. Her research focuses on interpretability and its applications for knowledge discovery. Previously, she was the CSO at Leap Laboratories, a spin-out of MATS, where she led research. She brings experience from both academia and industry and is excited to help scholars make the most of their AI safety research, from high-impact publications to entrepreneurship.

Henning is a Research Manager at MATS London where he focuses on technical guidance and professional coaching to optimize scholars’ goals. Prior to MATS, he worked as an ML engineer for 3 years while publishing technical AI safety research and maintaining a coaching practice on personal development since 2017. He combines his technical and coaching experience to empower scholars’ research and professional growth.

Sofia manages Operations at MATS in London, supporting the team and scholars with day-to-day office management, process set-up, and future planning. She previously worked as a program manager in legal and sales operations and has been on the path to AI safety since she read The Alignment Problem in 2022. She is currently pursuing an MSc in Cognition and Computation.

Keivan is a Research Manager at MATS London, where he provides technical guidance, AI safety mentoring, and professional development support to help scholars advance their research and career goals. He is also a Professor of AI at Lancaster University and serves as a Scientific Advisor at The Alan Turing Institute. Before joining MATS, he held senior advisory roles, including Principal Technology Advisor at the UK Information Commissioner’s Office. Keivan’s work spans AI safety, accountability, and compliance in high-stakes applications, and he combines his academic and policy expertise with a commitment to fostering responsible innovation and scholar development within the MATS community.

Lovkush is a research manager at MATS London, where they provide technical guidance to scholars across multiple alignment research areas. They first heard about existential risks from AI in the mid-2010s, but only decided to pivot their career with the recent rise of LLMs. In 2024 and 2025, Lovkush upskilled by taking part in various programs (e.g. ML4Good, SPAR, AI Safety Camp, ARENA, Algoverse) and helped TA a few too (ML4Good, ARBOx, BlueDot, ARENA). They have a background in pure mathematics (MMath and PhD, 2008-2016), then lectured in mathematics (2017-2020), and worked as a data scientist (2021-2024).

Rich is a research manager in London and a former MATS scholar. He completed his Master's in Mathematical and Theoretical Physics at Oxford (2013-17), focusing on plasma physics and nuclear fusion. He found AI safety through 80,000 Hours four months into his Physics PhD at Imperial, which motivated him to leave the PhD in early 2018 to upskill as a software engineer instead. With strong engineering skills, he began transitioning into AI safety by completing AISF in 2024 and ML4Good in 2025, and worked with Marius Hobbhahn on black-box monitoring in MATS 8.0 and 8.1. He has also taught at ML4Good.

Patryk Wielopolski is a Research Manager at MATS London, where he supports scholars' technical and professional development. Prior to MATS, he spent 6 years at DataWalk building their AI research agenda, leading a team of research engineers, and architecting AI solutions for Fortune 500 companies and government agencies. He holds a Ph.D. in AI with publications in AAAI, TPAMI, and ECAI, focusing on probabilistic models. He also co-organizes the AI Safety Poland community.

Iva coordinates Operations at MATS in London, supporting the team and fellows with process management, office set-up, and operational planning. She previously worked at the Columbia Justice Lab, where she managed and contributed to research projects on criminal justice and inequality in the United States. She is currently also a Research Fellow at the Administrative Fairness Lab where she researches the use of AI in the UK’s social care system.

Fred is a research manager at MATS in London, providing technical guidance and coaching to help MATS fellows do great research and get the most out of the program. Prior to MATS, he worked as an ML research engineer in the music and music technology industries and completed a PhD on AI-assisted music creation at Queen Mary University of London. He's excited to apply his expertise in applied machine learning research and industry research management to support fellows in advancing AI safety research.

Fred is a Research Manager at MATS London, where he is excited to support fellows with their short-term projects and longer-term professional development. In his ten-year actuarial career, he built statistical risk frameworks, communicated technical analysis to busy executives, and scaled up an insurance pricing team to quantify billions of pounds of liabilities every month. Fred studied Materials Science at Oxford University (2010-2014), where he researched techniques to improve the quality of silicon wafers.

Operations Team

Jim Chapman leads the MATS Operations Team, which builds and maintains the organizational infrastructure that enables MATS to effectively support the needs of its AI researchers. He brings extensive senior management and operations leadership experience from roles in the nonprofit, academic, and consulting sectors. Jim's interest in AI, rationality, and effective altruism began through conversations with his son—a curiosity that deepened into concern and ultimately a personal commitment to ensuring AI benefits humanity. Joining MATS is his way of putting that commitment into action.

Kali is an Operations Generalist at MATS, where she closely interfaces with scholars, responds to emergent housing needs, and provides operational support. She holds a BA in political science from Columbia University, where she was president of Columbia EA and facilitated AI Governance fellowships for the Columbia AI Alignment Club.

Raj is an Operations Generalist at MATS. He has extensive operations experience from contracting for effective altruist organizations in the Bay. His roles ranged from furnishing a 40-bedroom dormitory from scratch to handling logistics for summits and research training camps. Prior to moving to the Bay, he studied at Waseda University in Japan.

Joseph works as an Operations Generalist at MATS, supporting the organization with project management and process optimization. Before joining MATS he worked in operations for EA organizations, for cross-cultural non-profits, and for education-focused startups. He studied Chinese language and culture as an undergrad, spent a decade in Beijing, and later earned a master's degree in management. He is also a certified project manager through the Project Management Institute. Outside of work Joseph runs several book clubs, is a trained yoga instructor, and loves eating Sichuan dry pot.

Eric is an Operations Generalist at MATS. Prior to joining MATS, Eric worked at public media nonprofits PRX and KQED, where he managed events, workshops, and community programs for podcasters, journalists, and multimedia storytellers. Eric is passionate about building human-centered workflows and processes that foster collaboration and community.

Jarrett is an Operations Generalist at MATS, where he supports scholars, staff, and mentors with day-to-day operational needs, systems building, and workflow optimization. He studied engineering and entrepreneurship at Dartmouth and worked at several medical device and biotech startups before pivoting to focus on supporting AI Safety research at MATS.

Board of Directors

Josh Jacobson is the Chief Operating Officer at the Future of Life Foundation. He has worked across organizational leadership, operations, and research for METR, FAR AI, Anthropic, on a research grant, for the Future of Humanity Institute, EpiFor, the Berkeley Existential Risk Initiative, the World Bank, SocialCops, Innovations for Poverty Action, the Centre for Effective Altruism, a tech startup, and PepsiCo. Josh holds an MPA with a focus in Econometrics in International Development from Columbia University's School of International and Public Affairs and a Bachelor's in International Relations and Government from Dartmouth College.

Abi Olvera is Research Director at the Golden Gate Institute for AI, founder of DC Abundance, and an Emergent Ventures grantee. A former U.S. diplomat with over a decade of service, she focuses on the intersection of governance and technology. She serves as the Bulletin of the Atomic Scientists’ disruptive technologies editorial fellow.

Michael is the AI Program Director at Longview Philanthropy. Before joining Longview, he was an associate director at the RAND Center on AI, Security, and Technology. He joined its predecessor in its early months and helped the center grow to 100+ people. He is also an affiliate at the Centre for the Governance of AI, an advisor and cofounder of the Institute for AI Policy and Strategy, and a board member at the MATS Program. He was previously a Research Scholar at the University of Oxford’s Future of Humanity Institute.
