Thomas Larsen

AI Futures Project

Thomas took part in the Summer 2022 Cohort with John Wentworth and the Winter 2023 Cohort with Nate Soares. During this time, he wrote a detailed overview of AI safety approaches. He continued his SERI MATS work at MIRI before leaving to found the Center for AI Policy, an AI safety advocacy organization. He is currently a researcher at the AI Futures Project and a guest fund manager at the LTFF.

The Summer 2022 Cohort was MATS's first full-scale program, with 31 scholars and 7 mentors from leading AI safety organizations including OpenAI, ARC, MIRI, EleutherAI, and Aligned AI. The program ran for 5 weeks online, followed by 8 weeks in person in Berkeley, where scholars conducted independent research under expert mentorship, participated in educational seminars, and built community with peers in the Berkeley AI safety ecosystem.

MATS helped me upskill in alignment at a >3x rate relative to the counterfactual, which was independently learning infra-Bayesianism because I liked math and didn't have an inside view on which parts of alignment were important. MATS caused me to develop a much deeper view of the alignment problem, and afterwards I felt able to focus on the most important parts of the problem and the biggest sources of confusion within myself.

Thomas Larsen