
Megan Kinniment
Model Evals
Megan currently works on the evaluations project at the Alignment Research Center (ARC). She is interested in model evaluation, testing, and tooling. Megan previously worked on independent projects, including ML interpretability, with a grant from the Center on Long-Term Risk.
The Summer 2022 cohort was MATS's first full-scale program, with 31 scholars and 7 mentors from leading AI safety organizations, including OpenAI, ARC, MIRI, EleutherAI, and Aligned AI. The program ran for 5 weeks online, followed by 8 weeks in person in Berkeley, where scholars conducted independent research under expert mentorship, attended educational seminars, and built community with peers in the Berkeley AI safety ecosystem.