Jesse Hoogland

Timaeus

Jesse Hoogland is the executive director of Timaeus, an AI safety research organization studying developmental interpretability and singular learning theory. He was a MATS scholar in Evan Hubinger's Deceptive AI stream during MATS 3.0 and 3.1, where he became interested in understanding how AI systems develop over the course of training. This interest led him to help organize the SLT and Alignment conference and the DevInterp conference, which gave rise to the developmental interpretability research agenda.

The Winter 2022-23 cohort supported 58 scholars with 17 mentors, including researchers from Anthropic, MIRI, ARC, Redwood Research, and other leading organizations. This cohort introduced the Scholar Support team, which provided research coaching and unblocking assistance to scholars throughout the program. The program ran for 6 weeks online followed by 2 months in person in Berkeley, and featured scholar-led activities including study groups on mechanistic interpretability and linear algebra, weekly lightning talks, and workshops on research tools and technical writing.

Notable alumni from this cohort include Marius Hobbhahn, who co-founded Apollo Research and published work on mechanistic interpretability; Asa Cooper Stickland, who co-authored papers on measuring situational awareness and the "reversal curse" in large language models; and Jesse Hoogland, who founded Timaeus and developed the developmental interpretability research agenda.

There's life pre-MATS and life post-MATS. It was the inflection point that set me up to become a technical AI safety researcher. I don't think there are other opportunities as good at getting early-career people integrated into AI safety. The in-person program was the most impactful and high-energy two months I've ever been a part of, and it's my number one recommendation to people considering work on AI safety.

Jesse Hoogland