There are several clusters of work that scholars could collaborate with us on.
Fynn Heide is the Executive Director of the Safe AI Forum. He studied at the University of Warwick and has done research on China and AI governance.
Isabella Duan is an AI policy researcher at the Safe AI Forum. She studied at University College London and the University of Chicago, and has previously done research on existential risk and AI governance.
One-hour weekly meetings by default for high-level guidance. We are active on Slack and typically respond to quick questions within a day.
https://theoptionspace.substack.com/p/could-if-then-commitments-help-the
https://theoptionspace.substack.com/p/when-have-warning-shots-led-to-international
https://saif.org/wp-content/uploads/2025/04/Bare-Minimum-Mitigations-for-Autonomous-AI-Development.pdf
https://saif.org/wp-content/uploads/2025/04/GPG.pdf
Good understanding of international AI governance developments relevant to frontier AI safety (e.g., the Summit series, the AISI network)
Good understanding of Chinese AI governance and safety (key players, key trends, and institutional structures)
Good understanding of key frontier risk domains (CBRN, cyber, loss of control)
Some understanding of broader US-China relations and how they shape the two countries' engagement on AI/AGI specifically
Scholars will by default collaborate with someone from the Safe AI Forum, though the exact person will depend on the project they pick. They are also welcome to work with other collaborators (we are happy to suggest names, or scholars can find their own).
In Week 1, we will provide a shortlist of projects that we are keen for scholars to work on and ask them to scope these options; scholars will then decide which project to focus on in Week 2.