Safe AI Forum

International coordination to reduce frontier AI risks, with a focus on China and the West. 

Stream overview

Scholars could work with us on several clusters of work.

  1. US-China interstate coordination on frontier AI safety. Projects in this vein could include (a) researching and developing specific proposals for international agreements that could reduce frontier risks, (b) research that advances understanding of how advanced AI and AGI will affect geopolitics, and (c) research that increases the likelihood of successful coordination (e.g., verification of AI agreements).

  2. Collaborative frontier AI safety and governance research projects with international experts. These include topics such as frontier AI regulation and loss of control threat modelling. This would likely involve joining an ongoing project and contributing to it rather than starting an independent project.

  3. Horizon scanning and identifying other possible topics for coordination between China and the West.

Mentors

Fynn Heide
Executive Director, Safe AI Forum
SF Bay Area
Policy & Governance

Fynn Heide is the Executive Director of the Safe AI Forum. He studied at the University of Warwick and has done research on China and AI governance.

Isabella Duan
Senior AI Policy Researcher, Safe AI Forum
SF Bay Area
Policy & Governance

Isabella Duan is a Senior AI Policy Researcher at the Safe AI Forum. She studied at University College London and the University of Chicago, and has previously done research on existential risk and AI governance.

Saad Siddiqui
Senior AI Policy Researcher, Safe AI Forum
London
Policy & Governance

Mentorship style

One-hour weekly meetings by default for high-level guidance. We are active on Slack and typically respond to quick questions within a day.

Representative papers

  1. US-China interstate coordination on frontier AI safety.

https://theoptionspace.substack.com/p/could-if-then-commitments-help-the

https://theoptionspace.substack.com/p/when-have-warning-shots-led-to-international

  2. Collaborative frontier AI safety and governance research projects with international experts.

https://saif.org/wp-content/uploads/2025/04/Bare-Minimum-Mitigations-for-Autonomous-AI-Development.pdf

  3. Horizon scanning and identifying other possible topics for coordination between China and the West.

https://saif.org/wp-content/uploads/2025/04/GPG.pdf

Scholars we are looking for

Good understanding of international AI governance developments that are relevant to frontier AI safety (e.g., the Summit series, AISI network)

Good understanding of Chinese AI governance and safety (key players, key trends, and institutional structures)

Good understanding of key frontier risk domains (CBRN, cyber, loss of control) 

Some understanding of broader US-China relations and how they shape US-China engagement on AI/AGI specifically

Scholars will by default collaborate with someone from the Safe AI Forum, though the exact person will vary based on the project they pick. They are also welcome to work with other collaborators (we are happy to suggest names, or scholars can find their own).

Project selection

In Week 1, we will provide a shortlist of projects that we are keen for the scholar to work on and ask them to scope these; in Week 2, scholars will decide which project to focus on.