This stream will work on projects that empirically assess national security threats from AI misuse (CBRN terrorism and cyberattacks) and improve dangerous capability evaluations. Threat modeling applicants should have a skeptical mindset, enjoy case study work, and be strong written communicators. Eval applicants should be able and excited to help demonstrate concepts like sandbagging and elicitation gaps in an AI misuse context.
This stream is primarily interested in mentoring biosecurity projects that either (1) create rigorous threat models of AI-enabled biological misuse or (2) create benchmarks and tools that allow us to evaluate and mitigate these risks, as well as verify that companies are taking suitable precautions.
Potential example projects include:
Seth is a research scientist and the Director of AI at SecureBio, where he leads work on how advances in AI are changing biology and on how to measure and mitigate biological misuse risks. His research concerns AI capability evaluation in biology and safe uses of AI at the biotechnology frontier.
n/a
Typically, this would include weekly meetings, detailed comments on drafts, and asynchronous messaging.
For threat modeling work: Skeptical mindset, transparent reasoning, analytical rigor
For evaluations, mitigations, and verification work: LLM engineering skills (e.g., agent orchestration), biosecurity knowledge
Mentorship will be a collaboration between me and my team at GovAI. The specifics depend on the candidate and project.
I am based in Berkeley, and some of my team is based in London.
Mentor(s) will talk through project ideas with the scholar.