Matthew Gentzel

Escalation risks from state perceptions of AI capability, AI-enabled targeting, AI-enabled decision manipulation, and the impact of AI integration into nuclear command and control. 

Stream overview

Projects related to: 

  • Preventing or mitigating large-scale nuclear risks
  • AI-related military and nuclear topics: strategic stability and targeting, non-nuclear strategic weapons, escalation driven by state perceptions of impending AGI, arms control and de-escalatory AI applications, and AI-enabled deception and the fog of war
  • Incentive design for regulatory efforts
  • Information competition, credibility, and searching for blind spots in X-risk mitigation

Mentors

Matthew Gentzel
Nuclear Weapons Policy Program Officer, Longview Philanthropy
Washington, D.C.
Policy & Governance, Strategy & Forecasting

Matthew Gentzel is a Nuclear Weapons Policy Program Officer at Longview Philanthropy where he works on grantmaking and priorities research related to reducing the risk of large-scale nuclear war. Roughly half of his grantmaking budget concentrates on AI and emerging tech-related nuclear risk issues, where he investigates risks and opportunities related to AI-enabled targeting, information manipulation, and how perceptions of future AI capability impact escalation control in the near-term.

His prior work spanned emerging technology threat and policy assessment, with a particular focus on how advancements in AI may shape the future of influence operations, nuclear strategy, and cyber attacks. He has worked as a policy researcher with OpenAI, as an analyst in the US Department of Defense’s Innovation Steering Group, and as a director of research and analysis at the US National Security Commission on Artificial Intelligence. 

Mr. Gentzel holds an MA in strategic studies and international economics from the Johns Hopkins School of Advanced International Studies and a BS in fire protection engineering from the University of Maryland, College Park.

Mentorship style

Mentorship will mostly consist of calls to sort through research ideas and provide feedback. I'm happy to review papers and, depending on timing, potentially to meet in person.

Scholars we are looking for

I'm looking for intellectually curious and honest scholars with some background in national security, game theory, or AI-enabled military and influence capabilities.

The ability to independently find collaborators is a plus, but not required.

Project selection

I'll talk through project ideas with scholars, or scholars can pick from a list of projects.

Community at MATS

MATS Research phase provides scholars with a community of peers.

During the Research phase, scholars work out of a shared office, have shared housing, and are supported by a full-time Community Manager.

Working in a community of independent researchers gives scholars easy access to future collaborators, a deeper understanding of other alignment agendas, and a social network in the alignment community.

Previous MATS cohorts included regular lightning talks, scholar-led study groups on mechanistic interpretability and linear algebra, and hackathons. Other impromptu office events included group-jailbreaking Bing Chat and exchanging hundreds of anonymous compliment notes. Scholars also organized social activities outside of work, including road trips to Yosemite, visits to San Francisco, and ACX meetups.