
Independent researcher
Over the past four years, Steven has led a range of safety- and governance-related work at OpenAI, most recently leading their research into personhood credentials: privacy-preserving ways to tell who is real online. Previously he led his team's evaluations for dangerous capabilities and for AI systems' ability to do useful AI R&D (many of his team's evals are open-sourced here: https://github.com/openai/evals/tree/main/evals/elsuite). Steven is interested in research on both applied AI systems (e.g., drawing on his experience of what it takes to do useful monitoring in practice) and more conceptual topics (e.g., considering which principles of agent safety matter as the world progresses toward AGI).
These days, his research interests broadly focus on three questions: "What abilities/propensities might be dangerous for an AI system to have?", "How can we tell whether AI systems have them?", and "What mitigations/governance mechanisms might be effective for reducing these risks while preserving the upsides?" In particular, Steven is interested in exploring novel technical governance mechanisms that might allow governments or coalitions of companies to ensure that all labs meet a high safety standard, for instance by standardizing chain-of-thought monitoring as a protocol.
Before joining OpenAI, Steven was the Chief of Staff and a research/programs manager at the Partnership on AI. Once upon a time, he was also a management consultant. Steven has an MS in Computer Science (Machine Learning focus) from Georgia Tech and an AB in Economics from Brown.