
Nora Ammann

In this stream, I’m interested in developing concrete, actionable R&D agendas for post-AGI institutions and AI resilience. For post-AGI institutions, I’m especially interested in what infrastructure would be needed to make super-cooperative AGI or “Coasean bargaining at scale” possible. For AI resilience, I’m interested in follow-on work to airesilience.net that moves from high-level motivation to detailed proposals.

Stream overview

I’m open to fellows proposing concrete projects within either of two broad directions. I am especially interested in projects that turn high-level theses into tractable R&D agendas, prototypes, institutions, or field-building bets.

Direction 1: Cooperative AGI

The central question is: what would it take for humans and advanced AI systems to form stable, Pareto-improving coalitions, rather than ending up in unilateral domination, competitive erosion, or epistemic collapse?

AGI could radically lower the transaction costs of cooperation: better world models, better bargaining, better contract execution, better collective reflection. But it could also raise them: scalable manipulation, epistemic flooding, eroded ground truth, and AI systems whose moral reasoning is unstable, shallow, or strategically distorted.
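
To make the transaction-cost framing concrete, here is a toy sketch (my illustration, not part of any proposed agenda): on the Coasean view, a mutually beneficial deal is struck only if the surplus it creates exceeds the cost of negotiating and enforcing it, so anything that shrinks that cost enlarges the set of feasible coalitions, and anything that inflates it shrinks that set.

```python
# Toy illustration of the Coasean intuition (hypothetical numbers throughout):
# a Pareto-improving deal happens iff its surplus exceeds the transaction
# cost of negotiating and enforcing it.

def deal_happens(value_to_buyer: float, cost_to_seller: float,
                 transaction_cost: float) -> bool:
    """A mutually beneficial deal survives iff the surplus it creates
    exceeds the cost of reaching and enforcing the agreement."""
    surplus = value_to_buyer - cost_to_seller
    return surplus > transaction_cost

# The same potential deal (surplus = 10) under two institutional regimes
# with different transaction costs:
for tc in (2.0, 15.0):
    print(f"transaction cost {tc}: deal happens = {deal_happens(30, 20, tc)}")
```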

I am especially (but not exclusively) interested in working on:

  • Moral reasoning for advanced AI: what factors, beyond model training alone, affect how an advanced AI system reasons about moral and strategic questions, and whether and how that reasoning converges or remains unstable? Potential factors include the system’s beliefs about the world and itself, its material conditions and affordances for interacting with the world and other agents, and the socio-economic, epistemic, and institutional pressures acting on it.
  • Structured transparency and trust infrastructure: operationalising a concrete R&D effort for technologies and institutions that let actors share, verify, and act on sensitive information without creating unacceptable privacy and misuse risks (see e.g. https://www.governance.ai/research-paper/beyond-privacy-trade-offs-with-structured-transparency); a toy sketch of one such primitive follows this list.
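
To give a flavour of the primitives such an R&D effort might build on, below is a minimal commit-reveal sketch in Python (my illustration; the linked paper covers far richer mechanisms, e.g. zero-knowledge proofs and secure enclaves). The function names and the example claim are hypothetical. The idea: an actor binds itself to sensitive information now, and a verifier can later check a disclosure against that commitment without the information having been exposed up front.

```python
# Minimal sketch of one structured-transparency building block: a hash
# commitment. Names and the example claim are illustrative; real systems
# would use richer tools (ZK proofs, secure enclaves, trusted auditors).
import hashlib
import secrets

def commit(claim: bytes) -> tuple[bytes, bytes]:
    """Publish a binding commitment to a claim without revealing it."""
    nonce = secrets.token_bytes(16)            # blinds the claim against guessing
    digest = hashlib.sha256(nonce + claim).digest()
    return digest, nonce                       # digest is public, nonce stays private

def verify(digest: bytes, nonce: bytes, claim: bytes) -> bool:
    """Later, check a revealed claim against the earlier commitment."""
    return hashlib.sha256(nonce + claim).digest() == digest

digest, nonce = commit(b"model X passed eval suite Y")
assert verify(digest, nonce, b"model X passed eval suite Y")      # honest reveal
assert not verify(digest, nonce, b"model X passed eval suite Z")  # tampered claim
```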

Direction 2: AI Resilience

Follow-on work to https://airesilience.net/, building civilisation’s ability to withstand, adapt to, and recover from AI-driven shocks. This could include:

  • catalysing progress on a specific, high-impact resilience capability,
  • extending or refining a specific part of the resilience tech tree, 
  • strengthening the AI resilience research & builder community through an opinionated field-building bet.

Mentorship style

60-minute weekly 1:1s + async written exchange (drafts, Slack, etc.), with ad-hoc additional meetings as needed. The fellow is responsible for driving the project; my role is high-level steering, strategic input, and serving as a sounding board.

Fellows we are looking for

I am open to working with builder/entrepreneur, strategist, or researcher types.

Essential:

  • Drive and self-direction. You can take an under-specified question and run with it, with high-level steering rather than detailed supervision.
  • Demonstrated ability to make real progress on hard, ill-defined problems (a substantial self-directed project, agenda, report, first-author paper, or built-and-shipped thing).
  • Strong written reasoning.
  • Background relevant to one or both directions: e.g. political/institutional economics, game theory/mechanism design, various areas of engineering, security, or operational/field-building work.

Not a good fit:

  • Fellows looking for a tightly scoped technical ML project.
  • Those who want close hands-on supervision; you need to drive your own work between check-ins.

Project selection

Fellows are welcome to propose one or several concrete projects within the above directions. We will work together to refine the project, pressure-test its theory of change, and scope it into something tractable. The goal is to have converged on a concrete project within the first two weeks.