This track is for prospective founders, field-builders, high-agency generalists, and general managers who want to launch new AI safety initiatives. Over 10 weeks (with a potential 6-12 month extension), fellows build and launch something new with weekly guidance from mentors who have done it themselves: founders, field-builders, sitting CEOs, and program directors.
We think AI safety needs to scale fast to ensure AI development and deployment go well. At MATS, we are proud to have trained 10 cohorts of top researchers, but high-impact organizations are increasingly bottlenecked by management and scaling. There are more promising ideas than there are “general managers” to champion them, and available funding and capital are outpacing the capacity to deploy them. We are launching the MATS Founding & Field-Building track to help solve these bottlenecks and scale AI safety.
We want fellows who are driven, resourceful, and principled; who are motivated to do work that matters even when no one is looking. There are three main pathways in this track:
If you see yourself in any of these roles, we want you to apply!
Potential project inspiration from: Coefficient Giving 2025 Technical AI Safety RFP, Lizka Vaintrob & Owen Cotton-Barratt, Eric Ho, Marius Hobbhahn, Julian Hazell, Abbey Chaver, and Asya Bergal.
Founding ambitious AI safety and field-building projects.
Minimum support: two 30-minute meetings per week. We can scale this up as appropriate.
I'll be based in SF. If fellows want to work from BlueDot's office for periods of time, I could collaborate with them daily.
I'm available for quick calls anytime, and am responsive on Slack.
I'm open to people from a wide range of backgrounds, though you need to be willing to work very hard, be a great communicator, and have a burning desire to make AI go well.
I work best with people who are intense, communicate and reason clearly, and are mission-driven.
We'll work together to design the project. You'll have a lot of freedom to figure out the best shape for the project, and I'll provide lots of regular feedback and make relevant introductions to help you refine the proposal.
Your first 1-2 weeks will be focused on figuring out what to do, and the rest of the fellowship will be focused on execution.
Backing projects focused on product development and organization building in the areas of AI safety and alignment, biosecurity, and critical cybersecurity. Looking for fellows who are self-starters, default to action, and have a desire to create.
Scheduled 45-minute meetings every other week, with ad hoc meetings available between scheduled sessions. We’ll have a shared Slack channel with all three partners (Mike, Nick, and Charlie) as well as the supporting team at Halcyon. Ping us anytime.
For product development and organization building projects: strong technical hard skills (e.g., MLE, SWE, math) and domain expertise in AI safety and alignment, biosecurity, or cybersecurity. Experience building or working on a usable product. Openness to iterating, pivoting, and failure. A general understanding of business and organizational principles. Considerate and works well in team settings.
For generalist projects: 5+ years of relevant experience in recruiting, talent/HR, executive search, or general business operations with a strong people component. Familiarity with the broader startup and early-stage ecosystem. A major plus would be familiarity with at least one of Halcyon's focus sectors (AI safety / frontier AI, biosecurity, cybersecurity): enough to read profiles credibly and hold substantive conversations with portfolio companies. Strong organizational instincts. Excellent written and verbal communication. Experience working with CRM technologies as well as basic AI tools (Claude Cowork or similar).
For product development and organization building projects in AI safety and alignment, biosecurity, and critical cybersecurity, fellows will have full freedom. We expect fellows to come with rough ideas and opinions on direction that will inform where they start exploring the market. We don’t expect refined ideas or pitches. We do expect building.
For field-building and generalist fellows, we are prioritizing a talent matching project. This involves processing thousands of individuals in our CRM and identifying how they might pair with our portfolio companies and other high-priority areas in our network.
Eleos AI Research is a nonprofit working to ensure the interests of AI systems are appropriately taken into account as we navigate transformative AI. We are looking for competent generalists to help scale up our AI welfare field-building by running high-impact events, or to otherwise amplify our research, communications, and operational capacity.
By default, I expect to meet with each fellow for at least 60 minutes per week (possibly split into two 30-minute meetings). For projects that require a lot of input (e.g., event organizing), we can scale that up as needed. I’ll also be available for ad hoc meetings, and can be reached asynchronously on Slack or by email. At Eleos, we each post a daily update in Slack; I would like fellows to do the same.
The ideal candidate:
We'll meet at the start of the program to discuss ideas in the areas listed above, as well as ideas that fellows would like to pitch. We’ll jointly decide on a project that aligns with Eleos’s priorities as well as the goals and skills of the fellow.
Forecasting, real-world applications of world modeling with LLMs, formal and semiformal verification, capability evals design, coordination mechanisms, collective intelligence applications, or gradual disempowerment.
A weekly call of 30 minutes to 2 hours, depending on need; I can meet in person if you're in the Bay Area or London. Very responsive over Slack.
Interest in moving from theory to applied work, especially on coordination, negotiation, and forecasting. Projects with real-world applications that could draw on resources from Metaculus and/or the AI Objectives Institute would be a great fit. Knowledge of formal verification and specification would be a great match but is not required. I am not personally interested in pursuing paper publications, but if the fellow is, I am happy to support them.
What matters most to me is that the fellow is passionate and excited about the project direction, and that I believe I am the right mentor to support them with my knowledge base.
The MATS Program is a 10-week research fellowship designed to train and support emerging researchers working on AI alignment, transparency, and security. Fellows collaborate with world-class mentors, receive dedicated research management support, and join a vibrant community in Berkeley focused on advancing safe and reliable AI. The program provides the structure, resources, and mentorship needed to produce impactful research and launch long-term careers in AI safety.
MATS mentors are leading researchers from a broad range of AI safety, alignment, governance, field-building and security domains. They include academics, industry researchers, and independent experts who guide scholars through research projects, provide feedback, and help shape each scholar’s growth as a researcher. The mentors represent expertise in areas such as:
Key dates
Application:
The main program will then run from September 28th to December 4th, with the extension phase for accepted fellows beginning in December.
MATS accepts applicants from diverse academic and professional backgrounds: from machine learning, mathematics, and computer science to policy, economics, physics, cognitive science, biology, and public health, as well as founders, operators, and field-builders without traditional research backgrounds. The primary requirements are strong motivation to contribute to AI safety and evidence of technical aptitude, research potential, or relevant operational experience. Prior AI safety experience is helpful but not required.
Applicants submit a general application, indicating the tracks they are applying to (Empirical, Theory, Strategy & Forecasting, Policy & Governance, Systems Security, Biosecurity, and Founding & Field-Building).
In Stage 2, applicants apply to streams within those tracks and complete track-specific evaluations.
After a centralized review period, applicants who advance will undergo additional evaluations, depending on the preferences of the streams they've applied to, before final interviews and offers.
For more information on how to get into MATS, please see this page.