Halcyon

Backing projects focused on product development and organization building in the areas of AI safety and alignment, biosecurity, and critical cybersecurity. Looking for fellows who are self-starters, default to action, and have a desire to create.

Stream overview

Product development and organization building projects in the areas of AI safety and alignment, biosecurity, and critical cybersecurity. Some examples and references are below:

AI safety and alignment: Technologies for interpretability, scalable human oversight, and AI control and governance.

Mechanistic interpretability

Scalable oversight

AI control

RL and training for safety

Open-weight model safety

AI consciousness and moral status

Biosecurity & Pandemic Prevention: Technology to ensure AI models cannot be used to design dangerous biological agents, plus genetic material screening, early detection systems, and better PPE.

Metagenomic surveillance

DNA synthesis screening

Broad-spectrum medical countermeasures

AI model biosafety evaluations

Environmental deactivation (Far-UVC, filtration)

DIY bio-hardening

Critical AI Cybersecurity: Solutions to novel AI cyberattacks, and approaches to securing critical infrastructure and frontier models.

AI-native cyber defense

Critical infrastructure hardening

AI agent and multi-agent security

Hyper-secure AI compute

Securing AI model weights

Open source intelligence and monitoring of large compute clusters

Additionally, we are considering projects that amplify Halcyon’s efforts in field building:

Field-Building & Infrastructure: Talent pipelines into AI safety, recruiting infrastructure, and public communications.

Talent pipelines into AI safety

Recruiting infrastructure

Social listening for network building

Mentors

Mike McCormick
CEO, Halcyon Futures
SF Bay Area

Mike has spent his career in startups and venture capital. Prior to founding Halcyon, he was a partner at GPV, a VC firm managing more than $1 billion in capital. He was an early investor in several unicorn companies.

Since 2022, Mike has been focused on grants, investments and incubating new projects in AI security and global resilience.

Nick Raushenbush
Operating Partner, Halcyon Futures
SF Bay Area

Operating Partner at Halcyon. Co-founded and scaled a software company, Shogun, to Series C: $115M raised at a $575M valuation (Y Combinator, Initialized, Accel, Insight), with over 20,000 customers and 200 team members. Investor in over 50 startups and early-stage funds.


Co-mentorship from Halcyon Partners Mike McCormick (Founder, CEO), Charlie Petty (Venture Partner), and Nick Raushenbush (Operating Partner). Nick will be the lead point of contact.

Mentorship style

Scheduled 45-minute meetings every other week, with ad hoc meetings added between sessions as needed. We'll have a shared Slack channel with all three partners (Mike, Nick, and Charlie) as well as the supporting team at Halcyon. Ping us anytime.

Fellows we are looking for

For product development and organization building projects: strong technical hard skills (e.g., MLE, SWE, math) and domain expertise in AI safety and alignment, biosecurity, or cybersecurity. Experience building or working on a usable product. Openness to iterating, pivoting, and failure. A general understanding of business and organizational principles. Considerate, and works well in team settings.

For generalist projects: 5+ years of relevant experience in recruiting, talent/HR, executive search, or general business operations with a strong people component. Familiarity with the broader startup / early-stage ecosystem. A major plus would be familiarity with at least one of Halcyon's focus sectors (AI safety / frontier AI, biosecurity, cybersecurity): enough to read profiles credibly and have substantive conversations with portfolio companies. Strong organizational instincts. Excellent written and verbal communication. Experience working with CRM technologies as well as basic AI tools (Claude Cowork or similar).

Project selection

For product development and organization building projects in the areas of AI safety and alignment, biosecurity, and critical cybersecurity, fellows will have full freedom. We expect fellows to come with rough ideas and opinions on direction that will inform where they start exploring the market. We don't expect refined ideas or pitches. We do expect building.

For field building and generalist fellows, we are prioritizing a talent matching project. This involves processing thousands of individuals in our CRM and identifying how they might pair with our portfolio companies and other high-priority areas of our network.