Designing New Forms Of Collective Intelligence
Small changes in how groups coordinate create dramatically different outcomes. We're exploring the vast design space of possible coordination mechanisms.
Research
Our thinking on collective intelligence
Who This Matters To
Policymakers
We can help you see and quantify the potential impact of your governance proposals on large populations before implementation
Example
Semiconductor supply chain policies often create hidden vulnerabilities when critical nodes concentrate in adversarial regions. We're building models to reveal which interventions actually increase resilience versus which just look good on paper.
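As a toy illustration of this kind of analysis (the product names, dependencies, and required-inputs model below are all hypothetical, not real supply chain data), a supply chain can be sketched as a dependency graph and scanned for single points of failure:

```python
# Toy supply chain: each product maps to the inputs it requires.
# All names and dependencies are hypothetical illustrations.
suppliers = {
    "chip": ["fab"],
    "fab": ["lithography", "wafers"],
    "lithography": [],
    "wafers": ["polysilicon"],
    "polysilicon": [],
}

def producible(product, graph, removed=frozenset()):
    """A product is producible if it isn't removed and all its inputs are producible."""
    if product in removed:
        return False
    return all(producible(s, graph, removed) for s in graph[product])

def critical_nodes(product, graph):
    """Single points of failure: nodes whose removal halts production of `product`."""
    return [n for n in graph
            if n != product and not producible(product, graph, frozenset({n}))]

print(critical_nodes("chip", suppliers))  # every upstream node in this chain is critical
```

In this toy model every input is required, so each upstream node is a hidden single point of failure; an intervention that "looks good on paper" but leaves these nodes concentrated in one region would not change the output, while adding a redundant supplier would.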
AI Safety Researchers
We can help you evaluate emergent risks and assess the safety of multi-agent AI systems
Example
We're building mathematical frameworks to predict where multi-agent AI systems break down and to identify the control levers that prevent those breakdowns. Our approach combines phase transition analysis, which maps the critical points where collective AI behavior shifts from beneficial to harmful, with top-down control theory that reveals which interventions can steer these systems toward pro-social outcomes before problems emerge.
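A minimal sketch of what a phase transition looks like in this setting, using a standard mean-field model of noisy majority dynamics (a textbook toy, not our actual frameworks): each round, an agent adopts the majority opinion of three random peers with probability 1 − noise, and a uniformly random opinion otherwise.

```python
# Mean-field noisy majority dynamics. p = fraction of agents holding opinion A.
# An agent copies the majority of 3 random peers with prob. 1 - noise,
# and picks an opinion uniformly at random with prob. noise.

def step(p, noise):
    majority = 3 * p**2 - 2 * p**3          # P(majority of 3 sampled peers holds A)
    return (1 - noise) * majority + noise / 2

def long_run_share(noise, p=0.6, steps=1000):
    """Iterate the mean-field map to its long-run value."""
    for _ in range(steps):
        p = step(p, noise)
    return p

for noise in (0.1, 0.2, 0.3, 0.4):
    print(f"noise={noise:.1f} -> long-run share {long_run_share(noise):.3f}")
```

Linearizing the map at p = 1/2 gives slope (1 − noise) · 3/2, so this toy has a critical point at noise = 1/3: below it the even split is unstable and the population locks into a polarized consensus; above it, collective opinion collapses back to 50/50. Locating such critical points, and the levers that move them, is the kind of question the frameworks target.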
AI Labs
We can help you predict emergent behaviors before expensive deployment
Example
AI systems built by many organizations will eventually need to coordinate. Should they trade information like a market, vote like a democracy, or use some other coordination mechanism? We're designing simulations that show which approach will work best before you spend millions building systems that end up in coordination failure.
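To illustrate the idea on a tiny scale (all parameters are made up, and these are generic textbook mechanisms rather than our simulations), here is a Monte Carlo comparison of two aggregation rules over the same noisy private signals: unweighted majority voting versus a market-like, precision-weighted aggregate in which more confident agents effectively stake more.

```python
import random
import statistics

random.seed(0)  # deterministic run for illustration

def run_trial(n_agents=25):
    """One round: agents receive noisy private signals about a hidden truth."""
    truth = random.choice([-1.0, 1.0])
    sigmas = [random.uniform(0.5, 3.0) for _ in range(n_agents)]  # per-agent noise
    signals = [truth + random.gauss(0, s) for s in sigmas]

    # Mechanism 1: one agent, one vote on the sign of its signal.
    vote = 1.0 if sum(1 for x in signals if x > 0) > n_agents / 2 else -1.0
    # Mechanism 2 (market-like): weight each signal by its precision (1 / variance).
    weighted = sum(x / s**2 for x, s in zip(signals, sigmas))
    market = 1.0 if weighted > 0 else -1.0
    return vote == truth, market == truth

trials = [run_trial() for _ in range(2000)]
vote_acc = statistics.mean(v for v, _ in trials)
market_acc = statistics.mean(m for _, m in trials)
print(f"voting: {vote_acc:.3f}  market-like: {market_acc:.3f}")
```

Even a toy like this makes the design question concrete: the two mechanisms aggregate identical information yet can differ in accuracy, and richer simulations can rank candidate mechanisms under realistic assumptions before anything is built.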