Roadmap

Our path from understanding coordination failures to building self-sustaining infrastructure.

This roadmap traces our path from understanding coordination failures to building self-sustaining infrastructure for testing and designing better coordination mechanisms. Specifications become less detailed in later phases because we learn as we go: the work is iterative, and testing and deployment may surface insights that require revisiting earlier frameworks. We start with foundations that establish the right frame of mind. The next phase develops core infrastructure to prove the approach works, on the principle that deep competence in one domain beats shallow coverage across many. The validation and scaling phase bridges the theory-practice gap while building a large dataset on collective intelligence safety, which we then use to generate novel forms of collective intelligence. Each phase delivers immediate practical value while building the capabilities for the next.

David Hume

Scottish Enlightenment philosopher who pioneered empirical approaches to understanding human nature, causation, and the foundations of knowledge.

Phase 1: Hume

Foundation: Understand Multi-Scale Coordination

The first phase focuses on exploration: What exactly are these multi-scale collective intelligence problems? How should we even think about them? What frameworks from other fields (economics, political science, complexity science) might help? The goal is exploratory research that finds the right frames and maps the problem space.

This foundational phase explores three interconnected research areas. Mathematical Language develops the formal framework for reasoning about coordination systems—treating markets, networks, and democracies as operations on the same underlying structure. Taxonomy of Agents classifies what types of agents exist and how agency emerges at different scales. Problem Prioritization maps the landscape of coordination failures and identifies which problems are most urgent as AI systems become more capable. Together, these areas provide the conceptual foundation needed to rigorously study collective intelligence.

Mathematical Language for Coordination

A formal framework for representing markets, networks, and democracies as operations on the same underlying structure. We seek a unified language where coordination mechanisms can be composed and analyzed systematically.
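To make the graph framing concrete, here is a minimal sketch, assuming coordination structures are modeled as weighted graphs and using the algebraic connectivity (Fiedler value), a standard spectral graph quantity, as one example of a measure that applies uniformly across mechanisms. The function names and the toy "mechanisms" are illustrative, not part of the project's actual framework.

```python
# A minimal sketch: coordination mechanisms as graphs, analyzed with one
# shared spectral tool. Names and examples are illustrative assumptions.
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A of a symmetric adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

def algebraic_connectivity(adj):
    """Second-smallest Laplacian eigenvalue (Fiedler value): a standard
    spectral measure of how well-connected a structure is."""
    eigvals = np.linalg.eigvalsh(laplacian(adj))  # sorted ascending
    return eigvals[1]

# Two toy coordination structures over four agents:
# a hub-and-spoke intermediary vs. a fully connected peer network.
hub = np.array([[0, 1, 1, 1],
                [1, 0, 0, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 0]], dtype=float)
peer = np.ones((4, 4)) - np.eye(4)

print(algebraic_connectivity(hub))   # low: removing the hub disconnects it
print(algebraic_connectivity(peer))  # high: robust to any single removal
```

The point of the example is composability: once markets, networks, and democracies are encoded on the same graph structure, a single analysis (here, one eigenvalue computation) can compare very different mechanisms.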

Graphs as Universal Language

draft·Research Paper

A Spectral Theory of Collective Intelligence

active·Research Paper

Category Theory Connections

concept·Research Paper

Taxonomy of Agents

A classification system for understanding what properties of agents matter for collective intelligence and how agency emerges at different scales—from individuals to institutions.

System Level Safety Evaluations

published·Blog Post

A Phylogeny of Agents

published·Blog Post

Problem Prioritization

A map of coordination failures and their urgency as AI systems become more capable. Which problems are most critical to solve before transformative AI arrives?