About
We're a collective intelligence research organization studying how groups coordinate and make better decisions.
We build new systems - voting mechanisms, organizational structures, coordination tools - while developing the theory to understand them. And we use the same principles in our own research process.
Philosophy & Culture
The principles and practices that shape how we work
We partner with existing organizations.
The theory develops from what we're seeing in practice.
We engage in interdisciplinary dialogue.
We've structured ourselves as both a focused research team and a broader network to make interdisciplinary collisions more likely, creating more surface area for useful accidents.
We assume someone already solved our problem, just in a different field.
We read across disciplines - game theory, cybernetics, information theory, organizational behavior - looking for patterns that repeat across contexts.
We test things on ourselves first.
When we discover something about how groups work better, we implement it in our own processes before suggesting it elsewhere. This has changed how we run meetings, make decisions, and share information.
We take play seriously.
Mammals evolved play as a learning mechanism. We build in time for exploration alongside execution, permission to pursue unexpected tangents, and explicit switching between modes.
Make the implicit explicit.
We make things explicit that usually stay implicit - what good work means here, how we make decisions, what we do when we disagree - building shared language as we go.
Team
The people building Equilibria Network

Aaron Halpern
Strategy and Ecosystem Lead
Complexity scientist and systems design enthusiast bridging diverse disciplines to solve collective intelligence challenges. PhD from University College London on the origin of genetic coding, with expertise in cultural evolution and practical problem reframing.
Aaron is a complexity scientist and systems design enthusiast. After completing his PhD on the origin of genetic coding at University College London, Aaron worked across diverse, fast-moving roles consulting on emerging technology trends and on projects in pre-seed venture capital, which led him to meet the Equilibria Network founding team. His interests span cultural evolution, collective intelligence, metascience, and using the phrase "I was just listening to a podcast about..." Aaron is at his best when helping a team reframe problems, finding practical creative solutions, and sharing insights from across disciplines.

Jonas Hallgren
Research and Operations Lead
Developing multi-agent coordination systems that remain stable and aligned even under competitive pressures.
Jonas directs our research architecture and operational systems, drawing on four years of experience in AI safety research with a focus on multi-agent coordination through active inference frameworks. He refined his technical expertise as Chief Scientific Officer at a collective intelligence startup, where he developed algorithms for AI agent coordination.
His contributions to AI safety education include curriculum design during his internship at SERI MATS and the creation of structured research programs such as the Distillation for Alignment Practicum and the Alignment Mapping Program. Jonas co-founded AI Safety Sweden (now AI Safety Collab) and helped organize Future Forum, which brought together over 300 participants, including industry leaders such as Daniela Amodei and Sam Altman.
At Equilibria Network, Jonas designs research architectures that balance theoretical exploration with practical implementation, creating structured environments where complex coordination challenges can be systematically addressed.

Markov Grey
Technology and Communication Lead
Translating complex technical concepts across domains to build resilient coordination systems.
Markov leads technical operations and communication strategy. He has worked in research and as head of communication at the Center for AI Safety in France. He is the first author of one of the first textbooks on AI safety, used by hundreds of students across several universities and AI safety programs, including the Sorbonne, UBC Vancouver, and École normale supérieure.
He has 7+ years of experience as a technical generalist spanning cybersecurity, full-stack software development, and smart contract development, including 2+ years in AI safety.
His communication experience includes writing AI safety scripts at Rational Animations, serving as a research distillation fellow at AI Safety Info, and researching AI threat models at AI Safety Camp.
Markov's current research focuses on combining DAOs with AI safety to propose new decentralized AI governance models, and on creating frameworks that make collective intelligence concepts accessible across different stakeholder groups.