Alignment Assemblies

AI is on track to lead to profound societal shifts.

Choices that are consequential for all of us are already being made: how and when to release models, what constitutes appropriate risk, and what principles should govern model behavior. By default, these decisions fall to a small fraction of those likely to be affected. This disconnect between high-impact decisions and meaningful collective input will only grow as AI capabilities accelerate.

We believe we can do better. Experimentation with collective intelligence processes can surface the information needed for decision-making, ensure collective accountability, and better align AI with human values. We are partnering with allies and collaborators from around the world to prove it. Read our blog post for more on the vision for alignment assemblies, and see our pilot processes, partnership principles, and vision for the future below. Read the results from our processes with Anthropic and OpenAI, which showed that democratic processes can do a good job of deciding how to govern AI. And join us!

2023 Roadmap

Partners