Alignment Assemblies
AI is on track to lead to profound societal shifts.
Choices that are consequential for all of us are already being made: how and when to release models, what constitutes appropriate risk, and what principles should govern model behavior. By default, these decisions fall to a small fraction of those likely to be affected. This disconnect between high-impact decisions and meaningful collective input will only grow as AI capabilities accelerate.
We believe we can do better. Experimentation with collective intelligence processes can surface the information decision-makers need, ensure collective accountability, and better align AI with human values. We are partnering with allies and collaborators from around the world to prove it. Read our blog post for more on the vision for alignment assemblies, and see our pilot processes, partnership principles, and vision for the future below. Read the results from our processes with Anthropic and OpenAI, which showed that democratic processes can do a good job of deciding how to govern AI. And join us!
2023 Roadmap
- Core question: What do global policymakers think about the impact of generative AI on democracy?
This pilot surfaced opinions from a broad set of participants, drawn from the White House’s Summit for Democracy, on the relationship between generative AI and the future of democracy. Read about this pilot in the New York Times.
- Core question: What does the US public want to measure and mitigate when it comes to LLM risks and harms?
Partner: OpenAI
This pilot used state-of-the-art wikisurvey tools to produce a ranked list of risks that are most concerning to the US public. The outcomes of this process will be used to inform model evaluations and release criteria, standards-setting processes, and AI regulation. Read our report here.
- Core question: How should the Ideathon, and Taiwan’s governmental policy more broadly, respond to generative AI?
Partner: Ministry of Digital Affairs, Taiwan
This pilot adapted the vTaiwan process to the question of generative AI, covering copyright, due compensation, bias and discrimination, fair use, public service, and broader societal impacts. The results will directly structure the Ideathon and will be incorporated into policy over the next year.
- Core question: What behavioral principles does the public want AI to follow?
Partner: Anthropic
We co-led a project with Anthropic to train a model on a collectively designed constitution, co-written by a thousand representative Americans. Read about it in our blog, the Anthropic blog, or in the New York Times coverage.
- Core question: How should Creative Commons respond to the use of CC-licensed work in AI training?
Partner: Creative Commons Foundation
We worked with the Creative Commons Foundation to run an Alignment Assembly on this question. Read the full report from Creative Commons.
Partners
Industry Partnership Principles
- CIP maintains final control over the design and administration of its participation processes to ensure independence and accountability.
- CIP and [Partner Organization] agree that the design, administration, and findings of participation processes will be made public and accessible.
- CIP commits to taking input from [Partner Organization] regarding the decisions that [Partner Organization] is seeking guidance on, and will share how this input affected process design and administration.
- [Partner Organization] commits to considering the findings from relevant CIP processes, and will share how this input affected its decisions.
- CIP and [Partner Organization] reaffirm that this is a non-exclusive partnership; CIP’s aim is to expand partnerships towards an eventual consortium.