A democratic approach to global AI safety

Publication Date: October 27, 2023

Page Number: 25


The report addresses AI’s impact on democracy and the responses this demands. It highlights AI’s potential to enhance democratic systems alongside risks such as disinformation, surveillance, and power imbalances, and calls on democratic leaders to prioritize AI safety and to build societal resilience to AI-driven disruption. This approach is part of the broader strategy of the Westminster Foundation for Democracy (WFD), a UK public body that supports freedom and democracy globally, with strategic goals that include impactful programs, supportive coalitions, and organizational effectiveness. The report concludes by emphasizing the need for balanced AI governance that safeguards democratic values and institutions, calling for international cooperation, ethical AI frameworks, and policies to manage AI’s benefits and risks.

Overview

1. The increasing impact of AI

The report underscores the transformative potential of AI, comparing its impact to that of electricity. It attributes AI’s rapid development to the availability of training data, advances in neural networks, and a significant increase in computational power, and notes that as AI systems become more efficient in their use of data and compute, their capabilities continue to grow exponentially.
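
To make the idea of exponential growth concrete, the short sketch below compounds an assumed doubling period for the computational resources behind AI systems. The doubling interval and time horizon are purely illustrative assumptions, not figures taken from the report.

```python
# Purely illustrative: how quickly an exponentially growing quantity compounds.
# The doubling period and horizon below are hypothetical assumptions chosen to
# illustrate the report's point, not estimates from the report itself.

ASSUMED_DOUBLING_MONTHS = 6   # hypothetical doubling period for compute
HORIZON_YEARS = 5             # hypothetical time horizon

doublings = (HORIZON_YEARS * 12) / ASSUMED_DOUBLING_MONTHS
growth_factor = 2 ** doublings

print(f"With a doubling every {ASSUMED_DOUBLING_MONTHS} months, "
      f"{HORIZON_YEARS} years gives roughly a {growth_factor:,.0f}x increase.")
```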

Overall, the report offers a balanced viewpoint, discussing AI’s potential advantages, such as improvements in healthcare, climate change mitigation, agricultural progress, personalized education, cultural preservation, and economic gains. It also covers negative aspects, including bias, discriminatory outcomes in applications such as predictive policing, and potential risks from further AI advances.

2. How AI might impact democracy

The report discusses AI’s potential to strengthen democratic processes, for example through improved access to government services and legal advice. It also points to risks such as the reinforcement of harmful stereotypes, rising inequality, the concentration of power, and threats to weaker democracies, and stresses the importance of protecting democratic systems from these threats. AI can both bolster and jeopardize democracy: the concentration of power in AI firms threatens competition and choice, while the misuse of AI can spread disinformation. Democracies with lower levels of digital literacy and press freedom are especially at risk, and concerted global efforts are needed to navigate these challenges and protect democratic principles.

3. A democratic vision for AI governance and safety

This section sets out a framework for responsible AI development and deployment in democratic societies, highlighting the political nature of new technologies and the need for transparency, accountability, and human rights protection. It identifies three priorities for democratic AI governance and safety.

  1. Establishing democratic oversight of AI safety: Democracies should take proactive measures in defining AI safety standards rather than leaving them solely to the AI industry. By enacting clear laws and treaties and implementing robust oversight mechanisms, democracies can ensure that AI development aligns with societal values and safeguards public interests.
  2. Ensuring participation and inclusivity in AI governance: Citizen engagement is essential in shaping the ethical and regulatory frameworks governing AI technologies. By involving diverse societal groups in decision-making processes, democracies can anticipate and address emerging risks associated with AI while fostering a more inclusive and equitable AI ecosystem.
  3. Demonstrating democratic values at home and abroad: Democracies are encouraged to uphold democratic principles in their AI policies and practices, both domestically and internationally. By collaborating to counter repressive uses of AI and promoting global norms that protect fundamental rights, democracies can build public trust in AI as a tool for societal benefit and economic prosperity.

4. Conclusion

Given the rising risks associated with AI, a democratic approach to AI governance is essential. Democratic leaders should prioritize safety, transparency, accountability, public participation, and inclusivity. A cooperative approach is needed to counter harmful uses of AI and to establish norms on privacy and rights. The AI Safety Summit promotes global agreement on AI safety and provides an opportunity to assess AI’s impact on democracies. A collaborative stance can maximize AI’s benefits while protecting democratic values.

5. Recommendations: Actions parliaments can take

This section outlines immediate and medium-term steps for improving AI governance in democracies, emphasizing parliamentary engagement, public involvement, and education. These measures can enhance AI oversight, increase transparency, and align AI with democratic values and societal needs.

Short-term recommendations include:

  1. Supporting MPs’ professional development on AI: Organizing workshops and seminars to educate MPs on AI basics, risks, and regulatory initiatives.
  2. Strengthening structures in parliament to oversee AI impact: Supporting the creation of AI oversight bodies and incorporating AI oversight into parliamentary committees’ work plans.
  3. Consulting the public on AI: Initiating public consultations to gather diverse perspectives on AI and engaging marginalized groups impacted by AI.
  4. Upskilling parliamentary staff: Providing opportunities for staff to learn from AI experts and sources of expertise.
  5. Supporting MPs to attend international meetings on AI: Identifying opportunities for MPs to engage in international conferences and share best practices on AI regulation.

Medium-term recommendations (1-5 years) include:

  1. Establishing oversight structures focused on AI: Creating specialized committees/sub-committees dedicated to AI and regularly reviewing AI-related policies.
  2. Developing in-house technical expertise: Hiring staff with AI expertise and establishing dedicated units for AI governance within parliaments.
  3. Launching public awareness campaigns: Developing educational programs to inform citizens about AI benefits and risks.
  4. Trialing innovations to improve public engagement: Exploring new methods for public input on AI, such as citizens’ assemblies.
  5. Using AI to improve parliamentary work: Introducing AI tools for legislative drafting, enhancing access to parliamentary information, and facilitating public engagement with parliament.
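
As an illustration of the final recommendation above (using AI to improve access to parliamentary information), the minimal sketch below summarizes a bill excerpt with an off-the-shelf summarization model. It assumes the open-source `transformers` library is available; the model name and sample text are illustrative choices, not tools or wording endorsed by the report, and any real deployment would require human review alongside the oversight structures discussed earlier.

```python
# Minimal sketch, not a production deployment: generating a plain-language
# summary of a bill excerpt so citizens can engage more easily with
# parliamentary business. Assumes the Hugging Face `transformers` library is
# installed; the model below is an illustrative choice, not one named in the report.

from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

bill_excerpt = (
    "A Bill to make provision about the regulation of artificial intelligence; "
    "to require developers of high-risk systems to register with a designated "
    "oversight body; and to establish reporting duties to Parliament."
)

# Human review remains essential: model output is a drafting aid, not a
# substitute for the official text.
result = summarizer(bill_excerpt, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```

In practice, parliaments would pair such tools with the public-engagement and oversight measures recommended above rather than relying on automated output alone.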

6. Recommendations: The crucial role of democratic governance support

These recommendations underscore the importance of international democratic governance support for AI safety and governance. They propose strengthening capacities, forming partnerships, supporting joint efforts, and sharing best practices in AI governance, and they stress the need for international cooperation to bolster democratic structures against the challenges and opportunities AI presents. By leveraging expertise and promoting democratic values, democratic governance support can make a profound contribution to responsible and inclusive global AI governance.

The recommendations for democratic governance support include:

  1. Supporting capacity development for MPs and parliamentary staff on AI: Providing training on technical aspects of AI to inform national policies and enable more robust participation in global governance discussions.
  2. Sharing updated research and evidence on AI’s impact: Establishing an ‘observatory’ approach to raise awareness of AI’s impact across jurisdictions and highlight repressive uses of AI.
  3. Building platforms for convening elected representatives: Facilitating South-South exchanges and networking across parliamentary committees overseeing AI.
  4. Supporting public education and AI literacy: Promoting AI literacy in local languages and for diverse groups, including digital literacy on repressive AI uses.
  5. Helping parliaments introduce democratic innovations using AI: Identifying and sharing AI-driven tools that enhance public participation, transparency, and accountability in parliamentary work.
  6. Providing resources to global media and civil society: Supporting civil society organizations working on AI accountability and digital rights in non-democratic countries and funding independent media to highlight AI risks and undemocratic trends.

Annex A: The debate around catastrophic risk from frontier AI

The debate over catastrophic risks from advanced AI is prominent but contested. The annex stresses the need to address near-term risks from current AI systems while also preparing for more advanced ones. Regulation should ensure transparency and accountability and prevent exploitative AI practices, rather than focusing only on hypothetical disasters, and potential risks from frontier AI can be mitigated by prioritizing safety and testing AI systems.

The document outlines four categories of catastrophic risk from frontier AI:

  1. Malicious use: Involves intentional harm caused by individuals or groups using AI systems, such as creating bioweapons, spreading propaganda, enabling censorship, or conducting mass surveillance to consolidate power.
  2. AI race: Refers to competitive pressures that may lead actors to deploy unsafe AI systems hastily or to relinquish control of them, potentially resulting in scenarios such as the rapid replacement of human jobs with AI or the release of harmful AI systems for automated warfare.
  3. Organizational risks: Highlights the role of human factors and complex systems in increasing the likelihood of catastrophic accidents, including weak safety cultures, leaked AI systems, and inadequate security in AI labs.
  4. Rogue AIs: Describes the challenges in controlling AI systems that surpass human intelligence, leading to scenarios where AI pursues goals that conflict with human intentions, seeks power, manipulates public discourse, or takes control of its environment.

Annex B: The AI governance landscape

This paper explores the complexities of AI regulation, highlighting the need for flexible and comprehensive regulatory frameworks. It discusses the challenges of AI governance, different international approaches, and the importance of maintaining ethical standards and societal values.

The section elaborates on the following reasons why regulating AI is complex:

  1. Speed and unpredictability of change: The rapid evolution of AI technologies poses challenges for traditional legislative processes, as new capabilities can emerge unpredictably during and after deployment.
  2. Asymmetry in knowledge and resources: There is a disparity in expertise and resources between democratic institutions, AI developers, and technology companies, leading to potential regulatory blind spots or capture.
  3. The transformative nature of AI: Effective AI governance requires considerations beyond legal and technical expertise, necessitating an understanding of ethics and sociology to address AI’s societal implications.
  4. The focus of regulation: Determining where accountability lies in cases where AI causes harm to the public, such as bias in training data, presents challenges in attributing responsibility.
  5. Global coordination: National regulations are insufficient as AI systems developed in one country may be deployed globally, underscoring the need for coordinated international efforts in AI governance.

The document categorizes regulatory approaches to AI into three broad categories:

  1. Relying on existing laws and regulations: Some countries opt for a light-touch regulatory approach that does not entail specific AI legislation, focusing on pro-innovation policies.
  2. Wide-ranging AI-specific legislation: The European Union has been at the forefront of introducing AI legislation that addresses different risk categories, while China mandates the labeling of AI-generated content and the registration of algorithms.
  3. Hybrid models: Certain countries adopt national regulations and voluntary commitments, emphasizing local laws and slimmed-down regulatory frameworks.
