The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks

Publication Date

August 13, 2024

Page Number

79

Authors

The AI Risk Repository

Exhaustively analyzes the risks associated with artificial intelligence (AI). It aims to bridge gaps in the understanding of AI risks by creating a comprehensive, accessible database that serves as a common reference point for researchers, policymakers, and industry stakeholders.

Key Highlights

  1. Comprehensive Database:
    • Catalogs 777 risks identified from 43 existing taxonomies, providing a detailed and organized database of AI-related risks. This repository is designed to be a living document that is continuously updated to reflect new findings and trends in AI risk research.
  2. Taxonomy of Risks:
    • Introduces two main taxonomies for classifying AI risks (see the sketch after this list):
      • Causal Taxonomy: Classifies risks by how, when, and why they occur, using three causal factors: the responsible entity (human or AI), intentionality (intentional or unintentional), and timing (pre-deployment or post-deployment).
      • Domain Taxonomy: Groups risks into seven domains: discrimination and toxicity; privacy and security; misinformation; malicious actors and misuse; human-computer interaction; socioeconomic and environmental harms; and AI system safety, failures, and limitations. These are further divided into 23 subdomains.
  3. Identification of Gaps:
    • Reveals significant gaps in existing AI risk frameworks, with many frameworks covering only a fraction of the identified risk subdomains. This highlights the need for a more coordinated approach to understanding and managing AI risks.
  4. Resource for Stakeholders:
    • A tool for various stakeholders, including developers, researchers, policymakers, and enterprises. It provides a foundation for assessing risk exposure and developing mitigation strategies tailored to specific contexts.
  5. Future Directions:
    • Emphasizes the importance of ongoing updates and expert consultations to refine the repository. Future phases will focus on identifying omissions, adding new risks, and providing targeted insights for different types of users.
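
The two taxonomies lend themselves to a simple structured representation. The sketch below shows how a repository entry might be tagged with both taxonomies in Python; the category names follow the report, but the class and field names are illustrative assumptions rather than the repository’s actual schema.

```python
from dataclasses import dataclass
from enum import Enum

# Causal Taxonomy: how, when, and why a risk occurs.
class Entity(Enum):
    HUMAN = "Human"
    AI = "AI"
    OTHER = "Other"  # used when responsibility is unclear or unspecified

class Intent(Enum):
    INTENTIONAL = "Intentional"
    UNINTENTIONAL = "Unintentional"
    OTHER = "Other"

class Timing(Enum):
    PRE_DEPLOYMENT = "Pre-deployment"
    POST_DEPLOYMENT = "Post-deployment"
    OTHER = "Other"

# Domain Taxonomy: the seven top-level domains (further split into 23 subdomains).
DOMAINS = [
    "Discrimination and toxicity",
    "Privacy and security",
    "Misinformation",
    "Malicious actors and misuse",
    "Human-computer interaction",
    "Socioeconomic and environmental harms",
    "AI system safety, failures, and limitations",
]

@dataclass
class RiskEntry:
    """One illustrative repository entry, tagged with both taxonomies."""
    description: str
    entity: Entity
    intent: Intent
    timing: Timing
    domain: str     # one of DOMAINS
    subdomain: str  # one of the 23 subdomains
```

Tagging each of the 777 entries in this way is what makes the later cross-tabulations (causal factors by domain) straightforward to compute.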

Overview

Methods

Systematic Literature Search

Conducted a comprehensive literature search to identify relevant AI risk studies. They utilized multiple academic databases and established clear inclusion criteria to ensure a thorough and unbiased review. This systematic approach helped gather a wide range of high-quality documents for analysis, forming a solid foundation for the AI Risk Repository.

Search Strategy

Employed a comprehensive search strategy to find relevant literature on AI risks. They used keyword searches, Boolean operators, and specific AI risk-related terms. Additionally, they performed forward and backward citation searches to identify recent studies citing key papers and explore their references. This thorough approach ensured a wide-ranging overview of existing AI risk frameworks and classifications.
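
For illustration only, a keyword query of the kind described might combine AI terms, risk terms, and taxonomy terms with Boolean operators. The string below is a hypothetical example, not the authors’ actual search:

```python
# Hypothetical Boolean keyword query of the kind described above
# (illustrative only; not the authors' actual search string).
query = (
    '("artificial intelligence" OR "AI" OR "machine learning") '
    'AND (risk* OR harm* OR threat* OR danger*) '
    'AND (taxonomy OR framework OR classification OR typology)'
)
print(query)
```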

Extraction into Living AI Risk Database

Created a dynamic “living” AI Risk Database from the identified literature. This database captures key details for each risk, including title, author, year, source, and risk category. A structured template, refined through pilot testing, ensured accurate data extraction. This process helps ensure that the database faithfully represents the risks described in the original studies while allowing for continuous updates as new research emerges.
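
A minimal sketch of the kind of structured extraction template described, limited to the fields named above; the authors’ actual template is more detailed, so this layout is an assumption:

```python
# Illustrative extraction record for one risk, using only the fields named above.
# The authors' actual template contains more fields; this layout is an assumption.
risk_record = {
    "title": "",          # title of the source document
    "author": "",         # author(s) of the source document
    "year": None,         # publication year
    "source": "",         # where in the document the risk is described
    "risk_category": "",  # the category label used by the original taxonomy
}
```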

Best Fit Framework Synthesis Approach

Employed a best-fit framework synthesis approach, combining framework and thematic synthesis methods to develop two AI risk taxonomies. This approach allowed them to build on existing frameworks while accommodating new risks. They tested various frameworks for coding extracted risks, adapting them as needed by updating categories or altering structures to ensure comprehensive risk classification.

Development of High-Level Causal Taxonomy of AI Risks

Developed the high-level Causal Taxonomy by identifying and categorizing risks based on their root causes. They analyzed extracted risks to find common themes, organizing these into a structured framework. This taxonomy illustrates how various risks interconnect and what factors lead to their occurrence. Focusing on causal relationships offers insights for developing targeted interventions and risk mitigation strategies.

Development of Mid-Level Domain Taxonomy of AI Risks

Developed a Domain Taxonomy that categorizes AI risks by the domain in which they arise. This approach provides context-specific insights, showing how risks vary across AI applications, and enables targeted risk management strategies for different industries.

Results

Systematic Literature Search

The authors conducted a rigorous systematic literature search on AI risks. They queried multiple academic databases using predefined criteria, ensuring a comprehensive and reproducible process. This approach yielded 17,288 unique articles, which were then screened for eligibility. The systematic method bolsters the credibility of the findings and ensures the AI Risk Repository encompasses a wide range of research on AI risks.

Characteristics of Included Documents

The authors analyze the characteristics of the 43 documents that were ultimately included in the review. They provide insights into various aspects, such as:

  • Publication Types: The documents include peer-reviewed articles, conference papers, preprints, and industry reports, reflecting the interdisciplinary nature of AI risk research.
  • Publication Years: Most documents were published recently, particularly after 2020, indicating increased interest and research activity in AI risks.
  • Focus Areas: The authors categorize the documents based on the specific AI risks they address, revealing trends in research focus and identifying areas that may require further exploration.

Most Common Causal Factors for AI Risks

Summarizes which Causal Taxonomy categories appear most often across the extracted risks: risks are more often attributed to the AI system than to humans, are presented as unintentional about as often as intentional, and are located after deployment far more often than before it. Identifying these patterns provides a basis for targeted intervention strategies and risk mitigation efforts in AI development and deployment.

Causal Factors of AI Risk Examined by Included Documents

Delves into the specific causal factors examined across the included documents. They provide a detailed overview of how different studies addressed these factors, highlighting variations in focus and methodology. This examination reveals the diversity of approaches taken by researchers. It underscores the complexity of AI risks, as studies may emphasize different causal factors based on their context and objectives.

Most Common Domains of AI Risks

The authors identify and discuss the most frequently mentioned domains of AI risks across the included documents. They highlight which domains received the most attention in the literature, such as AI system safety, failures, and limitations, and socioeconomic and environmental harms. This analysis reveals trends in research focus and indicates areas where risks are perceived to be more pressing or relevant.

Domains of AI Risks Examined by Included Documents

Provides an overview of the specific domains examined in the included documents. The authors analyze the focus distribution across different domains, highlighting which areas were more thoroughly explored and which may have been underrepresented in the literature. This examination helps to identify gaps in research and areas that may require further investigation.

Subdomains of AI Risks Examined by Included Documents

Delves into the subdomains of AI risks, providing insights into the specific categories within each domain addressed in the literature. They discuss the prevalence of subdomains and highlight those that received less attention. This analysis helps to illuminate the nuances of AI risks and the varying levels of exploration across different subcategories.

Combining the Causal and Domain Taxonomies

Explores the interplay between the Causal and Domain Taxonomies, examining how causal factors relate to specific risk domains. They discuss the benefits of combining these taxonomies to provide a more comprehensive understanding of AI risks. This integrated approach allows for a deeper analysis of how different causal factors contribute to risks across various domains, enhancing the overall framework for understanding AI risks.

Most Common Causal Factors for Each Domain of AI Risks

Analyzes the most common causal factors associated with each domain of AI risks. They highlight how certain causal factors may be more prevalent in specific domains, providing insights into the unique challenges AI systems pose in those contexts. This analysis is crucial for developing targeted risk management strategies that address each domain’s relevant causal factors.

Entity x Intent Causal Factors by Each Domain of AI Risk

Examines how combinations of the responsible entity (human or AI) and intent (intentional or unintentional, including malicious use versus benign application) play out across the different domains of AI risk. This analysis helps clarify how the source of a risk and the motivation behind it influence where and how that risk emerges. By understanding these dynamics, stakeholders can better anticipate and mitigate potential risks associated with AI systems.
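
As a sketch of the kind of cross-tabulation described in this and the preceding subsections, one could count repository entries by domain and by their entity and intent tags. The helper and the example entries below are made-up placeholders, not figures from the report:

```python
from collections import Counter

# Hypothetical cross-tabulation of risks by (domain, entity, intent).
# Each risk is assumed to be a dict carrying its taxonomy tags.
def entity_intent_by_domain(risks):
    return Counter((r["domain"], r["entity"], r["intent"]) for r in risks)

# Made-up example entries (placeholders, not data from the report):
risks = [
    {"domain": "Misinformation", "entity": "Human", "intent": "Intentional"},
    {"domain": "Misinformation", "entity": "AI", "intent": "Unintentional"},
    {"domain": "Privacy and security", "entity": "AI", "intent": "Unintentional"},
]

for (domain, entity, intent), count in entity_intent_by_domain(risks).items():
    print(f"{domain} | {entity} | {intent}: {count}")
```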

Discussion

Insights into the AI Risk Landscape

Explores the complexity of the AI risk landscape. The authors highlight its multifaceted nature, shaped by technological, societal, and regulatory factors. They note that current studies often focus on specific aspects of AI, producing a fragmented understanding that can overlook crucial connections between different risk types, and they advocate for a more holistic perspective that considers the broader implications of AI technologies.

Insights from the AI Risk Database and Included Documents

The AI Risk Database compiles risks from various sources, reflecting AI’s rapid evolution and the urgency of addressing its risks. Many of the included documents are non-systematic reviews, which may limit their reliability, so the authors emphasize the need for more structured methodologies in future research. These insights help identify trends, gaps, and areas for further investigation in AI risk assessment.

Implications for Key Audiences

Highlights how the AI Risk Repository impacts key stakeholders. Policymakers, auditors, academics, and industry professionals can use this resource to guide decisions, shape regulations, and develop best practices. The authors emphasize tailoring insights to each group’s needs, ensuring actionable and relevant information for their contexts.

Limitations and Directions for Future Research

The authors acknowledge several limitations of their study, including potential biases in the literature and the rapidly evolving nature of AI technologies. They caution that the repository may not capture all emerging risks, particularly those newly identified or not yet widely recognized, and they call for ongoing research to keep the understanding of AI risks current and relevant.

Database and Taxonomy

Emphasizes the need for continuous updates and improvements to the AI Risk Database and its associated taxonomies. Advocates for a living database that evolves alongside advances in AI technology and emerging risks, and suggests that ongoing collaboration among researchers, policymakers, and industry stakeholders is essential to maintaining the repository’s relevance and utility.

Other Opportunities for Future Research

The authors identify additional opportunities for future research, including exploring underrepresented domains of AI risk, investigating the long-term implications of AI technologies, and developing more comprehensive frameworks for risk assessment. They encourage researchers to engage with the AI Risk Repository as a starting point for new studies and contribute to the growing knowledge of AI risks.
