OECD Publishes Due Diligence Guidance For Responsible AI

Guidance outlines steps organizations should take to identify, assess, and manage risks associated with AI systems.


The Organisation for Economic Co-operation and Development (OECD) released new guidance outlining how organizations should conduct due diligence when developing, deploying, or using AI systems.

The document, titled "OECD Due Diligence Guidance for Responsible AI," provides a framework for organizations to identify, prevent, and address risks associated with AI technologies. The guidance builds on the OECD's earlier AI Principles and is intended to support governments, companies, and other institutions in applying risk-management practices throughout the AI lifecycle.

According to the OECD, the guidance describes practical steps organizations can take to integrate risk-based oversight into AI development and deployment. These steps include establishing internal governance processes, assessing potential impacts on individuals and society, and monitoring systems after deployment.

The document states that organizations should conduct ongoing risk identification and mitigation throughout the lifecycle of an AI system, including design, development, deployment, and operation. It also recommends documenting decisions, maintaining oversight mechanisms, and providing channels for affected parties to raise concerns.

The guidance applies broadly to organizations involved in creating or using AI systems, including technology developers, companies deploying AI tools, and public institutions implementing AI-enabled services. It also addresses actors across supply chains that contribute to the design or operation of AI systems.

The OECD said the framework is intended to align with existing international standards for responsible business conduct. The guidance incorporates a due diligence approach commonly used in other areas of corporate governance, such as environmental and human rights risk management.

The guidance sets out a continuous due diligence process: embedding responsible conduct into governance systems, identifying and assessing risks, ceasing or mitigating adverse impacts, tracking implementation and results, communicating how risks are addressed, and enabling remediation when harm occurs.

The OECD said the guidance is designed to help organizations translate high-level AI governance principles into operational practices and to support governments in developing policies or regulations related to AI oversight.
