Overview
A systematic evaluation of the potential social, economic, and civil rights impacts of an automated decision-making system before and after deployment.
More in Governance, Risk & Compliance
Compliance as Code
Compliance & Regulation: The practice of expressing regulatory and security compliance requirements as machine-readable policies that can be automatically validated against infrastructure and application configurations.
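The idea behind compliance as code can be sketched as a set of machine-readable rules checked automatically against a configuration. This is a minimal illustration only; the policy names and configuration fields below are invented for the example and do not come from any specific compliance framework or tool.

```python
# Minimal compliance-as-code sketch: policies expressed as machine-readable
# rules and validated automatically against a configuration dictionary.
# All rule names and config fields are illustrative assumptions.

POLICIES = [
    ("encryption-at-rest", lambda cfg: cfg.get("storage_encrypted") is True),
    ("no-public-access",   lambda cfg: cfg.get("public_access") is False),
    ("log-retention-90d",  lambda cfg: cfg.get("log_retention_days", 0) >= 90),
]

def validate(config: dict) -> list[str]:
    """Return the names of the policies that the configuration violates."""
    return [name for name, check in POLICIES if not check(config)]

# Example: a storage bucket config that fails two of the three policies.
bucket = {"storage_encrypted": True, "public_access": True, "log_retention_days": 30}
print(validate(bucket))  # prints ['no-public-access', 'log-retention-90d']
```

In practice the same pattern is implemented with dedicated policy engines rather than ad-hoc functions, so policies can be versioned, reviewed, and enforced in CI pipelines like any other code.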
Risk Management
Risk Management: The process of identifying, assessing, and controlling threats to an organisation's capital and operations.
Model Risk Management
Governance: The governance framework for identifying, measuring, and mitigating risks arising from AI and analytical models.
AI Audit
Compliance & Regulation: An independent assessment of an AI system's compliance with regulatory requirements, ethical standards, and organisational policies, examining data, models, outputs, and governance.
Responsible AI
Governance: The practice of designing, developing, and deploying AI systems with good intent, guided by ethical principles.
AI Impact Assessment
Risk Management: A systematic evaluation of the potential effects and risks of an AI system before and during its deployment.
Third-Party Risk Management
Risk Management: The process of identifying and mitigating risks associated with outsourcing to third-party vendors.
Ethical AI Framework
Governance: A set of principles, guidelines, and processes that an organisation adopts to ensure its AI systems are developed and deployed in a manner that is fair, transparent, and accountable.