Hope Haruna
Posted: August 28, 2025 • 4 min read
IBM's AI-driven healthcare solution, Watson for Oncology, once recommended unsafe and incorrect cancer treatments. Microsoft's chatbot Tay showed how AI can be manipulated in deployment: within 24 hours of its release, Tay had begun producing offensive content due to adversarial inputs, highlighting the absence of safeguards against malicious use. These cases point to the lack of systematic AI risk management and impact assessment.
For organisations aiming to implement ISO/IEC 42001 effectively, particularly those in the financial and healthcare sectors, where AI is integrated, one of the first policies to prioritise is AI Risk Management and Impact Assessment (Clause 6.1, Annex A.5).
This policy defines the types of AI risks your organisation is willing to accept, how those risks will be identified early, and what evidence you will present to auditors and stakeholders that they are being managed responsibly. It is both the seatbelt and the dashboard for AI, protecting people from harm while giving leadership clear oversight as innovation moves forward.
Under Clause 6.1 and Annex A.5, for instance, organisations are required to move beyond traditional IT risk policies and adopt AI-specific safeguards that account for bias, data drift, explainability, and misuse. This post explains in greater depth what this policy entails.
An ISO/IEC 42001-aligned risk policy goes beyond generic IT controls to address the specific hazards of AI. It requires that every AI use case, from a tiny embedded model to a customer-facing generative assistant, go through an Impact Assessment before deployment and then into continuous monitoring once live.
If done correctly, the policy will catch high-risk AI use cases before they ship, keep them monitored once live, and leave the evidence trail auditors expect. In practice, the lifecycle looks like this:
Map: A product team proposes an AI feature (say, automated credit limit suggestions). The policy triggers an AI Impact Assessment (AIIA). The team describes the purpose, affected users, data sources, model type, and the decisions the model will influence. Risk owners identify potential harms (e.g., gender or marital-status bias), affected groups, legal hooks, and misuse scenarios (gaming the system, data poisoning, prompt injection).
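To make the intake step concrete, here is a minimal Python sketch of what an AIIA record might capture for the credit-limit scenario; the field names and example values are illustrative assumptions, not fields mandated by ISO/IEC 42001.

```python
from dataclasses import dataclass, field

# Minimal sketch of an AI Impact Assessment (AIIA) intake record.
# Field names and example values are illustrative, not mandated by ISO/IEC 42001.
@dataclass
class AIImpactAssessment:
    system_name: str
    purpose: str                         # what the feature is meant to do
    decisions_influenced: list[str]      # decisions the model will shape
    affected_groups: list[str]           # users and third parties exposed to outcomes
    data_sources: list[str]
    model_type: str
    potential_harms: list[str] = field(default_factory=list)
    misuse_scenarios: list[str] = field(default_factory=list)
    risk_owner: str = ""

aiia = AIImpactAssessment(
    system_name="credit-limit-suggester",
    purpose="Suggest automated credit limit adjustments",
    decisions_influenced=["credit limit increases and decreases"],
    affected_groups=["retail credit customers"],
    data_sources=["transaction history", "bureau data"],
    model_type="gradient-boosted classifier",
    potential_harms=["gender or marital-status bias in suggested limits"],
    misuse_scenarios=["gaming the system", "data poisoning", "prompt injection"],
    risk_owner="head-of-retail-credit",
)
print(aiia.system_name, "->", aiia.risk_owner)
```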
Measure: The model is tested for fairness, robustness, and privacy leakage. If protected-class performance deltas exceed your policy threshold (e.g., ≥5 pp disparity in approval rates or ≥20% difference in false-negative rates), shipping is blocked until mitigations land. A fallback plan is defined (human review in edge cases; a rules-based baseline if the model degrades).
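As a rough illustration of how such a gate can be enforced in code, the sketch below checks approval-rate and false-negative-rate gaps against the example thresholds above; the function names and inputs are assumptions, and a real programme would compute these metrics from labelled evaluation data.

```python
# Minimal sketch of a pre-deployment fairness gate using the example thresholds
# above (>=5 pp approval-rate disparity, >=20% relative gap in false-negative
# rates). Function names and inputs are assumptions for illustration.

def approval_rate_gap_pp(rates_by_group: dict[str, float]) -> float:
    """Largest approval-rate difference between any two groups, in percentage points."""
    values = list(rates_by_group.values())
    return (max(values) - min(values)) * 100

def fnr_relative_gap(fnr_by_group: dict[str, float]) -> float:
    """Relative gap between the worst and best false-negative rates."""
    values = list(fnr_by_group.values())
    return (max(values) - min(values)) / min(values)

def fairness_gate(approval_rates, fnrs, max_gap_pp=5.0, max_fnr_rel=0.20) -> bool:
    """True means the model may ship; False blocks deployment until mitigations land."""
    return (approval_rate_gap_pp(approval_rates) < max_gap_pp
            and fnr_relative_gap(fnrs) < max_fnr_rel)

# Example: a 6 pp approval-rate gap blocks the release.
approval = {"group_a": 0.72, "group_b": 0.66}
false_negative_rates = {"group_a": 0.10, "group_b": 0.11}
print("ship" if fairness_gate(approval, false_negative_rates) else "blocked pending mitigation")
```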
Manage: The feature goes live with guardrails (drift monitors, bias alerts, adversarial protections, and a kill-switch owned by a named executive). Monthly reviews check outcomes against risk posture, while significant model changes trigger a re-assessment.
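One way to picture the drift-monitor guardrail is a population-stability check that alerts the risk owner when live score distributions diverge from training-time ones; the 0.2 PSI threshold below is a common rule of thumb, not a value prescribed by ISO/IEC 42001, and the alerting is simulated with prints.

```python
from math import log

# Minimal sketch of a drift guardrail: a population-stability check over binned
# model scores. The 0.2 alert threshold is a common rule of thumb; alerting and
# rollback are simulated with prints.

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI over pre-binned score distributions (each list sums to 1)."""
    return sum((a - e) * log(a / e)
               for e, a in zip(expected, actual) if e > 0 and a > 0)

def drift_guardrail(expected_bins: list[float], live_bins: list[float], threshold: float = 0.2) -> None:
    psi = population_stability_index(expected_bins, live_bins)
    if psi >= threshold:
        # In production this would page the named owner and, if needed,
        # engage the fallback or kill-switch path rather than print.
        print(f"ALERT: PSI={psi:.2f} >= {threshold}; engage fallback and notify owner")
    else:
        print(f"OK: PSI={psi:.2f} within tolerance")

# Training-time vs. live score distributions over the same four bins.
drift_guardrail([0.25, 0.25, 0.25, 0.25], [0.10, 0.20, 0.30, 0.40])
```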
These steps (Map, Measure, and Manage) sit under the governance wrapper, Govern, per the NIST AI RMF.
At the core of the policy is the AI Risk Register, a centralised log of all AI systems, their risks, and the controls applied.
This register creates transparency and traceability, ensuring lessons learned in one project are applied to the next. It provides executives and regulators with a clear view of where risks lie and how they're being addressed.
Platforms such as Riskonnect or RSA Archer can be used to maintain and automate these registers.
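For teams without such a platform, a register entry can start as a simple structured record; the sketch below shows one possible shape, with field names that are illustrative assumptions rather than a prescribed schema.

```python
import datetime

# Minimal sketch of the shape of an AI Risk Register entry. Real programmes
# usually keep this in a GRC platform like the ones above; the field names
# here are illustrative assumptions, not a prescribed schema.
register: list[dict] = []

def log_risk(system: str, risk: str, severity: int, likelihood: int,
             detectability: int, controls: list[str], owner: str) -> dict:
    entry = {
        "system": system,
        "risk": risk,
        "score": severity * likelihood * detectability,  # see the matrix discussed below
        "controls": controls,
        "owner": owner,
        "logged_at": datetime.date.today().isoformat(),
        "status": "open",
    }
    register.append(entry)
    return entry

log_risk(
    system="credit-limit-suggester",
    risk="approval-rate disparity across protected classes",
    severity=4, likelihood=3, detectability=2,
    controls=["pre-deployment fairness gate", "monthly bias review", "human review of edge cases"],
    owner="head-of-retail-credit",
)
print(register)
```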
Your Impact Assessment should be kept short enough that teams actually complete it, but rich enough that auditors (and you) can rely on it. At a minimum, it should capture the system's purpose, the decisions it will influence, data sources and model type, affected groups, potential harms and misuse scenarios, test results, mitigations and fallback plans, and the named risk owner.
If personal data is involved, cross-reference your Data Protection Impact Assessment (DPIA) so the two assessments stay consistent and non-duplicative.
Use a simple Severity × Likelihood × Detectability matrix. For guidance, see ISO 31000:2018 Risk Management.
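As a worked illustration, the sketch below scores each dimension on a 1-5 scale and bands the product into Low, Medium, or High; the cut-offs are assumptions your own policy would set, not values fixed by ISO 31000 or ISO/IEC 42001.

```python
# Minimal sketch of a Severity x Likelihood x Detectability score, each rated
# 1-5. The Low/Medium/High cut-offs are assumptions your own policy would set,
# not values fixed by ISO 31000 or ISO/IEC 42001.

def risk_score(severity: int, likelihood: int, detectability: int) -> int:
    for rating in (severity, likelihood, detectability):
        if not 1 <= rating <= 5:
            raise ValueError("each rating must be between 1 and 5")
    return severity * likelihood * detectability

def risk_band(score: int) -> str:
    if score >= 48:
        return "High"    # executive approval, human-in-the-loop, rollback plan
    if score >= 20:
        return "Medium"  # mitigations and a named owner required
    return "Low"         # standard monitoring

score = risk_score(severity=4, likelihood=4, detectability=3)
print(score, risk_band(score))  # 48 High
```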
ISO/IEC 42001 encourages organisations to integrate structured frameworks. One widely adopted model is the NIST AI Risk Management Framework, which organises AI risk work into four core functions (Govern, Map, Measure, Manage) and helps organisations define policies across areas like fairness, explainability, security, and resilience.
To automate and streamline compliance, embed requirements like the following directly into your policy and tooling:
All AI systems must undergo a documented AI Impact Assessment before deployment and whenever significant changes occur. This assessment will identify potential harms, affected populations, and misuse scenarios, while also testing for performance, fairness, robustness, privacy, and security. Appropriate safeguards must be defined in proportion to the level of risk.
AI risks will be assessed based on Severity, Likelihood, and Detectability. Any deployment with a 'High' risk score requires executive approval and must include a human-in-the-loop safeguard as well as a rollback plan.
Once deployed, AI systems must be continuously monitored for model drift, bias, performance degradation, or abuse. If predefined thresholds are breached, the designated owner must take immediate action, including mitigation or system rollback.
A central AI risk register will maintain records of all assessments, test results, approvals, incidents, and system changes throughout the AI lifecycle.
The credit-limit scenario above mirrors the real-world Apple Card case, where the absence of bias auditing and lineage tracking made it impossible for the bank to explain how decisions were reached. A robust ISO/IEC 42001 policy would have required pre-deployment fairness testing, a documented impact assessment, data and model lineage tracking, and human review of contested decisions.
Such measures could have prevented the reputational damage and regulatory scrutiny that followed.
AI risk management isn't a brake on innovation; it's the steering and brakes that let you go faster safely. An ISO/IEC 42001-aligned AI Risk Management & Impact Assessment policy gives your teams clarity, your leaders confidence, and your users protection. Start simple, enforce consistently, and build the evidence trail as you go. The payoff is not just audit-readiness; it's trustworthy AI that holds up in the real world.