# ISO 42001 AI Risk Assessment Methodology

> Draft the AI risk assessment methodology for ISO/IEC 42001:2023.



Tags: ISO 42001, Risk Assessment, Methodology, Compliance, AI


## Example Prompts

- Draft our AI risk assessment methodology
- Update our risk criteria and escalation process
- Define how we evaluate ethical risks in AI

URL: https://rakenne.app/skills/iso42001-risk-assessment-methodology/index.md

Try this skill: https://rakenne.app/a/?skill=iso42001-risk-assessment-methodology



## Overview

Draft the AI risk assessment methodology for ISO/IEC 42001:2023.

## Getting Started

Add your source materials to the project workspace, then activate the *ISO 42001 AI Risk Assessment Methodology* skill. The agent will guide you through the process step by step.

## Example conversation

**You:** We need a documented AI risk assessment methodology—criteria, escalation, how we evaluate ethical risks, and how it ties into our overall risk management.

**Agent:** I’ll draft the methodology using the skill reference. I’ll define risk criteria (likelihood and impact scales, acceptability thresholds), how AI risk assessment integrates with the organization’s risk process, escalation rules, methods for evaluating fairness, bias, transparency, and other ethical concerns, risk treatment options, and the review cycle. I’ll align it with your risk register and impact assessment process.

## Output excerpt

### Risk assessment methodology excerpt

**Risk criteria** — Likelihood: 1–5 (rare to almost certain). Impact: 1–5 (negligible to severe). Risk score = L × I; risks scoring ≤ 6 may be accepted without treatment. Risks above the threshold require a treatment plan and a named owner. Ethical impact (e.g., discrimination, explainability) is assessed qualitatively and may raise the impact level.
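The scoring rule above can be sketched as a small function. This is an illustrative assumption, not part of the skill's reference text: the function names, the one-level ethical uplift, and the default threshold of 6 are all hypothetical choices based on the excerpt.

```python
def risk_score(likelihood: int, impact: int, ethical_concern: bool = False) -> int:
    """Score = L x I on 1-5 scales; a qualitative ethical finding may raise impact."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be on a 1-5 scale")
    if ethical_concern:
        # Assumption: an ethical finding raises the impact level by one, capped at 5.
        impact = min(impact + 1, 5)
    return likelihood * impact


def requires_treatment(score: int, threshold: int = 6) -> bool:
    """Scores above the acceptance threshold need a treatment plan and owner."""
    return score > threshold
```

For example, likelihood 2 and impact 3 give a score of 6 (acceptable without treatment), while the same risk with an ethical finding scores 2 × 4 = 8 and would require treatment.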

**Integration** — AI risks are recorded in the AI risk register and reported into the organizational risk process [e.g. quarterly]. Escalation: product owner → AIMS owner → [risk committee] for risks above [X].

**Ethical evaluation** — For each AI system we consider: fairness and bias (protected groups, metrics); transparency (user awareness, explainability); accountability (human oversight, override); safety and unintended use. Findings feed risk score and treatment.

**Review** — Methodology reviewed annually; risk register and impact assessments reviewed [e.g. quarterly or on change].

## Extension and validation

This skill does not include custom validation tools; drafting is guided by the skill workflow and its reference materials.


---

Back to [Skill Library](https://rakenne.app/skills/index.md)
