ISO 42001 AI Impact Assessment

    Overview

Draft AI impact assessments (AIIA/AISIA) for ISO/IEC 42001:2023 and related frameworks.

    Getting Started

    Add your source materials to the project workspace, then activate the ISO 42001 AI Impact Assessment skill. The agent will guide you through the process step by step.

    Example conversation

    You: We need an impact assessment for our recommendation engine—stakeholders, potential harm, data use, and ethics—for the auditor and to align with responsible AI.

Agent: I’ll draft an AI impact assessment (AIIA) using the skill reference. I’ll cover alignment with responsible AI principles, transparency requirements, data use and sensitive data implications, ethical considerations, potential harm (e.g. discrimination, privacy), and the stakeholders affected. I’ll link it to the risk register and system description, then run impact_assessment_check on the draft and fix any missing elements until it passes.

    Output excerpt

    Impact assessment excerpt

    System — Recommendation engine; intended purpose: personalizing content for logged-in users.

    Responsible AI alignment — Fairness: we test for demographic parity and monitor outcomes. Transparency: in-app disclosure that recommendations are AI-driven. Accountability: product owner and human review for high-stakes segments.

    Data use — Inputs: [e.g. engagement signals, segment data]. No special-category data in training. Outputs: ranking and scores; no direct PII in output. Data governance procedure applies; retention as per data policy.

    Potential harm — Filter bubble or over-narrow recommendations (mitigation: diversity in ranking). Unintended bias (mitigation: bias testing and monitoring). Low risk of physical or legal harm; no safety-critical use.

    Stakeholders affected — End users (experience, transparency); [e.g. content partners]. Review: [e.g. annually or on major change].

    Extension and validation

    The skill includes impact_assessment_check, which validates the impact assessment draft for required elements: alignment with responsible AI (fairness, transparency, accountability, safety); transparency requirements; data use and sensitive data implications; ethical considerations; potential harm; stakeholders affected; when the assessment was done and when it will be reviewed. Run it after drafting and address any missing elements before audit.
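The real impact_assessment_check ships with the skill and its implementation isn’t shown on this page, but the kind of required-elements scan it performs can be sketched as follows. The element names and keyword lists below are illustrative assumptions, not the skill’s actual rules:

```python
# Hypothetical sketch of a required-elements check for an impact
# assessment draft. Each required element maps to keywords whose
# presence signals that the element is covered in the text.
REQUIRED_ELEMENTS = {
    "responsible AI alignment": ("fairness", "transparency", "accountability", "safety"),
    "data use and sensitive data": ("data use", "sensitive"),
    "ethical considerations": ("ethic",),
    "potential harm": ("harm",),
    "stakeholders affected": ("stakeholder",),
    "assessment and review dates": ("review",),
}

def impact_assessment_check(draft: str) -> list[str]:
    """Return the names of required elements missing from the draft."""
    text = draft.lower()
    return [
        element
        for element, keywords in REQUIRED_ELEMENTS.items()
        if not any(keyword in text for keyword in keywords)
    ]
```

An empty result means every required element was found; anything returned names a gap to address before audit.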

    Ready to let your expertise drive the workflow?

    Stop wrestling with rigid templates and complex tooling. Write your process in markdown, let the agent handle the rest.

    Get Started