AI Risk & Transparency Auditor

    Overview

    Draft and validate a Fundamental Rights Impact Assessment (FRIA) for high-risk AI systems under the EU AI Act (Regulation (EU) 2024/1689). This skill targets the three most compliance-critical areas for high-risk AI:

    • Instructions for Use validation — Checks all 13 Art. 13 transparency elements, from intended purpose and capabilities/limitations through human oversight measures, performance metrics, and prohibited uses.
    • Bias mitigation audit — Scans technical documentation for 11 Art. 10 data governance elements, including bias testing methodology, data balancing techniques, fairness metrics, and protected characteristic analysis.
    • Fundamental rights analysis — Guides assessment of impact on the 8 EU Charter rights most relevant to the Art. 27 assessment, including non-discrimination, effective remedy, and rights of vulnerable groups.

    Getting Started

    Add your existing AI system documentation, Instructions for Use drafts, technical files, or data governance documents to the project workspace. Then activate the AI Risk & Transparency Auditor skill. The agent will guide you through FRIA elaboration section by section — from risk classification and fundamental rights analysis through human oversight design, bias assessment, and final validation.

    Example Conversation

    Below is an example of how a typical FRIA elaboration session unfolds. The agent scopes the AI system, classifies the risk, analyses fundamental rights, validates the Instructions for Use, audits bias documentation, and assembles the complete FRIA.

    You: We need to draft a FRIA for our HR recruitment AI system. It’s a machine learning model that screens incoming CVs and ranks candidates for open positions. We’re a public employment agency in Germany and we need to comply with the EU AI Act before deploying this system.

    Agent: I’ll guide you through the FRIA following the EU AI Act structure. Your system falls under Annex III, category 4: Employment, workers management, and access to self-employment — specifically AI systems used for recruitment or selection of natural persons, which makes it definitively high-risk. Let me confirm a few details:

    1. AI model type: Is this a classification model (accept/reject), a ranking model (scored shortlist), or both?
    2. Input data: What data does the system process? (CV text, structured fields, psychometric data, video?)
    3. Decision autonomy: Does the system make autonomous decisions, or does a human recruiter review all ranked candidates before any action?
    4. Training data: What dataset was the model trained on? Do you have documentation on its composition and demographics?
    5. Scale: How many applications per year will the system process, and across how many Member States?

    You: It’s a ranking model — it scores candidates 0–100 and presents the top 20 to recruiters. It processes CV text and structured data (education, experience, skills). A human recruiter reviews all shortlisted candidates before interviews. It was trained on 500,000 historical hiring decisions from our agency. We process about 200,000 applications per year, Germany only.

    Agent: Important flag: training on historical hiring decisions carries significant bias risk. Historical hiring patterns often encode existing discrimination — for example, if past hiring favoured certain universities, age groups, or implicitly gendered language. Art. 10(2)(f) requires you to examine this dataset for biases affecting fundamental rights.

    Your human-in-the-loop model (recruiters review all shortlisted candidates) is a good starting point for Art. 14 compliance, but we need to verify that recruiters have the ability to override rankings and are trained to recognise automation bias.

    Let me now start the fundamental rights analysis. I’ll go through each of the 8 Charter rights.

    For non-discrimination (Art. 21 Charter) — this is the highest-risk area for an HR ranking system. What protected characteristics have you tested for bias? And what fairness metrics did you use?

    You: We’ve tested for gender bias and age bias using demographic parity. Gender parity ratio is 0.92 and age parity ratio is 0.85. We haven’t tested for other characteristics yet.

    Agent: Two findings:

    1. Age parity ratio of 0.85 is above the commonly accepted 0.80 (four-fifths rule) threshold, but it should still be flagged for review — the EU AI Act does not specify a fixed threshold, and a 15% disparity in ranking across age groups warrants mitigation. I recommend investigating which age groups are disadvantaged and applying targeted data balancing.

    2. You need to test additional protected characteristics. Art. 10(2)(f) requires examination across all relevant protected characteristics. For an HR system in Germany, this should include at minimum: ethnicity/national origin, disability, and religion — in addition to gender and age. Consider also intersectional effects (e.g., older women, young persons with disabilities).
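    A ranking disparity like this maps directly onto the four-fifths rule. Below is a minimal sketch of that check — a hypothetical helper, not the skill's built-in tooling — assuming each applicant record carries a group label and a boolean "shortlisted" outcome:

```python
# Sketch: demographic parity ratio ("four-fifths rule") for a shortlist.
# Hypothetical helper, not part of the skill's tools.

def demographic_parity_ratio(outcomes, group_a, group_b):
    """Selection rate of the less-selected group divided by that of the
    more-selected group. outcomes: list of (group, shortlisted) tuples."""
    def rate(group):
        in_group = [shortlisted for g, shortlisted in outcomes if g == group]
        return sum(in_group) / len(in_group)
    r_a, r_b = rate(group_a), rate(group_b)
    return min(r_a, r_b) / max(r_a, r_b)

# Illustrative data: 100 applicants per age band, 20 vs 17 shortlisted.
outcomes = [("26-54", True)] * 20 + [("26-54", False)] * 80 \
         + [("55+", True)] * 17 + [("55+", False)] * 83

ratio = demographic_parity_ratio(outcomes, "26-54", "55+")
print(f"parity ratio: {ratio:.2f}")  # parity ratio: 0.85
print("four-fifths rule:", "PASS" if ratio >= 0.80 else "FAIL")
```

    A ratio of 1.0 means identical selection rates; values at or above 0.80 are conventionally treated as passing, though — as noted above — a passing ratio can still warrant review.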

    I’ve started building the FRIA from the template. Let me now run the bias_mitigation_audit tool on your technical documentation to get a full compliance picture.

    The audit found 6 / 11 elements covered. Missing elements include: protected characteristics analysis (only 2 of 7+ tested), data gaps documentation, ongoing bias monitoring plan, dataset documentation, and statistical significance of bias tests.

    I’ve also run the ai_risk_transparency_auditor on your Instructions for Use draft — transparency coverage is 9 / 13, missing the prohibited uses section, logging capabilities, conformity declaration, and version history. Shall I draft those sections now?

    Sample Output Excerpt

    Below is a representative excerpt from a FRIA, showing the fundamental rights impact analysis and bias assessment summary — the key sections an EU market surveillance authority checks during a FRIA review.


    Fundamental Rights Impact Assessment — HR Recruitment AI

    AI System: CandidateRank v2.1
    Annex III Category: Employment, workers management (category 4)
    Deployer: Federal Employment Agency, Germany
    Affected persons: ~200,000 job applicants per year

    Fundamental Rights Impact Summary

    | Fundamental Right | Risk Level (Pre-Mitigation) | Mitigation Effectiveness | Residual Risk |
    | --- | --- | --- | --- |
    | Human dignity (Art. 1) | Medium | High | Low |
    | Private life (Art. 7) | Low | High | Low |
    | Personal data (Art. 8) | Medium | High | Low |
    | Non-discrimination (Art. 21) | High | Medium | Medium |
    | Rights of the child (Art. 24) | Low | N/A | Low |
    | Rights of persons with disabilities (Art. 26) | High | Medium | Medium |
    | Effective remedy (Art. 47) | Medium | High | Low |
    | Good administration (Art. 41) | Medium | High | Low |

    Bias Testing Results

    | Protected Characteristic | Metric | Value | Threshold | Status |
    | --- | --- | --- | --- | --- |
    | Gender | Demographic parity | 0.92 | ≥ 0.80 | PASS |
    | Age (18–25 vs 26–65) | Demographic parity | 0.85 | ≥ 0.80 | REVIEW |
    | Age (55+ vs 26–54) | Demographic parity | 0.78 | ≥ 0.80 | FAIL |
    | Ethnicity/national origin | Equalized odds | 0.91 | ≥ 0.85 | PASS |
    | Disability | Demographic parity | 0.83 | ≥ 0.80 | REVIEW |
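    The ethnicity row uses equalized odds rather than demographic parity. A minimal sketch of that metric (hypothetical helper names, not the skill's code): it compares true-positive and false-positive shortlisting rates across two groups and reports the worse of the two ratios.

```python
# Sketch: equalized-odds ratio across two groups.
# records: list of (group, qualified, shortlisted) tuples.

def equalized_odds_ratio(records, group_a, group_b):
    def rates(group):
        tp  = sum(1 for g, y, s in records if g == group and y and s)
        pos = sum(1 for g, y, s in records if g == group and y)
        fp  = sum(1 for g, y, s in records if g == group and not y and s)
        neg = sum(1 for g, y, s in records if g == group and not y)
        return tp / pos, fp / neg  # (TPR, FPR)
    tpr_a, fpr_a = rates(group_a)
    tpr_b, fpr_b = rates(group_b)
    tpr_ratio = min(tpr_a, tpr_b) / max(tpr_a, tpr_b)
    fpr_ratio = min(fpr_a, fpr_b) / max(fpr_a, fpr_b)
    return min(tpr_ratio, fpr_ratio)  # worst-case disparity

# Illustrative data: group A TPR 0.8 / FPR 0.1, group B TPR 0.7 / FPR 0.1.
records = (
      [("A", True, True)] * 8 + [("A", True, False)] * 2
    + [("A", False, True)] * 1 + [("A", False, False)] * 9
    + [("B", True, True)] * 7 + [("B", True, False)] * 3
    + [("B", False, True)] * 1 + [("B", False, False)] * 9
)
print(round(equalized_odds_ratio(records, "A", "B"), 3))  # 0.875
```

    Unlike demographic parity, equalized odds conditions on the ground-truth label, so it flags a model that shortlists equally often across groups but is systematically less accurate for one of them.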

    Human Oversight Implementation

    | Art. 14(4) Requirement | Implementation |
    | --- | --- |
    | (a) Understand capabilities | Mandatory 4-hour training for all recruiters; system documentation provided |
    | (b) Automation bias awareness | Warning displayed with each ranking: “This is an AI-assisted ranking. Review candidates independently.” |
    | (c) Interpret output | Score breakdown shown per category (education, experience, skills) with confidence indicators |
    | (d) Override capability | Recruiters can re-order, add, or remove candidates from the shortlist at any time |
    | (e) Stop mechanism | System administrator can disable ranking for any job posting; emergency stop disables system-wide |

    Result: REVIEW NEEDED — Age bias (55+ group) and disability parity require additional mitigation measures before deployment.

    Built-in Validation Tools

    The skill includes two extension tools that automate the most compliance-sensitive validations in FRIA and AI Act documentation review.

    What the tools check

    | Check Category | ai_risk_transparency_auditor | bias_mitigation_audit |
    | --- | --- | --- |
    | Art. 13 transparency elements (13 items) | Intended purpose, capabilities/limitations, human oversight, performance metrics, risks, input data, prohibited uses, logging, version history, provider identity, interpretability, conformity, fundamental rights | — |
    | Annex III high-risk category detection | Identifies which high-risk category the system falls under | — |
    | Regulatory reference completeness | EU AI Act, EU Charter, GDPR, ISO/IEC 42001 | EU AI Act, GDPR, EU Charter, ISO/IEC 24027, NIST AI RMF |
    | Quantitative performance metrics | Checks for numerical accuracy/precision/recall values and test conditions | — |
    | Art. 10 data governance elements (11 items) | — | Training data, data governance, bias testing, data balancing, fairness metrics, protected characteristics, data gaps, monitoring, dataset docs, special data categories, statistical significance |
    | Bias mitigation techniques | — | Oversampling, SMOTE, re-weighting, adversarial debiasing, pre/in/post-processing techniques |
    | Fairness metrics detection | — | Demographic parity, equalized odds, equal opportunity, disparate impact, predictive parity, calibration |
    | Protected group coverage | — | Race/ethnicity, gender, age, disability, religion, national origin, socioeconomic status |
    | Quantitative bias results | — | Checks for numerical fairness metric outcomes |
    | Unfilled placeholders | Flags remaining markers | Flags remaining markers |
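    At its core, element-coverage checking of this kind can be done with keyword scanning. The sketch below illustrates the idea only — the element names and keywords are illustrative assumptions, not the tools' actual detection rules:

```python
# Sketch: keyword-based coverage check over a documentation draft.
# Elements and keywords are illustrative, not the real audit rules.

ART13_ELEMENTS = {
    "Intended Purpose": ["intended purpose", "intended use"],
    "Capabilities and Limitations": ["capabilities", "limitations"],
    "Human Oversight": ["human oversight", "human-in-the-loop"],
    "Prohibited Uses": ["prohibited use", "foreseeable misuse"],
}

def audit_coverage(document_text, elements):
    """Return {element_name: covered?} based on keyword presence."""
    text = document_text.lower()
    return {name: any(kw in text for kw in keywords)
            for name, keywords in elements.items()}

doc = "The intended purpose is CV ranking. Human oversight: recruiters..."
result = audit_coverage(doc, ART13_ELEMENTS)
print(sum(result.values()), "/", len(result), "elements covered")  # 2 / 4
```

    A production checker would naturally go further (section-aware parsing, regulatory-reference extraction, numeric-metric detection), but the coverage report follows the same shape.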

    Severity levels

    • ERROR — A missing required element (e.g., no intended purpose, no bias testing, no human oversight description) that must be resolved before conformity assessment.
    • WARNING — A potential gap such as missing fairness metrics, absent regulatory references, or incomplete protected characteristic coverage that should be reviewed.
    • INFO — Unfilled placeholders, detected techniques/metrics, or informational notes on high-risk classification.
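    Under these definitions, the roll-up from findings to a final verdict is simple: any ERROR blocks conformity. A sketch of that rule, with assumed data shapes rather than the tools' real output format:

```python
# Sketch: roll up severity-tagged findings into a final verdict.
# Any ERROR finding makes the document NON-COMPLIANT.

def verdict(findings):
    """findings: list of (severity, message) tuples."""
    errors = [msg for severity, msg in findings if severity == "ERROR"]
    return "NON-COMPLIANT" if errors else "COMPLIANT"

findings = [
    ("ERROR", "Missing: Changes and Version History (Art. 13(3)(e))"),
    ("ERROR", "Missing: EU Declaration of Conformity / CE Marking"),
    ("WARNING", "No ISO/IEC 42001 reference found"),
    ("INFO", "High-risk category detected: Employment (Annex III)"),
]
print(verdict(findings))  # NON-COMPLIANT
```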

    Example validation output

    ======================================================================
    AI RISK & TRANSPARENCY AUDIT REPORT
    EU AI Act — Instructions for Use (Art. 13)
    ======================================================================
    Document: output/instructions-for-use.md
    Content size: 28,450 chars
    Transparency elements covered: 11 / 13
    
    --- TRANSPARENCY ELEMENT COVERAGE (Art. 13) ---
      ✓ Intended Purpose / Intended Use (Art. 13(3)(a))
      ✓ Provider Identity and Contact Information (Art. 13(3)(a))
      ✓ AI System Capabilities and Limitations (Art. 13(3)(b))
      ✓ Performance Metrics (Art. 15 / Art. 13(3)(b)(i))
      ✓ Human Oversight Measures (Art. 14)
      ✓ Known and Residual Risks (Art. 13(3)(b)(ii))
      ✓ Input Data Specifications (Art. 13(3)(b)(iii))
      ✓ Output Interpretability / Explainability (Art. 13(1))
      ✓ Prohibited Uses / Reasonably Foreseeable Misuse (Art. 13(3)(b)(iv))
      ✓ Automatic Logging Capabilities (Art. 12 / Art. 13(3)(d))
      ✓ Fundamental Rights Impact Considerations (Art. 27 / Art. 29a)
      ✗ Changes and Version History (Art. 13(3)(e))
      ✗ EU Declaration of Conformity / CE Marking Reference (Art. 47–49)
    
    --- HIGH-RISK CLASSIFICATION (Annex III) ---
      [INFO] High-risk AI category detected: "Employment and workers
             management" (Annex III).
    
    --- SUMMARY ---
      Transparency elements covered: 11 / 13
      Errors:   2
      Warnings: 1
      Info:     3
    
    RESULT: NON-COMPLIANT — resolve errors to meet EU AI Act Art. 13
            transparency obligations.
    ======================================================================
    

    The agent runs both tools iteratively — first the ai_risk_transparency_auditor for Instructions for Use transparency completeness, then the bias_mitigation_audit for data governance and fairness verification — and surfaces findings inline so you can resolve gaps before conformity assessment submission.

    Ready to let your expertise drive the workflow?

    Stop wrestling with rigid templates and complex tooling. Write your process in markdown, let the agent handle the rest.

    Get Started