The ANSI Blog

How the NIST AI Framework Can Be Used for ISO/IEC 17024 Compliance

Walking through the digital blocks of a NIST AI evolution for ISO/IEC 17024

Artificial Intelligence (AI) is beginning to reshape certification processes from job analysis and test development to exam health checks. While these tools offer speed and efficiency, they also introduce new risks around validity, reliability, and transparency.

The revised ISO/IEC 17024 standard (likely publication January 2026) is expected to include new requirements for certification bodies that use AI in certification activities. The revision sets requirements at a high level, outlining what must be achieved but not prescribing how it should be implemented. Certification bodies will therefore need to interpret and operationalize these requirements within their own systems and processes.

In this context, the NIST AI Risk Management Framework (AI RMF) offers a practical roadmap. By aligning AI adoption with NIST’s framework, certification bodies can strengthen confidence in their processes and demonstrate compliance with ISO/IEC 17024 expectations.

The revision also makes one principle clear: if AI is used in certification, it must be trustworthy, transparent, and subject to human oversight. AI should be seen as a supportive tool that enables more informed decisions, not as an autonomous agent that replaces human judgment.

ISO/IEC 17024: New Requirements for AI Use

The following requirements from the draft ISO/IEC 17024 standard relate to the use of AI systems.

  a. Verify, manage, and monitor AI to ensure intended results.
  b. Validate outcomes of AI with subject matter experts (SMEs).
  c. Ensure personnel have the necessary competence in the oversight and use of AI.
  d. Provide human oversight of AI.
  e. Demonstrate the validity, reliability, and fairness of the use of AI.
  f. Disclose the use of AI and obtain acknowledgement.

Why Certification Bodies Need a Framework

Certification bodies operate in high-stakes environments. Decisions based on flawed AI outputs could undermine the validity, reliability, and fairness of certification decisions and erode the trust of candidates, employers, and regulators.

Applying the NIST AI Risk Management Framework

The NIST AI RMF organizes trustworthy AI into four functions: Govern, Map, Measure, and Manage. Certification bodies can use these functions to align with ISO/IEC 17024’s new requirements.

1. Govern: Set Policies and Responsibilities

ISO/IEC 17024 link: Supports clauses (c) personnel competence in the oversight and use of AI and (d) human oversight of AI.

2. Map: Understand Context and Intended Use

ISO/IEC 17024 link: Supports clauses (b) SME validation of AI outcomes and (f) disclosure of AI use and acknowledgement.

3. Measure: Evaluate and Document Performance

ISO/IEC 17024 link: Supports clause (e) validity, reliability, and fairness of the use of AI.
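As an illustration of the Measure function, the sketch below shows two classic psychometric quality indicators a certification body might document as evidence when validating AI-assisted test development: item difficulty and Cronbach's alpha reliability. This is a minimal, hypothetical example, not part of the standard or the NIST framework; the scoring matrix is invented for demonstration.

```python
# Illustrative sketch: basic "Measure" evidence for an exam scored 0/1.
# The response matrix (candidates x items) is hypothetical.

def item_difficulty(responses):
    """Proportion of candidates answering each item correctly."""
    n = len(responses)
    k = len(responses[0])
    return [sum(row[j] for row in responses) / n for j in range(k)]

def cronbach_alpha(responses):
    """Internal-consistency reliability of the exam (Cronbach's alpha)."""
    k = len(responses[0])

    def var(values):
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / len(values)

    item_vars = [var([row[j] for row in responses]) for j in range(k)]
    total_var = var([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Tiny worked example: 4 candidates, 3 items
scores = [
    [1, 1, 1],
    [1, 1, 0],
    [0, 0, 1],
    [0, 0, 0],
]
print(item_difficulty(scores))          # [0.5, 0.5, 0.5]
print(round(cronbach_alpha(scores), 3)) # 0.6
```

In practice these statistics would be computed over real exam data, documented alongside fairness analyses, and reviewed by SMEs before any AI-generated items are retained.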

4. Manage: Monitor and Improve Over Time

ISO/IEC 17024 link: Supports clause (a) verify, manage, and monitor AI to ensure intended results.
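A Manage-style monitoring step can be as simple as a periodic drift check that routes anomalies to human review rather than acting on them automatically. The sketch below is an assumed workflow, not a prescribed control; the function name, baseline, and tolerance are hypothetical.

```python
# Illustrative sketch: flag pass-rate drift in an AI-assisted exam
# process for SME review. Baseline and tolerance are hypothetical.

def check_pass_rate_drift(results, baseline_rate, tolerance=0.05):
    """Return (pass_rate, needs_review) for a list of pass/fail booleans.

    needs_review is True when the observed pass rate moves beyond the
    tolerance from the validated baseline, triggering human oversight.
    """
    pass_rate = sum(results) / len(results)
    needs_review = abs(pass_rate - baseline_rate) > tolerance
    return pass_rate, needs_review

rate, flagged = check_pass_rate_drift([True] * 70 + [False] * 30,
                                      baseline_rate=0.80)
print(f"pass rate {rate:.0%}, SME review needed: {flagged}")
# prints: pass rate 70%, SME review needed: True
```

The key design point mirrors the article's principle: the check never overrides a certification decision on its own; it only escalates to the human oversight the revised standard requires.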

Documentation and Evidence: A Practical Checklist

Certification bodies should maintain a documented record covering:

  * AI governance policies, roles, and responsibilities
  * The intended use and context of each AI system
  * SME validation of AI outcomes
  * Evidence of validity, reliability, and fairness
  * Ongoing monitoring and performance results
  * Disclosure of AI use and candidate acknowledgement

Maintaining this evidence helps demonstrate ISO/IEC 17024 compliance and reinforces credibility with stakeholders.

Conclusion: Trust Through Transparency

Certification bodies cannot treat AI as a black box. Instead, by leveraging the NIST AI RMF, they can establish governance, map risks, measure validity, and manage performance over time.

With the upcoming revision of ISO/IEC 17024, certification bodies that proactively align their AI practices with these principles will be better positioned to demonstrate compliance and maintain the trust of candidates, employers, and regulators in a rapidly evolving landscape.
