THE JOURNAL | May 2026

Review of Methodologies for AI Risk Management: EASA DS.AI, NIST AI RMF, and HUDERIA

Why the comprehensive approach is the best way to manage AI risks.


By Kseniia Goncharenko | CEO, Well Digit | May 2026

9 min read

Today, AI systems are no longer limited to laboratories. They operate in real environments such as commerce, healthcare, transport and cybersecurity, with the potential to transform society and the genuine capacity to cause harm to individuals, organisations, communities, and even the environment.

AI risks vary in origin and may be long-term or short-term, high or low probability, systemic or localised. There is growing recognition that traditional risk management methods, created for deterministic software, are not sufficient for adaptive, probabilistic systems.

Legal regulation of AI

The EU AI Act (Regulation (EU) 2024/1689) is the world's first binding horizontal regulation of AI. It establishes a unified framework for the supply and use of AI systems within the EU and entered into force on 1 August 2024.

Providers of high-risk AI systems must comply with Chapter III, Section 2 (Articles 9–15), which covers risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity. The Act defines specific obligations for high-risk AI, including the requirement to implement an AI risk management system.

From August 2026

Requirements for high-risk AI (Annex III) apply — biometrics, critical infrastructure, education, HR, law enforcement, public services, migration, and other listed domains.

From August 2027

The EU AI Act also applies to AI systems that are safety components of regulated products, including aviation.

European lawmakers defined four exceptions (Article 6(3)) under which a system escapes the high-risk classification: (a) it performs a narrow procedural task, (b) it only improves the result of a previously completed human activity, (c) it detects patterns in past decision-making without replacing or influencing the human assessment (passive monitoring), or (d) it performs a purely preparatory task. To rely on an exemption lawfully, organisations must conduct and document an initial risk assessment.

A detailed overview of AI risk management frameworks

There are many standards and best practices that help organisations manage risks in traditional software or information systems — but AI systems present distinct challenges. AI relies on data that can change over time, sometimes unpredictably, affecting both performance and user trust. The complexity of AI and its operating environments makes it difficult to identify and resolve issues as they occur. Because AI is influenced by both technology and human behaviour, its risks and benefits depend on how technical features interact with social factors: usage, management, integration with other AI systems, and the broader social context.

For EU AI Act compliance, consider three approaches — not because they are the most popular, but because each addresses a different category of risk entirely. They have distinct purposes but complement each other in complex AI implementation.

NIST AI RMF

Universal, cross-industry

A voluntary framework by the National Institute of Standards and Technology (US Department of Commerce). Applies across all sectors.

EASA DS.AI

Aviation only

Aviation industry regulation by EASA — currently Notice of Proposed Amendment NPA 2025-07, aligned with EASA AI Roadmap 2.0.

HUDERIA

Socio-legal, cross-industry

Human Rights, Democracy, and Rule of Law Impact Assessment — non-binding guidance by the Council of Europe Committee on Artificial Intelligence (CAI).

NIST AI RMF — the governance basis

The NIST AI Risk Management Framework provides methods that increase the trustworthiness of AI systems and promote responsible design, development, deployment, and use over time. It is divided into two parts. The first establishes a foundation for understanding risk as a function of the probability of an event and the magnitude of its consequences for people, organisations, and ecosystems. The second part, «Core», defines four operational functions:

The four core functions

GOVERN sets organisational policies, roles, and accountability.

MAP identifies who the system affects and what can fail.

MEASURE quantifies risk using existing sector standards.

MANAGE determines action priorities by comparing impact, probability, and available resources.

NIST does not specify what level of risk is acceptable, leaving that to industry standards and organisational judgment. This is intentional: it is a framework for the process, not a standard for the outcome.
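To make the handoff from MEASURE to MANAGE concrete, here is a minimal sketch in Python of one way a prioritisation step could look, assuming 1–5 likelihood and impact scales and an organisation-defined tolerance; none of these values or formulas are prescribed by NIST:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int         # 1 (rare) .. 5 (almost certain); org-defined scale
    impact: int             # 1 (negligible) .. 5 (severe); org-defined scale
    mitigation_cost: float  # relative effort to treat the risk

    @property
    def score(self) -> int:
        # MEASURE output: a composite of probability and magnitude
        return self.likelihood * self.impact

def manage(risks: list[Risk], tolerance: int) -> list[Risk]:
    """MANAGE step: keep risks above the organisation-defined tolerance,
    highest score first, cheaper mitigations before costlier ones."""
    actionable = [r for r in risks if r.score > tolerance]
    return sorted(actionable, key=lambda r: (-r.score, r.mitigation_cost))

register = [
    Risk("training-data poisoning", likelihood=2, impact=5, mitigation_cost=3.0),
    Risk("model drift in production", likelihood=4, impact=3, mitigation_cost=1.5),
    Risk("prompt-injection misuse", likelihood=3, impact=4, mitigation_cost=2.0),
]
for r in manage(register, tolerance=9):
    print(f"{r.name}: score={r.score}")
```

Sorting by score and then by mitigation cost mirrors the comparison of impact, probability, and available resources that the MANAGE function calls for.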

Bias management. NIST identifies three bias types: computational and statistical biases (quantifiable through data errors); human cognitive biases (how individuals interpret AI responses in decision-making); and systemic biases (institutional or societal). These are addressed through diverse development teams, continuous fairness evaluation, algorithm testing, and independent benchmarking.
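As one illustration of what continuous fairness evaluation can mean for the computational and statistical category, here is a minimal sketch of a demographic parity check; the metric choice and the 0.1 threshold are assumptions for the example, not NIST requirements:

```python
def selection_rate(outcomes: list[int]) -> float:
    """Share of favourable decisions (1 = favourable, 0 = not)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_diff(group_a: list[int], group_b: list[int]) -> float:
    """Absolute gap in favourable-outcome rates; 0 means parity."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

approved_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 75.0% favourable
approved_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 37.5% favourable

gap = demographic_parity_diff(approved_a, approved_b)
print(f"parity gap: {gap:.3f}")
if gap > 0.1:  # illustrative org-defined threshold
    print("flag for algorithm testing and independent benchmarking")
```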

Monitoring. Monitoring is part of lifecycle management focused on reliability and business continuity: tracking new threats, reporting AI issues post-deployment, incident response, recovery, and decommissioning. Continuous monitoring of third-party and pre-trained models is also required.
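A sketch of one common post-deployment check, watching a model input for distribution drift with the Population Stability Index; PSI is a widely used convention rather than something NIST names, and the 0.2 alert level is a rule of thumb, not a standard:

```python
import math

def psi(expected: list[float], observed: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline sample and a live
    sample of the same feature. Rule of thumb: < 0.1 stable,
    0.1-0.2 moderate shift, > 0.2 investigate."""
    lo = min(min(expected), min(observed))
    hi = max(max(expected), max(observed))
    width = (hi - lo) / bins or 1.0

    def share(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        return [(c + 1e-6) / len(xs) for c in counts]  # epsilon avoids log(0)

    return sum((o - e) * math.log(o / e)
               for e, o in zip(share(expected), share(observed)))

baseline = [0.20, 0.35, 0.40, 0.55, 0.60, 0.75]  # feature at validation time
live     = [0.70, 0.80, 0.85, 0.90, 0.95, 1.00]  # same feature in production
if psi(baseline, live) > 0.2:
    print("distribution shift detected: open a post-deployment incident review")
```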

Why NIST matters in practice. It enables effective corporate governance, allowing businesses to balance innovation with financial and technical risk. It covers AI-specific cybersecurity threats — data poisoning, evasion attacks, model manipulation — and connects with the NIST Cybersecurity, Privacy, and Risk Management Frameworks.

EASA DS.AI — aviation regulation

EASA DS.AI (Detailed Specifications for AI Trustworthiness) is currently proposed as NPA 2025-07 and is expected to become the binding standard for AI trustworthiness in aviation, aligned with the EU AI Act.

It applies to AI classified as Level 1 (support — AI assists a human who decides) and Level 2 (collaborative — human-AI teaming). Level 3, where AI acts autonomously, is not yet covered. The framework uses a defined hazard scale from H1 (unacceptable — potential for fatalities) to H5 (no risk).
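Purely to make the classification concrete, the levels and hazard scale can be encoded as simple types; the intermediate hazard labels and the scope check are assumptions for illustration, not EASA text:

```python
from enum import Enum

class AILevel(Enum):
    LEVEL_1 = "support: AI assists, the human decides"
    LEVEL_2 = "collaborative: human-AI teaming"
    LEVEL_3 = "autonomous: not yet covered by DS.AI"

class Hazard(Enum):
    H1 = "unacceptable: potential for fatalities"
    H2 = "major"      # intermediate labels assumed for illustration
    H3 = "moderate"
    H4 = "minor"
    H5 = "no risk"

def in_scope(level: AILevel) -> bool:
    """DS.AI as proposed addresses Levels 1 and 2 only."""
    return level is not AILevel.LEVEL_3

print(in_scope(AILevel.LEVEL_2))  # True
```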

EASA's approach includes classification, operational domain definition, risk- and ethics-based assessments, intended behaviour analysis, continuous risk evaluation, and strict technical reliability requirements.

Ethics and psychology. Ethics assessment (based on ALTAI principles) evaluates potential impact on the safety of users and the public. It aims to prevent «de-skilling» (the gradual loss of operator capability), avoid emotional dependence on AI assistants, and ensure high explainability of AI decisions — critical for pilots and flight dispatchers. EASA also requires creation of an internal AI ethics review board.

Data and cybersecurity. EASA establishes strict technical requirements for data management. Recorded data must allow detection of deviations from expected AI behaviour, enabling accurate incident investigation and protection against cyberattacks, including data poisoning. Cyber threats affecting aviation safety are separately regulated by EASA Part-IS.
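As a sketch of the kind of structured record that could support such investigation, assuming hypothetical field names rather than any EASA-mandated schema:

```python
import datetime
import hashlib
import json

def log_decision(model_id: str, model_version: str,
                 inputs: dict, output: dict, confidence: float) -> str:
    """One structured record per AI decision, so deviations from
    expected behaviour can be reconstructed during an investigation."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model_id,
        "version": model_version,
        "inputs": inputs,
        "output": output,
        "confidence": confidence,
    }
    payload = json.dumps(record, sort_keys=True)
    # Integrity hash makes post-hoc tampering detectable (relevant after
    # a data-poisoning incident); a real system would chain or sign it.
    record["sha256"] = hashlib.sha256(payload.encode()).hexdigest()
    return json.dumps(record)

print(log_decision("approach-advisor", "1.4.2",
                   {"altitude_ft": 32000}, {"advice": "maintain"}, 0.97))
```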

AI risk management is expected to integrate into the organisation’s overall Safety Management System and Compliance Monitoring (Quality) System.

HUDERIA — the socio-legal dimension

HUDERIA is positioned as common, structured European guidance for government agencies and private companies that develop or deploy AI. It applies across all stages of the AI lifecycle and focuses on assessing risks to human rights, democracy, and the rule of law. Ukraine is also actively implementing the framework, with adoption promoted by the Committee on Digital Transformation of Ukraine.

The core method is COBRA (Context-Based Risk Analysis), which assesses risks based on the scale, scope, probability, and reversibility of negative human impact. The process includes four stages: COBRA, the Stakeholder Engagement Process (SEP), the Risk and Impact Assessment (RIA), and the Mitigation Plan (MP).
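A minimal sketch of how the four COBRA dimensions could be combined into an initial score, assuming 1–4 ordinal grades and equal weighting; HUDERIA fixes no numeric scales or thresholds, so every value here is an assumption:

```python
# Grade each COBRA dimension from 1 (low) to 4 (high); for
# reversibility, 4 means hardest to reverse. Scale, weights, and the
# review threshold are assumptions; HUDERIA fixes none of these values.
DIMENSIONS = ("scale", "scope", "probability", "reversibility")

def cobra_score(grades: dict[str, int]) -> float:
    """Equal-weight composite, normalised to 0..1."""
    assert set(grades) == set(DIMENSIONS)
    return sum(grades[d] for d in DIMENSIONS) / (4 * len(DIMENSIONS))

system = {"scale": 3, "scope": 4, "probability": 2, "reversibility": 4}
score = cobra_score(system)  # 13 / 16 = 0.81
print(f"initial risk: {score:.2f}")
if score >= 0.5:
    print("proceed to SEP, full RIA, and a mitigation plan (MP)")
```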

Bias and stakeholder engagement. HUDERIA addresses bias through the Stakeholder Engagement Process. It requires direct participation of those affected by the AI, especially vulnerable groups, and mandates «positionality reflection»: developers must examine their own privileges, background, and blind spots to recognise the limits of their own perspective and account for viewpoints that would otherwise be missing from an objective assessment of AI impact. HUDERIA also requires procedural safeguards and access to remedies for those whose rights are violated.

Monitoring focus. Social and cultural drift: societal changes can alter data distributions, so a previously fair model may begin to discriminate or lose effectiveness. Triggers for reassessment include not only technical failures but also legal changes, new forms of misuse, and altered context (repurposing or dual use), with more frequent review in fast-changing environments.
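A sketch of such trigger logic, with the trigger names taken from the paragraph above and everything else assumed for illustration:

```python
# Trigger names follow the paragraph above; values and handling are
# assumed for illustration only.
TRIGGERS = {
    "distribution_drift": False,  # technical: data no longer matches training
    "legal_change": True,         # new law or ruling affecting the domain
    "new_misuse_pattern": False,  # users found an unintended application
    "context_change": False,      # repurposing or dual use of the system
}

def needs_reassessment(triggers: dict[str, bool]) -> bool:
    return any(triggers.values())

if needs_reassessment(TRIGGERS):
    fired = [name for name, on in TRIGGERS.items() if on]
    print("re-run the impact assessment; triggered by: " + ", ".join(fired))
```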

Why HUDERIA matters. It identifies hidden social, ethical, and legal risks that technical frameworks miss. It helps ensure technology is not only technically accurate but also safe and fair for society and vulnerable groups.

Speed of business innovation vs safety and regulation

Businesses want to implement AI as quickly as possible to gain competitive advantage and cut costs, while regulators push for a slower, more controlled approach. McKinsey's Superagency in the Workplace report (January 2025) calls this the «speed versus safety dilemma»: company leaders want to move faster on AI development but face barriers such as regulation, concerns about data leaks, algorithmic errors (AI hallucinations), and potential liability.

At the same time, regulators and researchers argue that the old «learning the hard way» approach is no longer acceptable. Implementation should be gradual, with the possibility of a full ban on AI in critical systems until reliable safeguards are in place.

No single framework captures everything. NIST considers AI from the standpoint of corporate governance and engineering reliability. HUDERIA looks at AI in terms of its effects on people, society, and democracy. EASA contributes the aviation-specific technical base. Their key differences lie in what they assess, their ultimate aims, and how they handle risk tolerance. Each fills the gaps the others leave.

Managing AI risks is not just a compliance formality; it is a key business decision that helps prevent legal, financial, and reputational consequences.

The comprehensive approach is not about doing three times the paperwork — it is about building a risk picture that is technically credible, organisationally accountable, and socially legitimate.

Kseniia Goncharenko — CEO, Well Digit · welldigit.com