Does EASA’s AI Initiative Require an Immediate FAA Response?

JDA Aviation Technology Solutions

EASA has issued Notice of Proposed Amendment (NPA) 2025‑07, the first step in a two‑stage rulemaking effort to build a comprehensive regulatory framework for AI in aviation. EASA’s public consultation on its AI Trustworthiness Framework is THE FIRST REGULATORY PROPOSAL OF ITS KIND IN GLOBAL AVIATION. The emerging picture is unusually clear: Europe is positioning itself to set the de facto global standard for certifying and overseeing AI in aviation. The NPA is the first attempt to operationalize the EU AI Act’s high‑risk AI requirements inside aviation regulation.

The proposal focuses on:

  • AI assurance (verification, validation, transparency)
  • Human factors and human‑AI teaming
  • Ethics and explainability
  • Data‑driven AI systems (supervised and unsupervised ML)
  • A future extension to reinforcement learning, symbolic/hybrid AI, and generative AI

The NPA is a clear statement of EASA’s intent to regulate not just algorithms but operational behavior, human interaction, and system authority. EASA’s supporting material shows that human‑AI teaming is not a side topic; it is the conceptual core. The Agency is developing guidance on:

  • Operational explainability
  • Shared situational awareness
  • Negotiation and argumentation between human and AI agents
  • Adaptive modalities of interaction (spoken language, gestures, procedural language)
  • New typologies of human error introduced by AI systems

This is a major shift: EASA is treating AI not as a tool but as a team member whose behavior must be predictable, auditable, and aligned with human operators.

The consultation’s closing phase is where the real influence happens. Once Issue 02 is finalized, EASA will begin the second NPA in 2026 to integrate the framework into domain‑specific regulations (flight operations, ATM, maintenance, etc.).

For manufacturers, operators, and ANSPs, this means:

  • Early alignment now will reduce certification friction later.
  • AI‑assisted and human‑AI‑teamed systems (Level 1 and Level 2 AI) will face structured assurance requirements.
  • “Controlled experimentation” with AI in low‑risk domains will be encouraged, but only within the trustworthiness framework.

EASA is moving faster and more comprehensively than any other aviation regulator. Because the EU AI Act is already law, EASA’s framework is likely to become:

  • A global reference model for AI assurance in aviation
  • A template for ICAO discussions
  • A benchmark that FAA, Transport Canada, and others will need to respond to

The closest FAA analogue to EASA’s AI Trustworthiness Framework is the agency’s newly released Roadmap for Artificial Intelligence Safety Assurance (Version I), published in July 2024. It is not identical in scope or regulatory force, but it is the FAA’s foundational action toward governing AI in aviation.

The Roadmap is the FAA’s first formal statement of how it intends to evaluate, qualify, certify, and oversee AI systems in aviation. It lays out principles, methods, and a multi‑year plan for developing assurance frameworks for AI used in aircraft, air traffic systems, and safety‑critical operations.

Key elements from the FAA document include:

  • A strategy to assure the safety of AI before deployment in aviation systems
  • Recognition that AI introduces challenges because it “achieves performance by learning rather than design”
  • A plan to develop methods for both safety of AI and AI for safety (e.g., predictive analytics, anomaly detection)
  • Alignment with Executive Order 14110 (October 2023) on safe and trustworthy AI
  • Emphasis on incremental deployment, starting with low‑risk applications
  • A distinction between learned AI (fixed models) and learning AI (adaptive models), with the latter requiring continuous monitoring

The common reading of the Roadmap, that the FAA is keeping its guidance loose in order to encourage innovation, is a reasonable interpretation, but it needs a little nuance to be accurate.

The FAA’s AI Safety Assurance Roadmap is intentionally broad, high‑level, and non‑prescriptive. That part is absolutely true. But the reason for that approach isn’t simply to “encourage innovation.” It’s a mix of strategic caution, regulatory philosophy, and the FAA’s historical pattern of letting industry mature a technology before drawing hard lines.

It is fair to say that the FAA is deliberately avoiding firm boundaries right now, and that this does create space for innovation. The Roadmap:

  • Avoids defining “acceptable” vs. “unacceptable” AI behaviors
  • Avoids specifying assurance methods
  • Avoids committing to timelines for binding rules
  • Encourages early experimentation in low‑risk domains
  • Emphasizes “learning from deployments” rather than pre‑setting constraints

This is consistent with the FAA’s long‑standing performance‑based, technology‑neutral regulatory philosophy. The FAA openly acknowledges that AI is evolving too quickly for prescriptive rules; it does not want to lock in requirements that become obsolete or counterproductive. Unlike the EU, the U.S. tends to regulate after industry demonstrates stable patterns of use. History suggests that EASA regulates before deployment; the FAA regulates after operational experience accumulates.

It is notable that the two documents that define the U.S. AI aviation approach were ISSUED BY THE BIDEN ADMINISTRATION. ADMINISTRATOR BEDFORD, if he has not already grabbed onto this CRITICAL AVIATION INITIATIVE (perhaps Trump’s Secretary of Transportation has brought the assignment within his building?), SHOULD ANNOUNCE THAT THE FAA IS TAKING IMMEDIATE BIG, BEAUTIFUL ACTIONS ON AI.


EASA’s first regulatory proposal on Artificial Intelligence for Aviation is now open for consultation


As part of EASA’s AI Programme, the Agency has launched a new Notice of Proposed Amendment (NPA) 2025-07 to provide the industry with technical guidance on how to establish ‘AI trustworthiness’ in line with requirements for high-risk AI systems that are contained in the EU AI Act (Regulation (EU) 2024/1689). The NPA is now open for public consultation for 3 months and your comments at this stage are very important.

The publication is the first step of Rulemaking task (RMT) 0742 that will be followed by a second NPA in 2026 to deploy this generic framework to the regulations of the relevant aviation domains.

This publication will help the aviation community to prepare for the future requirements for AI-based assistance (Level 1 AI) and Human-AI teaming (Level 2 AI). It addresses guidance on AI assurance, human factors and ethics. It also covers data-driven AI-based systems (supervised and unsupervised machine learning). The framework will be extended in the future to reinforcement learning, knowledge-based technologies, hybrid and generative AI systems. The current proposal lays a flexible and solid foundation for future adaptation as technology evolves.

EASA’s AI Programme Team would like to thank all the stakeholders who participated in the rulemaking group supporting the development of the current NPA and looks forward to working closely with them also on the second NPA.


Sandy Murdock
