The promise of AI’s power and the application of aviation’s 10⁻⁹ safety risk standard

JDA Aviation Technology Solutions

 

EASA held a very instructive seminar on the subject of “Ethics for AI in Aviation,” and the information garnered from those sessions is reported below. A fair summation of the event is that the aviation safety audience was excited by the prospects of this new computing capability but raised strong questions about AI’s privacy, safety, oversight and de-skilling implications.

An industry that measures acceptable risk as 10⁻⁹ per flight hour, a one-in-a-billion chance of catastrophic failure, is by nature very risk-averse. The extent of the unknowns inherent in using AI’s powerful computing capacity is still a conundrum. AI depends on what it is taught: even basic information entered during the learning phase can carry embedded data points that bring unintended consequences with them. The design of AI systems must therefore define, with absolute certainty, behavior that reliably pursues safety goals.
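
To make the scale of that standard concrete, the following back-of-the-envelope arithmetic is a rough sketch, not from the seminar: the 10⁻⁹ target is conventionally applied per catastrophic failure condition, and the 10⁸ fleet flight hours per year is an assumed order-of-magnitude exposure figure used purely for illustration.

```latex
% Illustrative arithmetic (assumed exposure figure, not from the article):
% expected catastrophic events per year for a single failure condition.
\[
\underbrace{10^{-9}\ \text{per flight hour}}_{\text{certification target}}
\times
\underbrace{10^{8}\ \text{flight hours per year}}_{\text{assumed fleet exposure}}
= 0.1\ \text{expected events per year}
\]
```

In other words, even fleet-wide, the standard tolerates roughly one such event per decade for any single failure condition; that is the bar an AI-based system would have to clear.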

Brian Christian’s The Alignment Problem: Machine Learning and Human Values traces the history of artificial intelligence development. The alignment problem refers to the difficulty of designing AI systems that reliably pursue goals aligned with human values. His research and analysis produced the following guidance, of which aerospace must be mindful:

  • Value Loading: Embedding human values into AI is complex and often ambiguous.
  • Reward Hacking: AI systems can exploit loopholes in their reward structures, achieving goals in ways that defy human expectations (a toy example follows this list).
  • Specification Problem: Precisely defining what we want AI to do is harder than it seems, especially in dynamic or ambiguous environments.
  • Case studies include autonomous vehicles, predictive policing, and healthcare diagnostics. His examples show how misaligned AI can reinforce bias, reduce transparency, or cause harm despite good intentions.
  • One of his most significant insights is that AI pioneers eventually recognized the need for collaboration across computer science, philosophy, psychology, and public policy. When these experts reviewed AI outcomes, they discovered that AI’s application of absolute logic resulted in decisions that were not consistent with broader human values.
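
To make the reward-hacking point concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration (the checkpoint sensor, the action names, the six-step plan); it is not an example from Christian’s book or from EASA. The designer intends the agent to reach a goal position but rewards a proxy signal, a sensor that fires on every crossing, and a naive optimizer discovers that oscillating across the sensor maximizes reward while making zero real progress.

```python
# Toy illustration of reward hacking (all names/values hypothetical).
# Intended goal: reach position 10. Proxy reward: a checkpoint sensor at
# position 1 that fires on EVERY crossing -- the loophole the agent exploits.
from itertools import product

GOAL_POS = 10

def proxy_reward(actions):
    """Designer's proxy: count checkpoint-sensor triggers at position 1."""
    pos, reward = 0, 0
    for a in actions:
        new = pos + (1 if a == "fwd" else -1 if a == "back" else 0)
        if (pos < 1 <= new) or (new < 1 <= pos):  # crossed the sensor
            reward += 1
        pos = new
    return reward

def true_progress(actions):
    """What the designer actually wanted: net distance toward the goal."""
    pos = sum(1 if a == "fwd" else -1 if a == "back" else 0 for a in actions)
    return min(pos, GOAL_POS)

# Exhaustive search over 6-step plans, optimizing the PROXY, not the goal.
best = max(product(("fwd", "back", "stay"), repeat=6), key=proxy_reward)

print("proxy-optimal plan:", best)           # ('fwd','back','fwd','back','fwd','back')
print("proxy reward:", proxy_reward(best))   # 6 -- the maximum achievable
print("true progress:", true_progress(best)) # 0 -- no progress toward the goal
```

The “agent” here is a brute-force search rather than a learned policy, but the failure mode is the one described above: the optimizer satisfies the letter of the reward while defeating its intent.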

Aviation’s reliance on a 10⁻⁹ standard may pose challenges as the industry works its way through the training and implementation of AI.

 

Aviation industry cautiously optimistic about AI, finds EASA survey

 

By Ben Sampson

 


A survey by European aviation regulator EASA has found that aviation professionals feel “cautiously optimistic” about AI’s use in their industry, with their top concerns being privacy, safety, oversight and de-skilling.

The survey of 231 aviation professionals was run in January 2024 and explored eight hypothetical AI application scenarios, asking questions about levels of comfort, trust, and acceptance.

On average, professionals rated their acceptance of AI at 4.4 out of 7, reflecting cautious optimism about its potential while raising concerns about its risks. Around two-thirds of respondents expressed reservations, rejecting at least one AI scenario.

Respondents were most concerned about the limits of AI performance, data protection and privacy, accountability, and possible safety implications.

The survey also found that a strong majority emphasized the need for robust regulation and supervision of AI by EASA and national aviation authorities. In addition, participants were concerned that human knowledge and abilities could degrade if AI took over tasks partially or fully.

The full report is available on the EASA website: Ethics for Artificial Intelligence in Aviation – Aviation Professionals Survey Results 2024/2025 | EASA.

Commenting on the survey, Guillaume Soudain, EASA’s AI programme manager, said, “AI offers tremendous opportunities to improve aviation safety and efficiency, but trust is critical.

“This survey underscores the importance of a balanced regulatory framework, one that ensures the highest level of safety for citizens while also fostering innovation and competitiveness in Europe’s aviation sector.”

AI Days event

Speaking at EASA’s AI Days event in Cologne last week, where the survey was presented and discussed, Christine Berg, head of unit for aviation safety at the European Commission, said, “AI is being deployed to optimize traffic flows, enhance predictive maintenance, and enable autonomous systems. The potential is vast. But so are the safety and certification challenges.

“Aviation is safety-critical by definition. This means we need systems that are not only intelligent, but also explainable, reliable, and certifiable.”

During the AI Days event, EASA representatives shared their work on AI assurance, human factors and ethics-based assessment. A set of workshops gathered feedback from industry, authorities, research centres, and universities to inform EASA’s “AI Roadmap”.

Talks from the FAA and EUROCAE provided a broader perspective on applying AI developments in aviation and reinforced the collaborative spirit between these organisations and EASA.

The second day of the conference focused on AI research. Boeing shared its investigation of an AI-based auto-taxiing system, and Lufthansa Group presented its work on a Large Language Model (LLM)-based troubleshooting assistant.

The day also included discussion panels on human oversight in Level 3 AI and on AI-based operational tools within approved aviation organisations.

Three projects being carried out under the auspices of SESAR-JU were presented and discussed: JARVIS (Just a Rather Very Intelligent System) by Collins, DARWIN (Digital Assistants for Reducing Workload and Collaboration) by Honeywell and SynthAIR (Improved ATM automation and simulation through AI-based universal models for synthetic data generation) by SINTEF.

Recordings and presentations from the AI Days conference can be found here.

 

Sandy Murdock
