Human-AI Teaming for Safety: Lessons from Aviation and Energy Case Studies
As intelligent systems play a larger role in safety-critical domains, they must move beyond acting as “black boxes” and become capable teammates to humans. In aviation, pilots are trained in Crew Resource Management (CRM) skills to collaborate effectively as crew members and reduce risk. By giving intelligent systems comparable skills, Human-AI Teaming (HAT) can deliver similar gains in effectiveness and safety. In this talk, I will present aerospace case studies of pilot-support and spacesuit-assistant systems in which HAT features such as operator-directed management and two-way communication informed design and evaluation. These features reduced workload while maintaining safety, and they point the way toward safer, more collaborative AI systems. Like aviation, the energy sector can require rapid human-AI coordination under high-stakes conditions. I will conclude with a guided discussion of how these HAT features can be applied to results from a PNNL human-AI team study using the IEEE 118 Bus System.
Speakers
Biography:
I bridge human cognition and AI autonomy to make complex systems usable, safe, and trusted. As Principal Investigator on NASA-funded autonomy programs, I led the design of frameworks that reduced pilot workload and improved human-readiness levels of next-gen flight decks. My passion is applying these same principles to AI systems — ensuring they amplify human performance, not replace it.
I partner with cross-functional teams to translate human factors insights into system-level improvements, aligning AI behavior with human intent and mission goals.
Currently reading: "Co-Intelligence: Living and Working with AI" by Ethan Mollick
CV at http://bit.ly/matessa_cv
Let's connect if you're working on human-centered AI challenges in complex operational environments.
Email: mmatessa@gmail.com