Toward Robust, Interactive, and Human-Aligned AI Systems

#robots #uncertainty #safety #humans

Abstract: Ensuring that AI systems do what we, as humans, actually want them to do is one of the biggest open research challenges in AI alignment and safety. Dr. Brown's research seeks to address this challenge directly by enabling AI systems to interact with humans to learn aligned and robust behaviors. The way robots and other AI systems behave is often the result of optimizing a reward function. However, manually designing good reward functions is highly challenging and error-prone, even for domain experts. Although reward functions for complex tasks are difficult to specify manually, human feedback in the form of demonstrations or preferences is often much easier to obtain. Such human data can, however, be difficult to interpret due to ambiguity and noise, so it is critical that AI systems take into account uncertainty over the human's true intent. Dr. Brown's talk will give an overview of his lab's progress along three fundamental research areas: (1) efficiently maintaining uncertainty over human intent, (2) directly optimizing behavior to be robust to that uncertainty, and (3) actively querying for additional human input to reduce uncertainty.
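To make research area (1) concrete, here is a minimal illustrative sketch (not Dr. Brown's actual method) of maintaining a Bayesian posterior over human intent from pairwise preferences, using a standard Bradley-Terry choice model. All names, the single-feature reward parameterization, and the parameter values below are assumptions chosen for illustration.

```python
import numpy as np

# Hypothetical setup: each trajectory is summarized by one scalar feature,
# and the unknown human reward is r(traj) = w * feature for some weight w.
rng = np.random.default_rng(0)

def preference_likelihood(w, feat_a, feat_b, beta=5.0):
    """Bradley-Terry model: probability the human prefers trajectory A
    over trajectory B under reward weight w (beta = rationality level)."""
    return 1.0 / (1.0 + np.exp(-beta * w * (feat_a - feat_b)))

# Discrete grid of candidate reward weights (the hypothesis space).
weights = np.linspace(-1.0, 1.0, 201)
posterior = np.ones_like(weights) / len(weights)  # uniform prior

# Simulated preference data: a noisy human with true weight 0.7
# compares random trajectory pairs.
true_w = 0.7
comparisons = []
for _ in range(30):
    fa, fb = rng.uniform(-1, 1, size=2)
    prefers_a = rng.random() < preference_likelihood(true_w, fa, fb)
    comparisons.append((fa, fb) if prefers_a else (fb, fa))

# Bayesian update: multiply in the likelihood of each observed preference
# (first element of each pair is the preferred trajectory's feature).
for fa, fb in comparisons:
    posterior *= preference_likelihood(weights, fa, fb)
    posterior /= posterior.sum()

map_w = weights[np.argmax(posterior)]
print(f"MAP estimate of reward weight: {map_w:.2f}")
```

Because the full posterior is kept rather than a point estimate, a downstream policy could optimize a lower quantile of reward under it (area 2), or query the human about the pair whose answer most reduces posterior entropy (area 3).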
 





  • Starts 05 October 2025 06:00 AM UTC
  • Ends 23 October 2025 10:30 PM UTC
  • No Admission Charge


  Speakers

Dr. Brown

Biography

Dr. Daniel Brown is an assistant professor in the Kahlert School of Computing and the Robotics Center at the University of Utah. He received an AAAI New Faculty Highlights Award in 2025, an NIH Trailblazer Award in 2024, and was named a Robotics: Science and Systems Pioneer in 2021. Daniel’s research focuses on human-robot interaction, human-AI alignment, and robot learning. His goal is to develop AI systems that can safely and efficiently interact with, learn from, teach, and empower human users. His research spans reward and preference learning, human-in-the-loop machine learning, and AI safety, with applications in assistive and medical robotics, personal AI assistants, swarm robotics, and autonomous driving. He completed his postdoc at UC Berkeley in 2022 and received his Ph.D. in Computer Science from UT Austin in 2020.