Machine Unlearning for AI Safety

#AI #Safety #Security #Systems

The ability to selectively remove undesirable learned information (such as private data, copyrighted content, or harmful knowledge that could facilitate the misuse of generative models) is increasingly recognized as a critical capability for trustworthy AI. This process, known as machine unlearning (MU), has become essential as generative models are deployed in sensitive domains including healthcare, defense, personalized education, and autonomous systems. In this talk, I will present a systematic, rigorous, and safety-centered exploration of machine unlearning in modern generative AI systems, with a primary focus on large language models (LLMs). Rather than treating unlearning as an isolated task, I will position it as a multidisciplinary frontier shaped by the co-design of optimization, data, and model principles.
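To make the idea of "removing learned information" concrete, one common family of unlearning methods takes gradient *ascent* steps on the loss of a designated forget set, degrading the model's fit to those examples while leaving the rest of training untouched. The sketch below is purely illustrative and not from the talk: all data, names, and the toy 1-D logistic-regression model are invented for demonstration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce_loss(w, b, data):
    """Mean binary cross-entropy of the model p(y=1|x) = sigmoid(w*x + b)."""
    total = 0.0
    for x, y in data:
        p = min(max(sigmoid(w * x + b), 1e-9), 1.0 - 1e-9)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(data)

def grad_step(w, b, data, lr, ascend=False):
    """One gradient step; ascend=True *increases* the loss (unlearning)."""
    gw = gb = 0.0
    for x, y in data:
        err = sigmoid(w * x + b) - y  # dL/d(logit) for cross-entropy
        gw += err * x
        gb += err
    gw /= len(data)
    gb /= len(data)
    s = -lr if ascend else lr  # descend by default, ascend to unlearn
    return w - s * gw, b - s * gb

# Hypothetical data: positive x -> label 1. The forget set is consistent
# with the retained data, so the trained model initially fits it well.
retain = [(1.0, 1), (2.0, 1), (-1.0, 0), (-2.0, 0)]
forget = [(0.5, 1), (-0.5, 0)]

w, b = 0.0, 0.0
for _ in range(200):                      # standard training on all data
    w, b = grad_step(w, b, retain + forget, 0.5)

loss_before = bce_loss(w, b, forget)
for _ in range(20):                       # unlearning: ascend on forget set
    w, b = grad_step(w, b, forget, 0.5, ascend=True)
loss_after = bce_loss(w, b, forget)

print(loss_after > loss_before)  # fit to the forget set has degraded
```

Real LLM unlearning methods are far more involved (they must also preserve utility on retained data, hence the co-design of optimization, data, and model principles mentioned above), but the ascend-on-the-forget-set step captures the basic optimization idea.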



  Date and Time
  • Starts: 07 November 2025, 04:00 PM UTC
  • Ends: 09 November 2025, 04:00 PM UTC

  Location
  • No. 28, West Xianning Road
  • Xi'an, Shaanxi, China 710049
  • Building: Hongli Building
  • Room Number: 4-7151

  Registration
  • No Admission Charge






Agenda