BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Asia/Shanghai
BEGIN:STANDARD
DTSTART:19910915T010000
TZOFFSETFROM:+0900
TZOFFSETTO:+0800
TZNAME:CST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20251221T065748Z
UID:25496A6D-D6A4-4DCB-A2F0-E71A46EFBFB7
DTSTART;TZID=Asia/Shanghai:20251110T100000
DTEND;TZID=Asia/Shanghai:20251110T113000
DESCRIPTION:The ability to selectively remove undesirable learned informati
 on (such as private data\, copyrighted content\, or harmful knowledge that
  could facilitate the misuse of generative models) is increasingly recogni
 zed as a critical capability for trustworthy AI. This process\, known as m
 achine unlearning (MU)\, has become essential as generative models are dep
 loyed in sensitive domains including healthcare\, defense\, personalized e
 ducation\, and autonomous systems. In this talk\, I will present a systema
 tic\, rigorous\, and safety-centered exploration of machine unlearning in 
 modern generative AI systems\, with a primary focus on large language mode
 ls (LLMs). Rather than treating unlearning as an isolated task\, we positi
 on it as a multidisciplinary frontier shaped by the co-design of optimizat
 ion\, data\, and model principles.\n\nAgenda: \n[]\n\nRoom: 4-7151\, Bldg:
 Hongli Building\, No.28\, West Xianning Road\, Xi'an\, Shaanxi\, China\, 
 710049
LOCATION:Room: 4-7151\, Bldg: Hongli Building\, No.28\, West Xianning Road\
 , Xi'an\, Shaanxi\, China\, 710049
ORGANIZER:mailto:chaoshen@mail.xjtu.edu.cn
SEQUENCE:9
SUMMARY:Machine Unlearning for AI Safety
URL;VALUE=URI:https://events.vtools.ieee.org/m/512929
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p>The ability to selectiv
 ely remove undesirable learned information (such as private data\, copyrig
 hted content\, or harmful knowledge that could facilitate the misuse of ge
 nerative models) is increasingly recognized as a critical capability for t
 rustworthy AI. This process\, known as machine unlearning (MU)\, has becom
 e essential as generative models are deployed in sensitive domains includi
 ng healthcare\, defense\, personalized education\, and autonomous systems.
  In this talk\, I will present a systematic\, rigorous\, and safety-center
 ed exploration of machine unlearning in modern generative AI systems\, wit
 h a primary focus on large language models (LLMs). Rather than treating un
 learning as an isolated task\, we position it as a multidisciplinary front
 ier shaped by the co-design of optimization\, data\, and model principles.
 </p><br /><br />Agenda: <br /><p><img src="https://events.vtools.ieee.org/
 vtools_ui/media/display/b73583e3-9a0c-4918-9789-4ddbaf560b93" alt=""></p>
END:VEVENT
END:VCALENDAR