BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
DTSTART:20250309T030000
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:PDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20251102T010000
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:PST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250611T013818Z
UID:DDB33BFD-1736-435E-B47C-733BCEFC742A
DTSTART;TZID=America/Los_Angeles:20250610T173000
DTEND;TZID=America/Los_Angeles:20250610T183000
DESCRIPTION:In this talk\, Dr. Ali Siahkoohi highlights the risks of cur
 rent industrial AI practices that train large-scale generative models o
 n vast amounts of data scraped from the internet. This process unwittin
 gly leads to training newer models on increasing amounts of AI-synthesi
 zed data that is rapidly proliferating online\, a phenomenon Dr. Siahko
 ohi refers to as "model autophagy" (self-consuming models). He shows tha
 t without a sufficient influx of fresh\, real data at each stage of an a
 utophagous loop\, future generative models will inevitably suffer a dec
 line in either quality (precision) or diversity (recall). To mitigate t
 his issue\, and inspired by fixed-point optimization\, a penalty is int
 roduced into the loss function of generative models that minimizes disc
 repancies between the model's weights when trained on real versus synth
 etic data. Since computing this penalty would require training a new ge
 nerative model at each iteration\, a permutation-invariant hypernetwork i
 s proposed to make evaluating the penalty tractable by dynamically mapp
 ing data batches to model weights. This ensures scalability and seamles
 s integration of the penalty term into existing generative modeling par
 adigms\, mitigating biases associated with model autophagy. Additionall
 y\, this penalty improves the representation of minority classes in imb
 alanced datasets\, a key step toward enhancing fairness in generative m
 odels.\n\nSpeaker(s): Ali Siahkoohi\n\nAgenda:\n- Invited talk from Dr. Al
 i Siahkoohi\, Assistant Professor in the University of Central Florida'
 s Computer Science Department.\n- Q/A Session\n\nVirtual: https://event
 s.vtools.ieee.org/m/486945
LOCATION:Virtual: https://events.vtools.ieee.org/m/486945
ORGANIZER:mailto:upalmahbub@yahoo.com
SEQUENCE:45
SUMMARY:Mitigating Biases in Self-consuming Generative Models
URL;VALUE=URI:https://events.vtools.ieee.org/m/486945
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p>In this talk\, Dr. A
 li Siahkoohi highlights the risks of current industrial AI practices th
 at train large-scale generative models on vast amounts of data scraped fr
 om the internet. This process unwittingly leads to training newer model
 s on increasing amounts of AI-synthesized data that is rapidly prolifer
 ating online\, a phenomenon Dr. Siahkoohi refers to as "model autophagy"
  (self-consuming models). He shows that without a sufficient influx of fr
 esh\, real data at each stage of an autophagous loop\, future generativ
 e models will inevitably suffer a decline in either quality (precision) o
 r diversity (recall). To mitigate this issue\, and inspired by fixed-po
 int optimization\, a penalty is introduced into the loss function of ge
 nerative models that minimizes discrepancies between the model's weight
 s when trained on real versus synthetic data. Since computing this pena
 lty would require training a new generative model at each iteration\, a p
 ermutation-invariant hypernetwork is proposed to make evaluating the pe
 nalty tractable by dynamically mapping data batches to model weights. T
 his ensures scalability and seamless integration of the penalty term in
 to existing generative modeling paradigms\, mitigating biases associate
 d with model autophagy. Additionally\, this penalty improves the repres
 entation of minority classes in imbalanced datasets\, a key step toward e
 nhancing fairness in generative models.</p><br /><br />Agenda: <br /><u
 l>\n<li>Invited talk from Dr. Ali Siahkoohi\, Assistant Professor in th
 e University of Central Florida's Computer Science Department.</li>\n<l
 i>Q/A Session</li>\n</ul>
END:VEVENT
END:VCALENDAR