BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Europe/Rome
BEGIN:DAYLIGHT
DTSTART:20260329T020000
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20251026T030000
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260220T105557Z
UID:9115AD6D-6C08-4F70-849C-5988148693C3
DTSTART;TZID=Europe/Rome:20260217T113000
DTEND;TZID=Europe/Rome:20260217T123000
DESCRIPTION:In this talk\, I will discuss our work on the design of gener
 ative models\, which allow for the realistic generation of talking head
 s. We have placed emphasis on disentangling motion from appearance and h
 ave learned motion representations directly from RGB\, without structura
 l representations such as facial landmarks or 3D meshes. We have aimed a
 t constructing motion as linear displacement of codes in the latent spac
 e. Based on this\, our models LIA (Latent Image Animator) and LIA-X are a
 ble to animate images via navigation in the latent space\, allowing for c
 ontrol over generation.\n\nWhile highly intriguing\, video generation ha
 s thrust upon us the danger of deepfakes\, which offer unprecedented lev
 els of realism in manipulated videos. Deepfakes pose an imminent securit
 y threat to us all\, and to date\, they are able to mislead both face re
 cognition systems and humans. Hence\, we design generation and detection
  methods in parallel.\n\nSpeaker(s): Dr. Antitza Dantcheva\n\nRoom: Room
  N20\, Via V. Volterra 62\, Roma Tre University\, DIIEM\, Rome\, Lazio\,
  Italy\, 00146\, Virtual: https://events.vtools.ieee.org/m/535969
LOCATION:Room: Room N20\, Via V. Volterra 62\, Roma Tre University\, DIIEM\
 , Rome\, Lazio\, Italy\, 00146\, Virtual: https://events.vtools.ieee.org/m
 /535969
ORGANIZER:mailto:emanuele.maiorana@uniroma3.it
SEQUENCE:22
SUMMARY:Generation and Detection of Deepfakes
URL;VALUE=URI:https://events.vtools.ieee.org/m/535969
END:VEVENT
END:VCALENDAR

