BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Mexico/General
BEGIN:DAYLIGHT
DTSTART:20210404T030000
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=4
TZNAME:CDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20211031T010000
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20210811T013822Z
UID:5E8FC4BD-1018-40E6-85F9-3639174AD510
DTSTART;TZID=Mexico/General:20210713T210000
DTEND;TZID=Mexico/General:20210713T223000
DESCRIPTION:Abstract\n\nIn some applications\, the domain of interest (i.e.
 \, the target domain) contains very few or even no labeled samples\, while
  an existing domain (i.e.\, the auxiliary domain) is often available with 
 a large number of labeled examples. For example\, millions of loosely labe
 led Flickr photos or YouTube videos can be readily obtained by using keywor
 d-based search. On the other hand\, while users may be interested in ret
 rieving and organizing their own multimedia collections of images and vide
 os at the semantic level\, they may be reluctant to put forth the effort t
 o annotate their photos and videos by themselves. This problem becomes even
  more challenging because the feature distributions of training samples
  from the web domain and consumer domain may differ tremendously in statis
 tical properties. To explicitly cope with the feature distribution mismatc
 h for the samples from different domains\, in this talk I will introduce o
 ur visual domain adaptation approaches under different settings and also d
 escribe their interesting applications in image and video recognition.\n\n
 Speaker(s): Dr. Dong Xu\n\nGuadalajara\, Jalisco\, Mexico\, Virtual: ht
 tps://events.vtools.ieee.org/m/270836
LOCATION:Guadalajara\, Jalisco\, Mexico\, Virtual: https://events.vtools.ie
 ee.org/m/270836
ORGANIZER:mailto:r.calderonr@ieee.org
SEQUENCE:3
SUMMARY:Visual Domain Adaptation\, by Dr. Dong Xu
URL;VALUE=URI:https://events.vtools.ieee.org/m/270836
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p><strong>Abstract</st
 rong></p>\n<p>In some applications\, the domain of interest (i.e.\, the ta
 rget domain) contains very few or even no labeled samples\, while an exis
 ting domain (i.e.\, the auxiliary domain) is often available with a larg
 e number of labeled examples. For example\, millions of loosely labeled F
 lickr photos or YouTube videos can be readily obtained by using keyword-b
 ased search. On the other hand\, while users may be interested in retriev
 ing and organizing their own multimedia collections of images and videos
  at the semantic level\, they may be reluctant to put forth the effort to
  annotate their photos and videos by themselves. This problem becomes eve
 n more challenging because the feature distributions of training samples
  from the web domain and consumer domain may differ tremendously in stati
 stical properties. To explicitly cope with the feature distribution misma
 tch for the samples from different domains\, in this talk I will introduc
 e our visual domain adaptation approaches under different settings and al
 so describe their interesting applications in image and video recognition
 .</p>\n<p>&nbsp\;</p>
END:VEVENT
END:VCALENDAR