BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:US/Eastern
BEGIN:DAYLIGHT
DTSTART:20220313T030000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20211107T010000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20211112T183058Z
UID:EA4128C6-3F43-4C89-A944-CF7129B40FB5
DTSTART;TZID=US/Eastern:20211108T120000
DTEND;TZID=US/Eastern:20211108T130000
DESCRIPTION:Abstract. Increasingly important Multi-Domain Operations
  entail multi-modal sensing\, which in turn may require much effort to
  harness the information for exploitation. As a result\, an upsurge in
  research interest in multi-modal fusion has emerged across industry\,
  academia\, and government. Combining this multi-sensor information has
  been of particular interest in inference and target detection to
  enhance performance in challenging and adversarial environments. While
  fusion is not strictly new\, a more principled approach has only
  recently been emerging on account of its ubiquitous need. The viability
  of these fusion approaches strongly hinges on the simultaneous
  functionality of all the sensors\, limiting their efficacy in a real
  environment. The severity of this limitation is even more pronounced in
  unconstrained surveillance settings\, where the environmental
  conditions have a direct impact on the sensors and close manual
  monitoring is difficult or even impractical. Partial sensor failure can
  hence cause a major drop in the performance of a fusion system in the
  absence of timely failure detection. In this talk we will describe a
  data-driven approach to multi-modal fusion\, where optimal features for
  each sensor are selected from a hidden latent space shared among the
  different modalities. This hidden space is learned via a generative
  network conditioned on the individual sensor modalities. The hidden
  space\, as an intrinsic structure\, is then exploited as a palliative
  proxy not only for detecting damaged sensors but also for subsequently
  safeguarding the performance of the fused sensor system. Experimental
  results show that such an approach can make an inference system robust
  against noisy/damaged sensors\, without requiring human intervention to
  inform the system about the damage.\n\nCo-sponsored by: IEEE SP Atlanta
  Chapter & IEEE AESS/GRSS Atlanta Chapter\n\nSpeaker(s): Dr. Hamid
  Krim\n\nVirtual: https://events.vtools.ieee.org/m/284533
LOCATION:Virtual: https://events.vtools.ieee.org/m/284533
ORGANIZER:mailto:wendy.newcomb@gtri.gatech.edu
SEQUENCE:4
SUMMARY:Robust Multi-Modal Sensor Fusion: An Adversarial Approach
URL;VALUE=URI:https://events.vtools.ieee.org/m/284533
X-ALT-DESC:Description: <br /><p><strong>Abstract</strong>. Increasingly
  important Multi-Domain Operations entail multi-modal sensing\, which in
  turn may require much effort to harness the information for
  exploitation. As a result\, an upsurge in research interest in
  multi-modal fusion has emerged across industry\, academia\, and
  government. Combining this multi-sensor information has been of
  particular interest in inference and target detection to enhance
  performance in challenging and adversarial environments. While fusion
  is not strictly new\, a more principled approach has only recently been
  emerging on account of its ubiquitous need. The viability of these
  fusion approaches strongly hinges on the simultaneous functionality of
  all the sensors\, limiting their efficacy in a real environment. The
  severity of this limitation is even more pronounced in unconstrained
  surveillance settings\, where the environmental conditions have a
  direct impact on the sensors and close manual monitoring is difficult
  or even impractical. Partial sensor failure can hence cause a major
  drop in the performance of a fusion system in the absence of timely
  failure detection. In this talk we will describe a data-driven approach
  to multi-modal fusion\, where optimal features for each sensor are
  selected from a hidden latent space shared among the different
  modalities. This hidden space is learned via a generative network
  conditioned on the individual sensor modalities. The hidden space\, as
  an intrinsic structure\, is then exploited as a palliative proxy not
  only for detecting damaged sensors but also for subsequently
  safeguarding the performance of the fused sensor system. Experimental
  results show that such an approach can make an inference system robust
  against noisy/damaged sensors\, without requiring human intervention to
  inform the system about the damage.</p>
END:VEVENT
END:VCALENDAR