Robust Multi-Modal Sensor Fusion: An Adversarial Approach


Abstract. Increasingly important Multi-Domain Operations entail multi-modal sensing, which in turn may require substantial effort to harness the information for exploitation. As a result, an upsurge of research interest in multi-modal fusion has emerged across industry, academia, and government. Combining multi-sensor information has been of particular interest for inference and target detection, to enhance performance in challenging and adversarial environments. While fusion is not strictly new, a more principled approach has only recently emerged on account of its ubiquitous need. The viability of these fusion approaches hinges strongly on the simultaneous functionality of all the sensors, limiting their efficacy in real environments. This limitation is even more pronounced in unconstrained surveillance settings, where environmental conditions directly affect the sensors and close manual monitoring is difficult or even impractical. Partial sensor failure can hence cause a major drop in the performance of a fusion system in the absence of timely failure detection. This talk describes a data-driven approach to multi-modal fusion, in which optimal features for each sensor are selected from a hidden latent space shared among the different modalities. This hidden space is learned via a generative network conditioned on the individual sensor modalities. The hidden space, as an intrinsic structure, is then exploited as a proxy not only for detecting damaged sensors, but also for subsequently safeguarding the performance of the fused sensor system. Experimental results show that such an approach can make an inference system robust against noisy or damaged sensors, without requiring human intervention to inform the system of the damage.
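As a rough illustration of the latent-consensus idea only (not Dr. Krim's actual method, which learns the shared space with a generative network), the sketch below uses hypothetical linear encoders into a common latent space: a modality whose latent disagrees with the cross-modal consensus is flagged as damaged and excluded from fusion, with no human in the loop. All names and maps here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim, obs_dim = 2, 4
# Each modality observes a shared latent state through its own linear map.
A = {m: rng.normal(size=(obs_dim, latent_dim)) for m in ("rgb", "ir", "radar")}
# Stand-in "learned" encoders: pseudo-inverses mapping readings back into
# the shared latent space (a conditioned generative network plays this
# role in the talk's approach).
E = {m: np.linalg.pinv(A[m]) for m in A}

def fuse(readings, tol=1.0):
    """Fuse per-modality latents; a modality whose latent strays from the
    cross-modal consensus by more than `tol` is treated as damaged and
    dropped, so the fused estimate degrades gracefully."""
    latents = {m: E[m] @ x for m, x in readings.items()}
    # Elementwise median across modalities is a robust consensus estimate.
    consensus = np.median(list(latents.values()), axis=0)
    weights = {m: float(np.linalg.norm(z - consensus) <= tol)
               for m, z in latents.items()}
    total = sum(weights.values()) or 1.0
    fused = sum(weights[m] * latents[m] for m in latents) / total
    return fused, weights
```

With all sensors healthy, every encoder recovers the same latent and fusion averages them; if one reading is corrupted (e.g. zeroed out), its latent falls far from the consensus, its weight drops to zero, and the remaining modalities still recover the latent state.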




  • Date: 08 Nov 2021
  • Time: 12:00 PM to 01:00 PM
  • All times are US/Eastern
  • United States
  • Wendy Newcomb

    Chair IEEE SP Atlanta Chapter

    wendy.newcomb@gtri.gatech.edu

    Ryan Bales

    Chair IEEE AESS/GRSS Atlanta chapter

    Ryan.Bales@gtri.gatech.edu

  • Co-sponsored by IEEE SP Atlanta Chapter & IEEE AESS/GRSS Atlanta chapter
  • Starts 06 October 2021 12:00 PM
  • Ends 08 November 2021 01:00 PM
  • No Admission Charge


  Speakers

Dr. Hamid Krim, NC State University

Topic:

Robust Multi-Modal Sensor Fusion: An Adversarial Approach

(See abstract above.)

Biography:

Hamid Krim (ahk@ncsu.edu) received his B.Sc., M.Sc., and Ph.D. degrees in electrical engineering. He was a Member of Technical Staff at AT&T Bell Labs, where he conducted research and development in telephony and digital communication systems and subsystems. Following an NSF postdoctoral fellowship at Foreign Centers of Excellence (LSS/University of Orsay, Paris, France), he joined the Laboratory for Information and Decision Systems at the Massachusetts Institute of Technology, Cambridge, MA, as a Research Scientist, where he performed and supervised research. He is presently Professor of Electrical Engineering in the ECE Department at North Carolina State University, Raleigh, where he leads the Vision, Information and Statistical Signal Theories and Applications group, and recently began a rotation as an IPA at the Army Research Office in Research Triangle Park, NC. His research interests are in statistical signal and image analysis and mathematical modeling, with a keen emphasis on applied problems in classification and recognition using geometric and topological tools. He was recently awarded (together with M. Viberg) the Best Sustained Impact Paper Award by the IEEE Signal Processing Society for a paper that appeared over 25 years ago in IEEE Signal Processing Magazine.




  Media

Dr. Krim Flyer: Robust Multi-Modal Sensor Fusion: An Adversarial Approach (170.37 KiB)