BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
DTSTART:20230312T020000
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:PDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20221106T020000
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:PST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20221208T211935Z
UID:23C171C0-34FC-4C76-8B5A-29C0B877A666
DTSTART;TZID=America/Los_Angeles:20221207T183000
DTEND;TZID=America/Los_Angeles:20221207T200000
DESCRIPTION:Autonomous robots depend on their perception systems to under
 stand the world around them. These machines often leverage a host of sen
 sors\, including cameras\, lidars\, radars\, and ultrasonic sensors\, to
  create this environmental understanding. Stereo cameras play a key role
  in providing depth perception to robotic systems. This depth informatio
 n can be estimated using classical computer vision techniques\, such as 
 semi-global matching (SGM)\, or by deep neural networks (DNNs). Each ind
 ividual algorithm may struggle under a particular set of operating condi
 tions\, but when multiple depth estimation algorithms are leveraged simu
 ltaneously\, more robust depth information can be calculated.\n\nIn this
  talk\, we'll cover work at NVIDIA to train the ESS DNN model for determ
 ining stereo disparity using both synthetic and real-world data so that 
 it performs well where SGM may not. We'll also introduce the Bi3D model\
 , which is trained on the simplified question of "is X closer than M met
 ers?" rather than "how far away is X?"\, yielding improvements in both a
 ccuracy and speed. As every approach has deficiencies on its own\, we'll
  touch upon how ensembling the responses of ESS and Bi3D\, DNNs develope
 d specifically for robotic perception\, with SGM could lead to robust ob
 stacle detection. Finally\, we'll discuss how we've tuned the performanc
 e of these models to run on embedded compute for the responsive stopping
  behavior required in autonomous mobile robots (AMRs).\n\nSpeaker(s): He
 mal Shah\, Gerard Andrews\n\nAgenda: \n6:30 PM Introduction (Tom Coughli
 n)\n\n6:45 PM Talk\n\n7:30 PM Q&A\n\n8:00 PM End\n\nVirtual: https://eve
 nts.vtools.ieee.org/m/332953
LOCATION:Virtual: https://events.vtools.ieee.org/m/332953
ORGANIZER:mailto:tom@tomcoughlin.com
SEQUENCE:14
SUMMARY:Robust depth estimation for robots with stereo cameras using bespok
 e DNNs
URL;VALUE=URI:https://events.vtools.ieee.org/m/332953
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p style="font-weight: 40
 0\;">Autonomous robots depend on their perception systems to understand 
 the world around them. These machines often leverage a host of sensors\,
  including cameras\, lidars\, radars\, and ultrasonic sensors\, to creat
 e this environmental understanding. Stereo cameras play a key role in pr
 oviding depth perception to robotic systems. This depth information can 
 be estimated using classical computer vision techniques\, such as semi-g
 lobal matching (SGM)\, or by deep neural networks (DNNs). Each individua
 l algorithm may struggle under a particular set of operating conditions\
 , but when multiple depth estimation algorithms are leveraged simultaneo
 usly\, more robust depth information can be calculated.</p>\n<p style="f
 ont-weight: 400\;">In this talk\, we'll cover work at NVIDIA to train th
 e ESS DNN model for determining stereo disparity using both synthetic an
 d real-world data so that it performs well where SGM may not. We'll also
  introduce the Bi3D model\, which is trained on the simplified question 
 of "is X closer than M meters?" rather than "how far away is X?"\, yield
 ing improvements in both accuracy and speed. As every approach has defic
 iencies on its own\, we'll touch upon how ensembling the responses of ES
 S and Bi3D\, DNNs developed specifically for robotic perception\, with S
 GM could lead to robust obstacle detection. Finally\, we'll discuss how 
 we've tuned the performance of these models to run on embedded compute f
 or the responsive stopping behavior required in autonomous mobile robots
  (AMRs).</p><br /><br />Agenda: <br /><p>6:30 PM Introduction (Tom Cough
 lin)</p>\n<p>6:45 PM Talk</p>\n<p>7:30 PM Q&amp\;A</p>\n<p>8:00 PM End</
 p>
END:VEVENT
END:VCALENDAR

