BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Asia/Shanghai
BEGIN:STANDARD
DTSTART:19910915T010000
TZOFFSETFROM:+0900
TZOFFSETTO:+0800
TZNAME:CST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20241206T050232Z
UID:FA16A3E7-C739-4264-91EE-5A1CB4EF78C7
DTSTART;TZID=Asia/Shanghai:20241129T160000
DTEND;TZID=Asia/Shanghai:20241129T170000
DESCRIPTION:Abstract\n\nWe are developing a versatile and efficient service
  robot designed to assist the elderly and individuals in need with their d
 aily tasks. The robot is capable of performing actions such as picking up 
 a water cup and opening doors\, with plans for more advanced interactions 
 in the future. By leveraging our advanced VLA (Vision-Language-Action) fou
 ndation model\, we have achieved promising results in manipulation tasks\,
  demonstrating its effectiveness in handling everyday objects. A key innov
 ation in our approach is the generalizable robot manipulation demonstratio
 n. Once pre-trained\, our robot foundation model can be adapted to various
  new objects\, environments\, and different robot platforms using few-shot
  learning techniques. This capability allows the robot to quickly learn an
 d adapt to its surroundings\, enhancing its utility and effectiveness in r
 eal-world scenarios. By integrating large action foundation models\, we ai
 m to create a service robot that not only performs tasks efficiently but a
 lso interacts meaningfully with people\, ultimately improving the quality 
  of life.\n\nSpeaker(s): Dr. Jianlong Fu\, Principal Research Manager\, Mic
 rosoft Research Asia\n\nVirtual: https://events.vtools.ieee.org/m/441433
LOCATION:Virtual: https://events.vtools.ieee.org/m/441433
ORGANIZER:mailto:charlotte.kobert@ieee.org
SEQUENCE:22
SUMMARY:Embodied Vision\, Language\, and Action Models for Consumer Applica
 tions
URL;VALUE=URI:https://events.vtools.ieee.org/m/441433
X-ALT-DESC;FMTTYPE=text/html:<p class="Default" style="margin-bottom: 1.0p
 t\;"><strong><u><span style="font-size: 11.0pt\; color: #404040\; mso-the
 mecolor: text1\; mso-themetint: 191\;">Abstract </span></u></strong></p>\
 n<p class="MsoNormal"><span lang="EN-CA" style="font-size: 12pt\; font-fa
 mily: Arial\, sans-serif\; color: rgb(64\, 64\, 64)\;">We are developing 
 a versatile and efficient service robot designed to assist the elderly an
 d individuals in need with their daily tasks. The robot is capable of per
 forming actions such as picking up a water cup and opening doors\, with p
 lans for more advanced interactions in the future. By leveraging our adva
 nced VLA (Vision-Language-Action) foundation model\, we have achieved pro
 mising results in manipulation tasks\, demonstrating its effectiveness in
  handling everyday objects. A key innovation in our approach is the gener
 alizable robot manipulation demonstration. Once pre-trained\, our robot f
 oundation model can be adapted to various new objects\, environments\, an
 d different robot platforms using few-shot learning techniques. This capa
 bility allows the robot to quickly learn and adapt to its surroundings\, 
 enhancing its utility and effectiveness in real-world scenarios. By integ
 rating large action foundation models\, we aim to create a service robot 
 that not only performs tasks efficiently but also interacts meaningfully 
 with people\, ultimately improving the quality of life.</span></p>
END:VEVENT
END:VCALENDAR