BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20241103T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240427T162506Z
UID:69F23EE8-48E4-4C16-81E7-F59A729363B3
DTSTART;TZID=America/New_York:20240423T152000
DTEND;TZID=America/New_York:20240423T164000
DESCRIPTION:Complex manipulation tasks combine low-level sensorimotor primi
 tives\, such as grasping\, pushing\, and simple arm movements\, with high-
 level reasoning skills\, such as deciding which object to grasp next and w
 here to place it. While low-level sensorimotor primitives have been extens
 ively studied in robotics\, learning how to perform high-level task planni
 ng is relatively less explored. In this talk\, I will present a unified fr
 amework for learning both low and high-level skills in an end-to-end manne
 r from visual demonstrations of tasks performed by humans. The focus is on
  tasks that require manipulating several objects in sequence. The presente
 d new techniques not only enhance current robotic capabilities but also se
 t the stage for future advancements where robots can autonomously perform 
 complex tasks in dynamic environments\, further closing the gap between hu
 man and robotic task execution.\n\nSpeaker(s): Abdeslam Boularias\n\nRoom:
  202\, Bldg: ECE\, ECE Building @NJIT\, Newark\, New Jersey\, United State
 s\, 07102
LOCATION:Room: 202\, Bldg: ECE\, ECE Building @NJIT\, Newark\, New Jersey\,
  United States\, 07102
ORGANIZER:mailto:arnob.ghosh@njit.edu
SEQUENCE:12
SUMMARY:Toward Self-supervised Learning of Robotic Manipulation Tasks
URL;VALUE=URI:https://events.vtools.ieee.org/m/418005
X-ALT-DESC:Description: <br /><p>Complex manipulation tasks combine low-lev
 el sensorimotor primitives\, such as grasping\, pushing\, and simple arm m
 ovements\, with high-level reasoning skills\, such as deciding which objec
 t to grasp next and where to place it. While low-level sensorimotor primit
 ives have been extensively studied in robotics\, learning how to perform h
 igh-level task planning is relatively less explored. In this talk\, I will
  present a unified framework for learning both low and high-level skills i
 n an end-to-end manner from visual demonstrations of tasks performed by hu
 mans. The focus is on tasks that require manipulating several objects in s
 equence. The presented new techniques not only enhance current robotic cap
 abilities but also set the stage for future advancements where robots can 
 autonomously perform complex tasks in dynamic environments\, further closi
 ng the gap between human and robotic task execution.</p>
END:VEVENT
END:VCALENDAR