Toward Self-supervised Learning of Robotic Manipulation Tasks

#Robotics #Learning

Talk by Professor Abdeslam Boularias


Complex manipulation tasks combine low-level sensorimotor primitives, such as grasping, pushing, and simple arm movements, with high-level reasoning skills, such as deciding which object to grasp next and where to place it. While low-level sensorimotor primitives have been extensively studied in robotics, learning how to perform high-level task planning remains comparatively less explored. In this talk, I will present a unified framework for learning both low- and high-level skills in an end-to-end manner from visual demonstrations of tasks performed by humans. The focus is on tasks that require manipulating several objects in sequence. The presented techniques not only enhance current robotic capabilities but also set the stage for future advancements in which robots autonomously perform complex tasks in dynamic environments, further closing the gap between human and robotic task execution.



  Date and Time

  • Date: 23 Apr 2024
  • Time: 03:20 PM to 04:40 PM
  • All times are (UTC-04:00) Eastern Time (US & Canada)

  Location

  • ECE Building, Room 202, NJIT
  • Newark, New Jersey, United States 07102

  Hosts

  • Contact Event Hosts

  Registration

  • Starts: 20 April 2024, 12:00 AM
  • Ends: 23 April 2024, 03:00 PM
  • All times are (UTC-04:00) Eastern Time (US & Canada)
  • No Admission Charge


  Speakers

Abdeslam Boularias of Rutgers University

Topic:

Toward Self-supervised Learning of Robotic Manipulation Tasks

See the abstract above.

Biography:

Abdeslam Boularias is an Associate Professor in the Department of Computer Science at Rutgers University. He received an engineering degree in computer science from École Nationale Supérieure d'Informatique, Algeria, in 2004, a Master's degree in computer science from Paris-Sud University, France, in 2005, and a Ph.D. from Laval University, Canada, in 2010. He was a postdoctoral researcher at the Max Planck Institute for Intelligent Systems in Germany and at Carnegie Mellon University before joining Rutgers in 2015. He has received multiple awards, including the Best Paper Award in Cognitive Robotics at ICRA and the NSF CAREER Award.
His current research interests lie in robot learning, particularly learning from demonstrations and reinforcement learning in robotics.
