How Can Neuroscience Help AI/ML?

#eNeuroLearn #KP #Unni #2022 #75 #Kalwani #AI #ML #talk

Inspired by complex systems in nature, we will present some unique features of mammalian sensory systems from a Neuroscience perspective. Many of these algorithmic and architectural features are absent from current-generation Deep Learning systems. Drawing on our recent work on Neural Networks and Capsule Networks with generative and attentional mechanisms, we show how these features can be incorporated into next-generation Deep Learning systems.

Real-time sensory processing, scene analysis, and object recognition are needed before vision systems can become practical in Autonomous Vehicles. Multi-modal integration is natural in next-generation Deep Learning systems, and we discuss how auditory signals can aid vision in these systems. We end the talk with a discussion of how such systems, with “Neuroscience Inside”, can make Autonomous Vehicles see like humans.

KP Unnikrishnan
eNeuroLearn, Ann Arbor, MI


  Date and Time

  Location

  Hosts

  Registration



  • Date: 19 Apr 2022
  • Time: 06:00 PM to 07:30 PM
  • All times are (GMT-05:00) US/Eastern
  • Rochester, Michigan
  • United States 48309-4479
  • Building: Virtual
  • Room Number: Digital

  • Contact Event Host
  • Zoom will be needed for this event!

  • Co-sponsored by Subramaniam Ganesan
  • Starts 30 March 2022 04:42 AM
  • Ends 19 April 2022 04:42 AM
  • All times are (GMT-05:00) US/Eastern
  • No Admission Charge


  Speakers

Dr KP Unnikrishnan

Topic:

How Can Neuroscience Help AI/ML?


Biography:

KP Unnikrishnan is the co-founder and scientific director of eNeuroLearn, an Ann Arbor-based AI/ML startup. eNeuroLearn brings architectures and algorithms from Neuroscience to enhance Deep Learning. He has worked in Neural Networks, Computational Neuroscience, Data Mining, and Deep Learning for the past 35 years. He has a PhD in Physics from Syracuse University and has worked at Bell Labs, Caltech, the University of Michigan, General Motors Research, NorthShore University HealthSystem, and Ford Motor Company.



Address: Ann Arbor, Michigan, United States





Agenda

6:00 PM - Welcome and Introductions; Chapter business update

6:05 PM - Start of talk

7:00 PM - Formal end of Talk; Start of Q & A; Group Discussion

7:30 PM - Wrap Up 




An IEEE Presentation, open to all.



  Media

Event Flyer: How Can Neuroscience Help AI/ML? (386.55 KiB)