Rethinking Learning: Beyond Backpropagation Toward Brain-Inspired Computational Intelligence
IEEE VIC CIS Chapter
Abstract:
Backpropagation has powered the modern era of computational intelligence, enabling breakthroughs in perception, language, control, and autonomous systems. Yet as intelligent systems move into dynamic, real-world environments, new demands emerge: continual adaptation, robustness under uncertainty, energy efficiency, and scalable autonomy. These challenges invite a deeper question — are our learning algorithms fundamentally aligned with how intelligence itself operates?
This lecture explores predictive coding as a compelling, brain-inspired alternative for credit assignment in deep systems. Rather than relying on staged forward and backward passes with global error transport, predictive coding formulates learning as the continuous minimization of hierarchical prediction errors through local, parallel, and bidirectional interactions. Recent theoretical advances demonstrate that such dynamics can approximate gradient-based optimization, offering a principled bridge between neuroscience and modern machine learning.
This perspective reframes learning as an energy-minimizing dynamical process, opening new directions in distributed credit assignment, continual learning, robust inference, and neuromorphic implementation. By revisiting the principles of biological intelligence, this lecture argues that the next generation of computational intelligence systems may emerge not from scaling existing algorithms, but from rethinking the foundations of learning itself.
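The abstract's description of predictive coding — relaxing layer activities to minimize hierarchical prediction errors through local interactions, then updating weights from those same local errors — can be illustrated with a minimal toy sketch. The network shape, learning rates, and update rules below are illustrative assumptions for a simple linear case, not the speaker's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-layer linear predictive-coding network (sizes are arbitrary).
sizes = [4, 8, 2]
W = [rng.normal(0, 0.1, (sizes[l + 1], sizes[l])) for l in range(2)]

def energy(x, W):
    """Total squared prediction error summed over layers."""
    return sum(np.sum((x[l + 1] - W[l] @ x[l]) ** 2) for l in range(len(W)))

def infer(x_in, y_target, W, steps=200, lr=0.1):
    """Relax the hidden activities by descending the energy while the
    input and target layers stay clamped. Each update uses only the
    prediction errors adjacent to the layer being updated."""
    x = [x_in, np.zeros(sizes[1]), y_target]
    for _ in range(steps):
        e = [x[l + 1] - W[l] @ x[l] for l in range(2)]
        # Hidden-layer gradient: local error from below minus the error
        # from above fed back through the local weights.
        x[1] -= lr * (e[0] - W[1].T @ e[1])
    return x

def learn(x, W, lr=0.02):
    """Hebbian-like weight update from local pre/post activities."""
    for l in range(2):
        e = x[l + 1] - W[l] @ x[l]
        W[l] += lr * np.outer(e, x[l])
    return W

x_in = rng.normal(size=4)
y = np.array([1.0, -1.0])
E0 = energy([x_in, np.zeros(sizes[1]), y], W)
for _ in range(50):
    x = infer(x_in, y, W)
    W = learn(x, W)
E1 = energy(x, W)
```

Because both the activity relaxation and the weight update descend the same energy, `E1` ends below `E0` — a small-scale instance of learning as the energy-minimizing dynamical process the abstract describes.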
Date and Time
Location
Hosts
Registration
IEEE VIC CIS DLP Talk on Rethinking Learning: Beyond Backpropagation Toward Brain-Inspired Computational Intelligence
- Co-sponsored by IEEE VIC CIS Chapter; IEEE VIC Section
Speakers
Narayan Srinivasa
Rethinking Learning: Beyond Backpropagation Toward Brain-Inspired Computational Intelligence
Biography:
Narayan Srinivasa received his Ph.D. from the University of Florida, Gainesville, in 1994 and was a Beckman Postdoctoral Fellow in the Human-Computer Intelligent Interaction group at the Beckman Institute, University of Illinois at Urbana-Champaign, from 1994 to 1997. Between 1998 and 2015, he was with HRL Laboratories in Malibu, CA, where he became Principal Scientist and Director for Neural and Emergent Systems. At HRL, he worked on a wide range of AI projects, including visual perception and computer vision, signal processing and sensor fusion, brain-inspired computing, and robotics. He joined Intel Labs in 2016 as Chief Scientist to lead the development of neuromorphic technology and played a key role in developing the Loihi neuromorphic chip. He then became a Senior Principal AI Engineer and Director of Machine Intelligence Research Programs at Intel Labs, where he was responsible for accelerating Intel Labs' research on high-risk, high-reward problems for Intel. In November 2025, he joined Arch Systems LLC, where he is Chief AI Scientist, working on both AI software and hardware problems. He holds 126 issued US patents and has published over 105 articles in journals, magazines, and conference proceedings. He is a Fellow of the IEEE and the AAIA.
Address: California, United States
Virtual ONLY
To join the meeting, please register using the vTools link. A Zoom link will be sent to registered participants. Please do not hesitate to contact the host if you have any queries (Dr. Malka N. Halgamuge, malka_nisha@ieee.org).