New Paradigms in Edge Sensing & Perception Systems
Technical seminar by Professor Amin Arbabian of Stanford University.
AI-driven machine perception is transforming domains such as robotics, healthcare, consumer electronics, and the IoE. As neural networks scale and edge sensors generate ever-larger data volumes, inference tasks are becoming increasingly resource-intensive, pushing up against computational limits. This talk explores two key aspects of this trend. First, we introduce a new 3D sensing paradigm that exemplifies the dramatic increase in data rates for next-generation physical AI systems. Second, we present a neuroscience-inspired adaptive inference framework that addresses processing bottlenecks in edge-based AI. We derive theoretical bounds and provide empirical results showing 10–100× efficiency gains in vision and language tasks. We further highlight how optimal design of adaptive inference state spaces can unlock even greater computational savings.
Location
Hosts
- Macleod Building, Room 3038
- 2356 Main Mall
- Vancouver, British Columbia, Canada V6T 1Z4
- Contact: sudip@ece.ubc.ca
Speakers
Amin Arbabian of Stanford University
Biography:
Amin Arbabian received his Ph.D. from UC Berkeley in 2011 and has been a Professor of Electrical Engineering at Stanford University since 2012, where he also served as Faculty Co-Director of the Stanford SystemX Alliance. Previously, he was a founding engineer at Tagarray (now Maxim Integrated) and designed ultra-low-power wireless transceivers at Qualcomm Corporate R&D. He is also the co-founder of Plato Systems, a Spatial Intelligence platform company. His current research focuses on multi-modality sensing and perception systems, integrating RF/mmWave, ultrasound, and optical signals for applications in industrial operations, autonomy, biomedicine, and AgTech, with programs supported by DARPA, ONR, NSF, DOE/ARPA-E, and NIH.
Professor Arbabian has received the Stanford University Tau Beta Pi Award for Excellence in Undergraduate Teaching, the NSF CAREER Award, the DARPA Young Faculty Award, and the Hellman Faculty Scholarship. He and his students are three-time recipients of the Qualcomm Innovation Fellowship. With his students and collaborators, he has also received multiple Best Paper Awards from venues including ISSCC (Jack Kilby Award), the Symposium on VLSI Circuits, the IEEE RFIC Symposium (twice), the IEEE Biomedical Circuits and Systems Conference, and the IEEE Transactions on Biomedical Circuits and Systems.

Address: Stanford, California, United States