Designing for AI-enabled Audio IoT: A case for performing at the edge


Once confined to cloud servers with practically infinite resources, machine learning is moving into edge devices for several reasons: lower latency, reduced cost, energy efficiency, and enhanced privacy. The time needed to send data to the cloud for interpretation can be prohibitive for latency-critical tasks such as pedestrian recognition in a self-driving car. The bandwidth needed to send data to the cloud can be costly, not to mention the cost of the cloud service itself, as in speech recognition for voice commands.


Energy is a trade-off between sending data back and forth to a server versus processing it locally. Machine learning computations are complex and can easily drain the battery of an edge device if not executed efficiently. Edge decisions also keep data on-device, which is important for user privacy, for example sensitive emails dictated by voice on a smartphone. Audio AI is a rich example of inference at the edge, and a new type of digital signal processor (DSP) specialized for audio machine learning use cases can enable better performance and new features at the edge of the network.


Once an edge device is enabled for always-on audio machine learning, it can do more than low-power speech recognition: contextual awareness (such as whether the device is in a crowded restaurant or on a busy street), ambient music recognition, ultrasonic room recognition, and even recognizing whether someone nearby is shouting or laughing. These features will enable sophisticated new use cases that improve the edge device and benefit the user.





  • 2800 Scott Blvd
  • Santa Clara, California
  • United States 95050
  • Building: Nvidia Building E



Dr. Jim Steele



Designing for AI-enabled Audio IoT: A case for performing at the edge


Jim Steele is the VP of Technology Strategy at Knowles Intelligent Audio. He has a track record of leading successful development of machine learning algorithms, software, hardware, and system engineering for mobile and IoT products. He joined Knowles through acquisition and, prior to that, led his motion-sensor startup to a successful acquisition as well. Jim has held senior management positions at Spansion, Polaris Wireless, and ArrayComm, working on a variety of complex systems from audio solutions to location-based technology. He has held research positions in theoretical physics at the Massachusetts Institute of Technology and The Ohio State University. He is the lead author of The Android Developer’s Cookbook, which was designed to help application developers start working on the Android mobile operating system. He is also a noted speaker who has given many invited lectures. Jim holds a Ph.D. in theoretical physics from the State University of New York at Stony Brook.



6:30pm - 7:00pm: Registration, Food, Networking
7:00pm - 8:00pm: Talk
8:00pm - 8:30pm: Q&A and Networking