Training Neural Networks with In-Memory-Computing Hardware and Multi-Level Radix-4 Inputs


Training Deep Neural Networks (DNNs) requires a very large number of operations, dominated by matrix-vector multiplies (MVMs), often of high dimensionality. In-Memory Computing (IMC) is a promising approach to enhancing MVM efficiency and throughput, but it introduces fundamental tradeoffs with the dynamic range of the computed outputs. While IMC has been successful in DNN inference systems, it has not yet been shown feasible for training, which is more sensitive to dynamic range. This work combines recent advances in alternative radix-4 number formats for DNN training on digital architectures with recent advances in high-precision analog IMC with multi-level inputs to enable IMC training. Furthermore, we implement a mapping of radix-4 operands onto multi-level analog IMC inputs in a manner that improves robustness to analog noise. In simulations calibrated to silicon-measured IMC noise, the proposed approach trains DNNs on the CIFAR-10 dataset to within 10% of the test accuracy of standard DNN training approaches, while analysis shows that further, feasible reductions of IMC noise bring accuracy to within 2% of standard DNN training approaches.
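To make the idea concrete, the sketch below is a minimal Python illustration, not the authors' implementation: the function names, the four-digit unsigned decomposition, and the additive-Gaussian noise model standing in for silicon-measured IMC noise are all assumptions for illustration (real training operands are signed and scaled). Each operand is split into base-4 digits; each digit vector, with values in {0, 1, 2, 3}, drives one analog MVM through the four input levels, and the digit results are recombined digitally with powers of four, so no single analog operation must span the full operand dynamic range.

```python
import numpy as np

def to_radix4_digits(x, n_digits=4):
    """Split non-negative integer operands into base-4 digits,
    least-significant first; each digit lies in {0, 1, 2, 3} and can
    be driven directly as one of four analog input levels."""
    digits, rem = [], x.copy()
    for _ in range(n_digits):
        digits.append(rem % 4)
        rem //= 4
    return digits

def analog_mvm(W, v, sigma=0.02):
    """Model one analog IMC matrix-vector multiply: the ideal result
    plus additive Gaussian noise; sigma (relative to full-scale output)
    is a stand-in for silicon-measured compute noise."""
    y = W @ v
    return y + np.random.normal(0.0, sigma * np.abs(y).max(), size=y.shape)

def radix4_imc_mvm(W, x, n_digits=4, sigma=0.02):
    """Compute W @ x as one low-dynamic-range analog MVM per radix-4
    digit of x, recombining digit outputs digitally with powers of 4."""
    y = np.zeros(W.shape[0])
    for k, digit in enumerate(to_radix4_digits(x, n_digits)):
        y += (4 ** k) * analog_mvm(W, digit.astype(float), sigma)
    return y

# Example: 8-bit unsigned activations -> four radix-4 digits.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 256))
x = rng.integers(0, 256, size=256)
print(np.abs(radix4_imc_mvm(W, x) - W @ x).max())  # residual modeled noise
```

Because each per-digit MVM only sees inputs in {0, 1, 2, 3}, its analog output range stays small relative to a full 8-bit MVM, which is the property the talk connects to IMC's dynamic-range limits and to robustness against analog noise.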



  Date and Time


  • Date: 07 Jun 2024
  • Time: 03:00 PM to 04:00 PM
  • All times are (UTC-04:00) Eastern Time (US & Canada)
  Hosts

  • Contact Event Hosts: timothy.wolfe@afit.edu, tswolfe@ieee.org
  • Co-sponsored by the Wright-Patt Multi-Intelligence Development Consortium (WPMDC) and the DoD & DoE communities


  Speakers

Christopher Grimm

Topic:

Training Neural Networks with In-Memory-Computing Hardware and Multi-Level Radix-4 Inputs


Biography:

Major Christopher L. Grimm Jr. is the Raider Institute Director of Operations for the Weapon Systems Integration Branch at Headquarters Air Force, Department of the Air Force Rapid Capabilities Office (DAF RCO), Wright-Patterson AFB, Dayton, OH. He leads a team of one hundred engineers, program managers, cybersecurity professionals, testers, and software developers across a variety of future-oriented digital innovation efforts, including DevOps pipelines, data platforms, and artificial intelligence and machine learning, for a $4.5 billion software, cyber-physical systems, digital infrastructure, and integration portfolio. He was previously the Advanced Computing Project Lead for the Advanced Systems and Technology Directorate at the National Reconnaissance Office (NRO/AS&T), where he trailblazed novel computing technologies for space, including neuromorphic computing.

Major Grimm received his commission from the U.S. Air Force Academy in 2014. He is a program manager and member of the Acquisition Corps and served with the Intelligence Community working space acquisitions. He has experience in modern software development processes, classified cloud computing and data analytics, cyber red-teaming, system resilience and cybersecurity, space communications resilience strategy, and novel computing technologies. He is a recognized expert in deep learning training methods for novel computing technologies, with multiple IEEE publications.





Agenda

Presentation: "Training Neural Networks with In-Memory-Computing Hardware and Multi-Level Radix-4 Inputs," Christopher Grimm (abstract above).



Please pass the word & invite others.

-----------------------