Straintronics: Manipulating nanomagnets with strain for causal intelligence

#straintronics #AI #OpenAI

Artificial intelligence (AI) is ubiquitous: it powers self-driving cars, smart appliances, and health monitoring. OpenAI estimates that the computational requirements of AI are growing explosively, by a factor of roughly 100× every two years, about 50× faster than the Moore's-law pace that has governed the evolution of the chip industry. As AI becomes increasingly reliant on deep neural networks (DNNs), energy-efficient hardware assumes paramount importance. Present-day AI dissipates an enormous amount of energy for training and inference: 300 Google searches consume enough energy to boil 1 liter of room-temperature water. Against this backdrop, there is a serious desire to identify a technology that can dramatically reduce energy consumption in DNNs.
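The boiling-water comparison can be sanity-checked with textbook heat-capacity arithmetic. The sketch below is illustrative only; the physical constants are standard values, not figures from the talk, and the implied per-search energy is a back-of-the-envelope estimate.

```python
# Back-of-the-envelope check: 300 Google searches supply enough energy
# to boil 1 liter of room-temperature water. Constants are standard
# physics values (assumptions, not figures from the abstract).
SPECIFIC_HEAT_WATER = 4184.0   # J/(kg*K)
MASS = 1.0                     # kg, i.e. 1 liter of water
DELTA_T = 100.0 - 25.0         # K, room temperature (~25 C) to boiling

energy_to_boil = SPECIFIC_HEAT_WATER * MASS * DELTA_T   # ~314 kJ
energy_per_search = energy_to_boil / 300                # ~1 kJ per search
print(f"~{energy_per_search:.0f} J per search")
```

The result, roughly 1 kJ per search, is in line with commonly quoted estimates of about 0.3 Wh per query, so the abstract's comparison is of the right order of magnitude.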

A promising candidate for such a technology is "straintronics," which manipulates the magnetic states of magnetostrictive nanomagnets with electrically generated strain to elicit myriad non-Boolean computing activities, such as those in DNNs. The energy-delay product associated with switching a nanomagnet's magnetic state with strain is ~10^-27 J·s at room temperature: one order of magnitude lower than that of switching a modern-day FinFET, and more than three orders of magnitude lower than that of switching magnetization with the spin-orbit torques or spin-transfer torques used in STT-RAM.
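The cited ratios can be made concrete with a short numerical sketch. Only the straintronic figure (~10^-27 J·s) comes from the abstract; the FinFET and STT values below are back-computed from the stated "one order" and "three orders" of magnitude and are assumptions for illustration.

```python
# Illustrative comparison of energy-delay products (EDP).
# Only STRAINTRONIC_EDP is taken from the abstract; the other two
# figures are reconstructed from the quoted ratios (assumptions).
STRAINTRONIC_EDP = 1e-27            # J*s, from the abstract
FINFET_EDP = STRAINTRONIC_EDP * 10      # "one order of magnitude" higher
STT_EDP = STRAINTRONIC_EDP * 1e3        # ">three orders of magnitude" higher

def edp_advantage(reference_edp, straintronic_edp=STRAINTRONIC_EDP):
    """Factor by which straintronic switching beats a reference device."""
    return reference_edp / straintronic_edp

print(f"vs FinFET:  ~{edp_advantage(FINFET_EDP):.0f}x")
print(f"vs STT-RAM: ~{edp_advantage(STT_EDP):.0f}x")
```

Because the energy-delay product multiplies switching energy by switching time, a lower EDP means a device can be both faster and less dissipative, which is why it is the figure of merit quoted here.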

We and our collaborators have developed many constructs for processing and communicating information with straintronics for AI. They include neurons and synapses that dissipate minuscule amounts of energy, compact restricted Boltzmann machines for image classification, ternary content-addressable memory with a drastically reduced footprint, hardware accelerators for image processing, Bayesian inference engines, correlators/anti-correlators for probabilistic bits, bit comparators for cyber-security applications, analog computing elements, and (non-volatile) matrix multipliers for machine learning. This talk will describe some of these advances.

 



  Date and Time

  Location

  Hosts

  Registration



  • Date: 14 Oct 2021
  • Time: 07:00 PM to 08:00 PM
  • All times are (GMT-07:00) US/Mountain
  • Boise, Idaho
  • United States

  • Contact Event Hosts
  • Starts 30 September 2021 06:00 PM
  • Ends 14 October 2021 08:00 AM
  • All times are (GMT-07:00) US/Mountain
  • No Admission Charge


  Speakers

Supriyo Bandyopadhyay, Commonwealth Professor at Virginia Commonwealth University and director of the Quantum Device Laboratory in the Department of Electrical and Computer Engineering

Topic:

Straintronics: Manipulating nanomagnets with strain for causal intelligence


Currently supported by the US National Science Foundation under grants CCF-1815033, CCF-2006843 and CCF-2001255

Biography:

Supriyo Bandyopadhyay is Commonwealth Professor at Virginia Commonwealth University and directs the Quantum Device Laboratory in the Department of Electrical and Computer Engineering. He received his bachelor's degree from the Indian Institute of Technology, Kharagpur, India, his M.S. from Southern Illinois University, Carbondale, and his Ph.D. from Purdue University, West Lafayette, Indiana. He was a Visiting Assistant Professor at Purdue, Assistant and Associate Professor at the University of Notre Dame, and Professor at the University of Nebraska-Lincoln before assuming his current position at Virginia Commonwealth University, where he also holds a courtesy appointment as Professor of Physics. Dr. Bandyopadhyay's research spans spintronics, straintronics, energy-efficient computing, and nanoscale self-assembly. His self-assembly work was featured in the U.S. Army Research Office Nanoscience Poster in 1997 as one of four notable advances in nanotechnology. Prof. Bandyopadhyay was named Virginia's Outstanding Scientist by Governor Terence R. McAuliffe in 2016, and his alma mater, the Indian Institute of Technology, Kharagpur, named him a distinguished alumnus the same year. His university bestowed upon him the Distinguished Scholarship Award (given annually to one faculty member in the university) and the University Award of Excellence (the highest honor the university can bestow on a faculty member, given to one individual per year). His department gave him its Lifetime Achievement Award for sustained contributions to scholarship, education, and service (one of two given in the department's history). His earlier employer, the University of Nebraska-Lincoln, conferred on him the College of Engineering Research Award (1998), the College of Engineering Service Award (2000), and the Interdisciplinary Research Award (2001). In 2018, he received the State Council of Higher Education for Virginia Outstanding Faculty Award.
This is the highest award for educators in Virginia's private and public universities and colleges, recognizing outstanding scholarship, teaching, and service. In 2020, he received the Institute of Electrical and Electronics Engineers (IEEE) "Pioneer in Nanotechnology" award. Prof. Bandyopadhyay has authored or co-authored over 400 research publications, presented over 150 invited talks and colloquia across four continents, and authored or co-authored three textbooks. He has taught in India under the GIAN program and conducted collaborative research there with a VAJRA fellowship from the Government of India. He is a Fellow of the IEEE, the American Physical Society, the Institute of Physics (UK), the Electrochemical Society, and the American Association for the Advancement of Science.
