Webinar: Energy-Efficient and Secure AI Hardware
Approximate computing has been widely explored as a means to improve the energy efficiency of deep neural networks (DNNs) for edge AI hardware. However, both accurate and approximate DNNs remain inherently vulnerable to faults and adversarial attacks, and the reliability and adversarial robustness of approximate DNNs are still largely unexplored. This gap presents an opportunity to rethink how we design sustainable and dependable AI hardware. In this seminar, I will discuss our recent advances in post-fabrication fault mitigation and in the principled selection of hardware approximation techniques that preserve adversarial robustness. I will highlight how we leverage emerging techniques and paradigms, such as explainable artificial intelligence, neural architecture search, moving target defense, and neuromorphic computing, to ensure reliability, security, and energy efficiency. The seminar outlines a path toward AI hardware that is not only high-performing and energy-aware but also resilient against both hardware faults and security threats.
Biography
Ayesha Siddique is an Assistant Professor at the University of Maine. She received her Ph.D. in Electrical and Computer Engineering in the Dependable Cyber-Physical Systems (DCPS) Laboratory at the University of Missouri, Columbia, Missouri, USA, in 2020. Her research interests include reliable, secure, and energy-efficient AI hardware; secure neuromorphic edge intelligence; and explainable AI-guided hardware acceleration. She has published several high-quality research papers, including three journal papers in IEEE Transactions on Very Large Scale Integration (VLSI) Systems (TVLSI) and IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), and four peer-reviewed papers at top conferences: ISQED 2021 (acceptance rate 33%), DATE 2022 (acceptance rate 21%), and DATE 2023 (acceptance rate 22%).
Speakers
Ayesha Siddique
Reliable, Secure and Energy-Efficient AI Hardware
Media
Energy-Efficient_and_Secure_AI_Hardware (1.82 MiB)