BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/Chicago
BEGIN:DAYLIGHT
DTSTART:20230312T020000
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:CDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:CST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230721T140544Z
UID:603326A8-E43E-4EE7-ADAD-9E2386EB50F3
DTSTART;TZID=America/Chicago:20230720T183000
DTEND;TZID=America/Chicago:20230720T200000
DESCRIPTION:This is a weekly session of the CIT Summer Series\, with Nael A
 bu-Ghazaleh presenting Security challenges and opportunities at the Inters
 ection of Architecture and ML/AI:\n\nMachine learning is an increasingly i
 mportant computational workload as data-driven deep learning models are b
 ecoming increasingly important in a wide range of application spaces. Comp
 uter systems\, from the architecture up\, have been impacted by ML in two 
 primary directions: (1) ML is an increasingly important computing workload
 \, with new accelerators and systems targeted to support both training and
  inference at scale\; and (2) ML supporting architecture decisions\, with 
 new machine learning based algorithms controlling systems to optimize thei
 r performance\, reliability and robustness. In this talk\, I will explore 
 the intersection of security\, ML and architecture\, identifying both secu
 rity challenges and opportunities. Machine learning systems are vulnerable
  to new attacks including adversarial attacks crafted to fool a classifier
  to the attacker’s advantage\, membership inference attacks attempting t
 o compromise the privacy of the training data\, and model extraction attac
 ks seeking to recover the hyperparameters of a (secret) model. Architectur
 e can be a target of these attacks when supporting ML\, but also provides 
 an opportunity to develop defenses against them\, which I will illustrate 
 with three examples from our recent work. First\, I will show how ML-base
 d hardware malware detectors can be attacked with adversarial perturbatio
 ns to the malware and how we can develop detectors that resist these atta
 cks. Sec
 ond\, I will show an example of a microarchitectural side-channel attack t
 hat can be used to extract the secret parameters of a neural network
  and potential defenses against it. Finally\, I will discuss how arch
 itecture can be used to make ML more robust against adversarial and member
 ship inference attacks using the idea of approximate computing. I will con
 clude by describing some other potential open problems.\n\nSpeaker(s): N
 ael Abu-Ghazaleh\n\nVirtual: https://events.vtools.ieee.org/m/364001
LOCATION:Virtual: https://events.vtools.ieee.org/m/364001
ORGANIZER:mailto:mviron@findaschool.net
SEQUENCE:39
SUMMARY:CIT Summer Series - Nael Abu-Ghazaleh - Security challenges and opp
 ortunities at the Intersection of Architecture and ML/AI
URL;VALUE=URI:https://events.vtools.ieee.org/m/364001
X-ALT-DESC;FMTTYPE=text/html:&lt;p&gt;This is a weekly session of the CIT Summer
  Series\, with Nael Abu-Ghazaleh presenting &lt;strong&gt;Security challenges an
 d opportunities at the Intersection of Architecture and ML/AI&lt;/strong&gt;:&lt;
 /p&gt;\n&lt;p&gt;Machine learning is an increasingly important computational w
 orkload as data-driven deep learning models are becoming increasingly impo
 rtant in a wide range of application spaces. Computer systems\, from the a
 rchitecture up\, have been impacted by ML in two primary directions: (1) M
 L is an increasingly important computing workload\, with new accelerators 
 and systems targeted to support both training and inference at scale\; and
  (2) ML supporting architecture decisions\, with new machine learning base
 d algorithms controlling systems to optimize their performance\, reliabili
 ty and robustness. In this talk\, I will explore the intersection of secur
 ity\, ML and architecture\, identifying both security challenges and oppor
 tunities. Machine learning systems are vulnerable to new attacks including
  adversarial attacks crafted to fool a classifier to the attacker&amp;rsquo\;s
  advantage\, membership inference attacks attempting to compromise the pri
 vacy of the training data\, and model extraction attacks seeking to recove
 r the hyperparameters of a (secret) model. Architecture can be a target of
  these attacks when supporting ML\, but also provides an opportunity to de
 velop defenses against them\, which I will illustrate with three examples 
 from our recent work. First\, I will show how ML-based hardware malware de
 tectors can be attacked with adversarial perturbations to the malware and h
 ow we can develop detectors that resist these attacks. Second\, I will sho
 w an example of a microarchitectural side-channel attack that can be used t
 o extract the secret parameters of a neural network and potential defen
 ses against it. Finally\, I will discuss how architecture can be used
  to make ML more robust against adversarial and membership inference attac
 ks using the idea of approximate computing. I will conclude by describin
 g some other potential open problems.&lt;/p&gt;
END:VEVENT
END:VCALENDAR