BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
DTSTART:20210314T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20201101T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20210219T050350Z
UID:B3F222B0-2162-4F6D-91A4-1F6400637DBE
DTSTART;TZID=America/New_York:20210218T190000
DTEND;TZID=America/New_York:20210218T203000
DESCRIPTION:Machine Learning appears to have made impressive progress
  on many tasks including image classification\, machine translation\,
  autonomous vehicle control\, playing complex games such as chess\,
  Go\, and Atari video games\, and more. This has led to much breathless
  popular press coverage of Artificial Intelligence\, and has elevated
  deep learning to an almost magical status in the eyes of the public.
  ML\, especially of the deep learning sort\, is not magic\, however.
  ML has become so popular that its application\, though often poorly
  understood and partially motivated by hype\, is exploding. In my
  view\, this is not necessarily a good thing. I am concerned with the
  systemic risk incurred by adopting ML in a haphazard fashion. Our
  research at the Berryville Institute of Machine Learning (BIML) is
  focused on understanding and categorizing security engineering risks
  introduced by ML at the design level. Though the idea of addressing
  security risk in ML is not a new one\, most previous work has focused
  on either particular attacks against running ML systems (a kind of
  dynamic analysis) or on operational security issues surrounding ML.
  This talk focuses on two threads: building a taxonomy of known
  attacks on ML and the results of an architectural risk analysis
  (sometimes called a threat model) of ML systems in general. A list of
  the top five (of 78 known) ML security risks will be
  presented.\n\nCo-sponsored by: Northern VA/Washington Joint Section
  Computational Intelligence Society Chapter\n\nSpeaker(s): Gary
  McGraw\, Ph.D.\n\nAgenda: \n7:00 pm - Opening Comments and
  Introduction\n\n7:10 pm - Presentation\n\n8:10 pm - Extended
  Questions\n\n8:30 pm - Close\n\nVirtual:
  https://events.vtools.ieee.org/m/258562
LOCATION:Virtual: https://events.vtools.ieee.org/m/258562
ORGANIZER:mailto:schulman@computer.org
SEQUENCE:9
SUMMARY:Security Engineering for Machine Learning
URL;VALUE=URI:https://events.vtools.ieee.org/m/258562
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p>Machine Learning
  appears to have made impressive progress on many tasks including
  image classification\, machine translation\, autonomous vehicle
  control\, playing complex games such as chess\, Go\, and Atari video
  games\, and more. This has led to much breathless popular press
  coverage of Artificial Intelligence\, and has elevated deep learning
  to an almost magical status in the eyes of the public. ML\,
  especially of the deep learning sort\, is not magic\, however. ML has
  become so popular that its application\, though often poorly
  understood and partially motivated by hype\, is exploding. In my
  view\, this is not necessarily a good thing. I am concerned with the
  systemic risk incurred by adopting ML in a haphazard fashion. Our
  research at the Berryville Institute of Machine Learning (BIML) is
  focused on understanding and categorizing security engineering risks
  introduced by ML at the design level. Though the idea of addressing
  security risk in ML is not a new one\, most previous work has focused
  on either particular attacks against running ML systems (a kind of
  dynamic analysis) or on operational security issues surrounding ML.
  This talk focuses on two threads: building a taxonomy of known
  attacks on ML and the results of an architectural risk analysis
  (sometimes called a threat model) of ML systems in general. A list of
  the top five (of 78 known) ML security risks will be
  presented.</p><br /><br />Agenda: <br /><p>7:00 pm - Opening Comments
  and Introduction</p>\n<p>7:10 pm - Presentation</p>\n<p>8:10 pm -
  Extended Questions</p>\n<p>8:30 pm - Close</p>
END:VEVENT
END:VCALENDAR