Security Engineering for Machine Learning

Tags: Security, Machine Learning, ML, Artificial Intelligence, AI, Berryville Institute, Cyber, Risks

Gary McGraw, Ph.D.


Machine Learning appears to have made impressive progress on many tasks, including image classification, machine translation, autonomous vehicle control, and playing complex games such as chess, Go, and Atari video games. This has led to much breathless popular press coverage of Artificial Intelligence and has elevated deep learning to an almost magical status in the eyes of the public. ML, especially of the deep learning sort, is not magic, however.

ML has become so popular that its application, though often poorly understood and partially motivated by hype, is exploding. In my view, this is not necessarily a good thing. I am concerned with the systemic risk invoked by adopting ML in a haphazard fashion. Our research at the Berryville Institute of Machine Learning (BIML) is focused on understanding and categorizing security engineering risks introduced by ML at the design level.

Though the idea of addressing security risk in ML is not new, most previous work has focused either on particular attacks against running ML systems (a kind of dynamic analysis) or on operational security issues surrounding ML. This talk focuses on two threads: building a taxonomy of known attacks on ML, and the results of an architectural risk analysis (sometimes called a threat model) of ML systems in general. A list of the top five (of 78 known) ML security risks will be presented.
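To make the attack side concrete, here is a minimal, self-contained sketch (not from Dr. McGraw's materials) of one of the best-known attack classes such a taxonomy covers: an evasion attack, in which a legitimate input is perturbed at inference time so that a trained model scores it incorrectly. The sketch uses the fast gradient sign method (FGSM) against a toy logistic-regression model; the weights, input, and step size eps are illustrative assumptions.

    # Illustrative evasion attack: fast gradient sign method (FGSM)
    # against a toy logistic-regression model (numpy only).
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for a trained model: weights w, bias b, sigmoid output.
    w = rng.normal(size=10)
    b = 0.1

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def predict(x):
        return sigmoid(w @ x + b)  # P(label = 1 | x)

    x = rng.normal(size=10)  # a legitimate input
    y = 1.0                  # its true label

    # For this model, the gradient of the cross-entropy loss with
    # respect to the input is (p - y) * w. FGSM steps a distance eps
    # in the direction of the gradient's sign to increase the loss.
    eps = 0.25
    grad_x = (predict(x) - y) * w
    x_adv = x + eps * np.sign(grad_x)

    print("clean score:      ", predict(x))
    print("adversarial score:", predict(x_adv))  # pushed toward the wrong class

Larger values of eps push the score further toward the wrong class, at the cost of a larger and more detectable perturbation.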



  Date and Time

  • Date: 18 Feb 2021
  • Time: 07:00 PM to 08:30 PM
  • All times are (UTC-05:00) Eastern Time (US & Canada)

  Location

  • Virtual event (attendance details are sent to registrants)

  Hosts

  • Co-sponsored by the Northern VA/Washington Joint Section Computational Intelligence Society Chapter

  Registration

  • Starts: 23 January 2021, 10:30 AM
  • Ends: 18 February 2021, 12:30 PM
  • All times are (UTC-05:00) Eastern Time (US & Canada)
  • No admission charge


  Speakers

Gary McGraw, Ph.D., Berryville Institute of Machine Learning, https://berryvilleiml.com/

Gary McGraw is co-founder of the Berryville Institute of Machine Learning. He is a globally recognized authority on software security and the author of eight best-selling books on the topic, including Software Security, Exploiting Software, Building Secure Software, Java Security, and Exploiting Online Games; he is also editor of the Addison-Wesley Software Security series. Dr. McGraw has written over 100 peer-reviewed scientific publications. He serves on the advisory boards of Code DX, Maxmyinterest, Runsafe Security, and Secure Code Warrior. He has also served as a board member of Cigital and Codiscope (acquired by Synopsys) and as an advisor to Black Duck (acquired by Synopsys), Dasient (acquired by Twitter), Fortify Software (acquired by HP), and Invotas (acquired by FireEye). Gary produced the monthly Silver Bullet Security Podcast for IEEE Security & Privacy magazine for thirteen years. His dual Ph.D. is in Cognitive Science and Computer Science from Indiana University, where he serves on the Dean's Advisory Council for the Luddy School of Informatics, Computing, and Engineering.





  Agenda

7:00 pm - Opening Comments and Introduction

7:10 pm - Presentation

8:10 pm - Extended Questions

8:30 pm - Close



Virtual event. The Zoom URL will be sent prior to the meeting.

A copy of Dr. McGraw's book "Software Security: Building Security In" will be given away.



  Media

Security Engineering for Machine Learning: Dr. McGraw's slides from the 18-Feb-2021 presentation (3.24 MiB); use only with attribution to Dr. McGraw.