Secure Development of Machine Learning Against Poisoning Attacks

#WIE #ML #cybersecurity #poisoning_attack #AI #DevSecOps

Recent research has revealed that machine learning models are vulnerable to adversarial attacks that seek to manipulate the model into undesired behavior or to extract sensitive information. Such vulnerabilities are particularly concerning for aerospace and defense applications, where safety and security are paramount. This research evaluates an approach that combines several attack detection methods in tandem to produce an intrusion detection system (IDS) that provides security at each stage of the model’s lifecycle. The performance of the pipelined detection approach is compared to the performance of each individual detector, with the hypothesis that the combined IDS will result in improved security. The goal of this research is to move toward a practice for secure AI development and operation.
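
As a rough sketch of the pipelined idea (a generic illustration, not the specific detection methods evaluated in this work: the detector functions, thresholds, and fusion rule below are hypothetical), several independent detectors can be fused into a combined IDS that raises an alert whenever any stage flags a sample:

from typing import Callable, List, Sequence

# A detector maps a feature vector to True (attack suspected) or False.
Detector = Callable[[List[float]], bool]

def norm_outlier_detector(x: List[float], threshold: float = 10.0) -> bool:
    # Flags vectors whose Euclidean norm is implausibly large.
    return sum(v * v for v in x) ** 0.5 > threshold

def range_check_detector(x: List[float], lo: float = -5.0, hi: float = 5.0) -> bool:
    # Flags any feature outside an expected operating range.
    return any(v < lo or v > hi for v in x)

def combined_ids(detectors: Sequence[Detector], x: List[float]) -> bool:
    # Logical-OR fusion: the pipeline alerts if any stage alerts.
    return any(detect(x) for detect in detectors)

if __name__ == "__main__":
    detectors = [norm_outlier_detector, range_check_detector]
    clean = [0.1, -0.3, 0.8]
    poisoned = [0.1, 42.0, 0.8]  # implausibly large injected value
    print(combined_ids(detectors, clean))     # False
    print(combined_ids(detectors, poisoned))  # True

In this sketch the stages are fused with a simple logical OR; the research compares such a combined pipeline against each detector used alone.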



  Date and Time

  • Date: 14 Mar 2024
  • Time: 12:00 PM to 01:00 PM
  • All times are (UTC-05:00) Central Time (US & Canada)

  Location

  • 1100 Martin Goland Ave
  • San Antonio, Texas
  • United States 78238
  • Building: 51

  Hosts

  • Co-sponsored by WIE

  Registration

  • Starts 01 March 2024 12:00 AM
  • Ends 14 March 2024 12:00 AM
  • All times are (UTC-05:00) Central Time (US & Canada)
  • No Admission Charge


  Speakers

Dr. Garrett Jares of Southwest Research Institute (SwRI)

Topic:

Secure Development of Machine Learning Against Poisoning Attacks


Biography:

Dr. Garrett Jares is a Research Engineer at Southwest Research Institute (SwRI). He completed his Ph.D. in Aerospace Engineering at Texas A&M University in 2023 and is a 2020 recipient of an NSF Graduate Research Fellowship Program (GRFP) award. During his graduate studies, Dr. Jares focused heavily on aerospace cybersecurity research. His research investigated false data injection attacks on unmanned air systems (UAS) and demonstrated how classical control methods can be used to take over a feedback control system by intercepting and modifying its measurement data.
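
As a rough illustration of this class of attack (a minimal sketch under simplifying assumptions, not Dr. Jares's experimental setup: the plant model, gain, and bias values below are made up), a constant bias injected into the measurement channel of a simple proportional feedback loop shifts the closed-loop system away from its commanded setpoint:

# Hypothetical false data injection on a feedback loop: a proportional
# controller regulates a first-order plant, but an attacker intercepts
# the measurement and adds a constant bias, so the loop settles at an
# attacker-chosen offset from the true setpoint.
KP = 0.8        # proportional gain (illustrative value)
DT = 0.1        # integration time step
SETPOINT = 1.0  # operator-commanded state
BIAS = 0.5      # false data added to the measurement channel

def simulate(bias: float, steps: int = 200) -> float:
    x = 0.0  # plant state
    for _ in range(steps):
        measurement = x + bias             # intercepted and modified feedback
        u = KP * (SETPOINT - measurement)  # controller acts on corrupted data
        x += DT * u                        # simple first-order plant update
    return x

if __name__ == "__main__":
    print(f"no attack:     {simulate(0.0):.3f}")   # ~1.000, tracks the setpoint
    print(f"bias injected: {simulate(BIAS):.3f}")  # ~0.500, shifted by the bias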

Dr. Jares has worked professionally in both aerospace and software engineering roles and is an FAA-certified Part 107 commercial UAS pilot. His experience covers a wide range of topics, including control theory, avionics, embedded systems, autonomy, cybersecurity, cryptography, data science, and software engineering. He has also given several invited presentations to both academia and industry. At SwRI, Dr. Jares serves as a Research Engineer in the Strategic Aerospace Department in Division 16, where his roles include developing software and firmware for embedded systems.

Email:

Address: 6220 Culebra Road, B168, San Antonio, United States, 78238