Security, Privacy and Trust in AI

#AI #SECURITY #PRIVACY

As AI advances, cybersecurity has become an increasingly important topic to address. Large language models (LLMs) have recently received widespread attention due to their strong performance across a variety of applications. However, the reliability of these models remains limited, and significant risks persist. This talk explains the principles of safety assessment for LLMs and discusses ways to enhance model safety, such as hallucination mitigation. Next, we address the benefits of LLMs for hardware security by exploring bug detection and bug fixing with LLMs. Finally, we discuss several success stories in cybersecurity for AI and highlight issues that still need to be addressed. We conclude the talk with an outlook on using LLMs in deep learning-based side-channel analysis.



  Date and Time

  • Starts 23 June 2024 03:00 PM UTC
  • Ends 10 July 2024 03:00 PM UTC

  Location

  • Building: AI Hall
  • Room Number: 511
  • Seongnam, Gyeonggi-do
  • South Korea 13150

  Hosts

  • Co-sponsored by IEEE Seoul Section Sensors Council Chapter

  Registration

  • No Admission Charge






Agenda

  • 2:30 – 3:15: Risk Assessment, Safety Alignment, and Guardrails for Generative Models (Prof. Xiaoning Liu)
  • 3:15 – 4:00: Bugs Begin, Bugs Begone: Large Language Models and Hardware Security (Dr. Hammond Pearce)
  • 4:00 – 4:45: AI and Cybersecurity: A Perfect Match...or not? (Prof. Stjepan Picek)