Security, Privacy and Trust in AI
As AI advances, cybersecurity has become an increasingly important topic to address. Large language models (LLMs) have recently received widespread attention due to their strong performance across a variety of applications, yet their reliability is limited and risks remain. This talk explains the principles of safety assessment for LLMs and discusses ways to enhance model safety, such as hallucination mitigation. Next, we address the benefits of LLMs for hardware security by exploring bug detection and bug fixing with LLMs. Finally, we present several success stories in cybersecurity for AI and highlight issues that still need to be addressed. We conclude with an outlook on using LLMs in deep learning-based side-channel analysis.
Co-sponsored by IEEE Seoul Section Sensors Council Chapter
Agenda
| Time | Talk | Speaker |
| --- | --- | --- |
| 2:30 – 3:15 | Risk Assessment, Safety Alignment, and Guardrails for Generative Models | Prof. Xiaoning Liu |
| 3:15 – 4:00 | Bugs Begin, Bugs Begone: Large Language Models and Hardware Security | Dr. Hammond Pearce |
| 4:00 – 4:45 | AI and Cybersecurity: A Perfect Match...or not? | Prof. Stjepan Picek |