BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
DTSTART:20250309T030000
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:PDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20251102T010000
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:PST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260124T050207Z
UID:6BD60B89-88D4-4FAE-B102-0A13FEE72783
DTSTART;TZID=America/Los_Angeles:20250624T170000
DTEND;TZID=America/Los_Angeles:20250624T190000
DESCRIPTION:Generic Large Language Models (GLLMs) are continually being rel
 eased with increased size and capabilities\, enhancing the capabilities of
  these tools as universal problem solvers. While the reliability of GLLMs'
  responses is questionable in many situations\, these models are often aug
 mented or retrofitted with external resources for various applications\, i
 ncluding cybersecurity.\n\nThe talk will discuss major security concerns o
 f these pre-trained models: first\, GLLMs are prone to adversarial manipul
 ation\, such as model poisoning\, reverse engineering\, and side-channel c
 yberattacks. Second\, the security issues related to LLM-generated code u
 sing open-source libraries/codelets for software development can involve s
 oftware supply chain attacks. These may result in information disclosure\,
  access to restricted resources\, privilege escalation\, and complete syst
 em takeover.\n\nThis talk will also cover the benefits and risks of using 
 GLLMs in cybersecurity\, particularly in malware detection\, log analysis\
 , intrusion detection\, etc. I will highlight the need for diverse AI appr
 oaches (non-LLM-based smaller models) trained with application-specific cu
 rated data\, fine-tuned for well-tested security functionalities in identi
 fying and mitigating emerging cyber threats\, including zero-day attacks.\
 n\nNote:\n\n- You will require a Zoom account (free to obtain) to join the
  meeting. This requirement is to avoid Zoom bombing. Please sign in using 
 the email address tied to your Zoom account\, not necessarily the one you 
 used to register for the event. Register here: https://sjsu.zoom.us/meetin
 g/register/2XuaGc9ISoCWOu1dt6ANog\n- By registering for this event\, you a
 gree that IEEE and the organizers are not liable to you for any loss\, dam
 age\, injury\, or any incidental\, indirect\, special\, consequential\, or
  economic loss or damage (including loss of opportunity\, exemplary or pun
 itive damages). The event will be recorded and will be made available for 
 public viewing.\n\nCo-sponsored by: Vishnu S. Pendyala\, SJSU\n\nSpeaker(s
 ): Dr. Vishnu S. Pendyala\, Prof. Dipankar Dasgupta\, IEEE Fellow\, NAI Fe
 llow\, AIIA Fellow\n\nVirtual: https://events.vtools.ieee.org/m/489327
LOCATION:Virtual: https://events.vtools.ieee.org/m/489327
ORGANIZER:mailto:pendyala@ieee.org
SEQUENCE:32
SUMMARY:Generic LLMs in Cybersecurity
URL;VALUE=URI:https://events.vtools.ieee.org/m/489327
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p>Generic Large
  Language Models (GLLMs) are continually being released with increased
  size and capabilities\, enhancing the capabilities of these tools as
  universal problem solvers. &nbsp\;While the reliability of GLLMs'
  responses is questionable in many situations\, these models are often
  augmented or retrofitted with external resources for various
  applications\, including cybersecurity.</p>\n<p><br>The talk will
  discuss major security concerns of these pre-trained models: first\,
  GLLMs are prone to adversarial manipulation\, such as model
  poisoning\, reverse engineering\, and side-channel cyberattacks.
  Second\, the security issues related to LLM-generated code using
  open-source libraries/codelets for software development can involve
  software supply chain attacks. These may result in information
  disclosure\, access to restricted resources\, privilege escalation\,
  and complete system takeover.</p>\n<p><br>This talk will also cover
  the benefits and risks of using GLLMs in cybersecurity\, particularly
  in malware detection\, log analysis\, intrusion detection\, etc. I
  will highlight the need for diverse AI approaches (non-LLM-based
  smaller models) trained with application-specific curated data\,
  fine-tuned for well-tested security functionalities in identifying
  and mitigating emerging cyber threats\, including zero-day
  attacks.</p>\n<p>Note:</p>\n<ul type="disc">\n<li>You will require a
  Zoom account (free to obtain) to join the meeting. This requirement
  is to avoid Zoom bombing. Please sign in using the email address
  tied to your Zoom account\, not necessarily the one you used to
  register for the event. Register here: <a
  href="https://sjsu.zoom.us/meeting/register/2XuaGc9ISoCWOu1dt6ANog">
 https://sjsu.zoom.us/meeting/register/2XuaGc9ISoCWOu1dt6ANog</a>&nbsp\;
 </li>\n<li>By registering for this event\, you agree that IEEE and
  the organizers are not liable to you for any loss\, damage\, injury\,
  or any incidental\, indirect\, special\, consequential\, or economic
  loss or damage (including loss of opportunity\, exemplary or punitive
  damages). The event will be recorded and will be made available for
  public viewing.</li>\n</ul>
END:VEVENT
END:VCALENDAR