BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/Chicago
BEGIN:DAYLIGHT
DTSTART:20260308T020000
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:CDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20261101T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:CST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260424T163504Z
UID:D1388570-6B26-49BE-9981-9CF403F615B3
DTSTART;TZID=America/Chicago:20260417T120000
DTEND;TZID=America/Chicago:20260417T130000
DESCRIPTION:Large Language Models (LLMs) are increasingly used by developer
 s and students to generate code\, offering significant gains in productivi
 ty and accessibility. However\, LLM-generated code often introduces subtle
  yet critical security vulnerabilities\, particularly in domains such as c
 ryptography and secure software development. This webinar presents a pract
 ical and systematic approach to secure code generation using LLMs\, focusi
 ng on transforming raw model outputs into verifiably safe implementations.
 \nThe session introduces a structured workflow\, Prompt → Harden → Ver
 ify\, that guides participants through crafting security-aware prompts wit
 h explicit constraints\, enforcing correctness through unit testing\, and 
 applying lightweight automated security checks prior to deployment. Throug
 h hands-on demonstrations\, attendees will learn how to integrate secure c
 oding practices directly into the LLM-assisted development pipeline.\nBy t
 he end of the webinar\, participants will gain a repeatable methodology fo
 r reducing vulnerabilities in AI-generated code\, along with ready-to-use 
 templates and a security validation checklist. This work aims to bridge th
 e gap between AI-assisted programming and secure software engineering\, en
 abling practitioners to harness LLM capabilities while maintaining strong 
 security guarantees.\n\nSpeaker(s): Dr. Mahmoud Abouyoussef\n\nVirtual: h
 ttps://events.vtools.ieee.org/m/553201
LOCATION:Virtual: https://events.vtools.ieee.org/m/553201
ORGANIZER:mailto:shuvalaxmi.dass@louisiana.edu
SEQUENCE:27
SUMMARY:Building Secure Code with LLMs: A Hands-On Prompt-to-Verification W
 orkflow
URL;VALUE=URI:https://events.vtools.ieee.org/m/553201
X-ALT-DESC;FMTTYPE=text/html:<div>Large Language Models (LLMs) are increa
 singly used by developers and students to generate code\, offering signif
 icant gains in productivity and accessibility. However\, LLM-generated co
 de often introduces subtle yet critical security vulnerabilities\, partic
 ularly in domains such as cryptography and secure software development. T
 his webinar presents a practical and systematic approach to secure code g
 eneration using LLMs\, focusing on transforming raw model outputs into ve
 rifiably safe implementations.</div>\n<div>The session introduces a struc
 tured workflow\, <em>Prompt &rarr\; Harden &rarr\; Verify</em>\, that gui
 des participants through crafting security-aware prompts with explicit co
 nstraints\, enforcing correctness through unit testing\, and applying lig
 htweight automated security checks prior to deployment. Through hands-on 
 demonstrations\, attendees will learn how to integrate secure coding prac
 tices directly into the LLM-assisted development pipeline.</div>\n<div>By
  the end of the webinar\, participants will gain a repeatable methodology
  for reducing vulnerabilities in AI-generated code\, along with ready-to-
 use templates and a security validation checklist. This work aims to brid
 ge the gap between AI-assisted programming and secure software engineerin
 g\, enabling practitioners to harness LLM capabilities while maintaining 
 strong security guarantees.</div>
END:VEVENT
END:VCALENDAR