2026 GET-AI SERIES: 2 . Trust in AI Systems: Detecting, Defending, and Securing Intelligent Agents

#GenAI #AISimplified #SecuringAI #SecureAIAgent
Share

Join us virtually or in person—though we highly recommend attending in person to get the most out of the session.


We are excited to continue the Orange County Computer Society (OCCS) Global Emerging Technologies and Artificial Intelligent Series  (GET-AI) Series—a monthly platform dedicated to spotlighting transformative innovations in computer science and technology. Hosted by the IEEE Orange County Computer Society Chapter, this series brings together professionals, students, and tech enthusiasts to explore the cutting edge of what’s possible.

Following a highly engaging April session on Generative AI, where we explored core concepts such as LLMs, RAG, Agents, MCP, and hands-on AI application development, we’re excited to bring you our May Tech Talk focused on “Security in AI.”

As AI systems evolve—from traditional detection models to LLM-powered agents interacting with real-world tools—they introduce powerful capabilities along with entirely new security risks. This session brings together cutting-edge research and practical demonstrations to explore how we can build secure, trustworthy AI systems at scale.

This double-feature session combines deep technical insights with real-world security demonstrations, designed for architects, developers, researchers, and security leaders.

Session 1: Intelligent Attack Detection & Provenance in Modern Systems (45 mins)

Modern enterprises generate massive, fragmented logs—making it difficult to move from isolated alerts to meaningful security insights.

In this session, we explore how AI is transforming attack detection and forensic analysis:

  • Graph-Based Intrusion Detection
    Learn how unsupervised graph representation learning uncovers complex, multi-step attacks hidden in network activity.
  • Federated Learning for Cross-Organization Security
    Discover how organizations can collaborate on threat detection without sharing sensitive data, preserving privacy while improving accuracy.
  • LLM-Powered Security Intelligence
    See how Large Language Models (LLMs) convert low-level alerts into high-level, actionable insights, enabling faster and smarter response.

👉 Takeaway: Move from fragmented alerts to intelligent, end-to-end attack understanding (attack provenance).

Session 2: Securing AI Agents — MCP Threats & Defense Strategies (45 mins)

As AI agents integrate with tools, APIs, and external systems, they introduce new and largely uncharted attack surfaces.

This session includes a live, end-to-end demonstration of how AI agents can be compromised—and how to secure them.

  • Understanding MCP-Style Architectures
    Explore how agents dynamically invoke tools—and why this blurs the line between trusted instructions and untrusted data.
  • Live Demo: Tool Poisoning & Agent Manipulation
    Watch how adversarial inputs embedded in tool metadata or responses can:
    • Manipulate agent behavior
    • Trigger unintended actions
    • Lead to data exfiltration
  • Layered Security Framework for AI Agents
    Learn practical defenses, including:
    • Tool authentication
    • Response sanitization
    • Schema validation
    • Context isolation
  • Real-Time Evaluation
    See how these defenses prevent attacks without impacting performance.

👉 Takeaway: Gain actionable strategies to secure AI agents in real-world enterprise environments.

Pradyumna Kodgi
Principal Product Manager | Oracle Corporation (Oracle Health & AI)
IEEE Senior Member | Vice Chair, IEEE Engineering in Medicine and Biology Society – Orange County
Member, IEEE AI Agentic Systems & AI Policy Committees

📍 California, USA
📧 pkodgi@ieee.org
🔗 LinkedIn: linkedin.com/in/pkodgi



  Date and Time

  Location

  Hosts

  Registration



  • Add_To_Calendar_icon Add Event to Calendar

Loading virtual attendance info...

  • 5270 California Ave
  • Irvine, California
  • United States 92617

  • Contact Event Hosts
  • Co-sponsored by Pradyumna Kodgi
  • Starts 30 April 2026 07:00 AM UTC
  • Ends 26 May 2026 07:00 PM UTC
  • No Admission Charge


  Speakers

Zhou

Topic:

From Alerts to Intelligence: Rethinking Attack Provenance with Graphs, Federated Learning, and LLMs

Attack provenance aims to reconstruct and understand the sequence of events that lead to security incidents, enabling analysts to move from isolated alerts to actionable intelligence. However, modern enterprise environments generate massive and heterogeneous logs, challenging the real-world deployment of attack provenance. In this talk, I will discuss recent advances in the field of attack provenance, focusing on three complementary directions. First, I will present our work that adapts unsupervised graph representation learning to detect complex intrusions on network logs. Second, I will introduce a federated learning framework that enables cross-organization intrusion detection while preserving data privacy. Finally, I will describe a new Large language model (LLM) aided host intrusion detection framework that transforms low-level alerts into higher-level security intelligence. Together, these works illustrate an emerging trend toward combining graph learning, privacy-preserving collaboration, and LLM reasoning to enable scalable and intelligent attack provenance.

Biography:

Zhou Li is an Associate Professor at UC Irvine, EECS department, leading the Data-driven Security and Privacy Lab. Before joining UC Irvine, he worked as Principal Research Scientist at RSA Labs from 2014 to 2018. He received the NSF CAREER award, Amazon Research Award, IRTF Applied Networking Research Prize, IEEE Big Data Security Junior Research Award, etc. He has published over 80 papers and received distinguished paper award at NDSS'26. He also served various conferences and journals, with roles including Associate Editor at TDSC and Co-chair of NDSS'26 PRISM workshop.

Email:

Sreekanth

Topic:

Securing AI Agents in MCP Architectures: Defending Against Tool-Poisoning and Adversarial Attacks

AI agents built on large language models (LLMs) are increasingly integrated with external tools and data sources, unlocking powerful capabilities but introducing new security risks. In MCP-style architectures, where agents dynamically invoke tools and process external responses, the boundary between trusted instructions and untrusted data becomes blurred, creating novel attack surfaces.

In this talk, I present a practical, end-to-end demonstration of how AI agents can be compromised through tool-poisoning attacks and malicious tool responses, leading to unintended behavior and data exfiltration. Building on these insights, I introduce a layered security framework incorporating tool authentication, response sanitization, schema validation, and context isolation.

These results highlight the need for standardized security practices and demonstrate how robust design can enable secure, reliable deployment of AI-driven autonomous systems in enterprise environments

Biography:




Sreekanth Reddy Panyam is a Security Engineer with 10 years experience in Application and Cloud security. He is passionate about helping organizations strengthen their security posture and protect their valuable assets from cyber threats and developed a deep understanding of various security frameworks, compliance standards, and best practices.

Throughout his career, He helped numerous clients across different industries to identify vulnerabilities and mitigate risks in their IT systems. In his current role, Sreekanth works closely with Service teams to assess their security needs, design and implement solutions, and provide ongoing support and guidance. He enjoys the challenge of staying up-to-date with the latest security trends and technologies and finding creative solutions to complex security challenges.

Email:






Agenda

Securing AI: From Innovation to Resilience

AI is rapidly transforming how we build intelligent systems—but as capabilities grow, so do security risks. From LLM-powered agents to tool-integrated architectures, the question is no longer just what AI can do—but how do we secure it?

In this interactive session, we cut through the noise and break down AI security in practical, real-world terms—so you can understand not just the risks, but how to defend against them.


🔍 What You’ll Explore

  • How modern AI systems (LLMs, agents, MCP) introduce new attack surfaces
  • The shift from traditional security to AI-driven threat models
  • Key security concepts—explained clearly and practically
  • Real-world attack scenarios and emerging threat patterns


💡 What Makes This Session Different

This isn’t just theory—you’ll see AI systems under attack and defense in action.

Through a live, end-to-end demonstration, we’ll show how AI agents can be manipulated—and how layered security approaches can prevent these attacks in real time.


🛠️ Practical Takeaways

You’ll walk away with actionable strategies and frameworks you can apply immediately, including:

  • Securing AI agents interacting with external tools
  • Validating and sanitizing untrusted inputs
  • Designing trust boundaries in AI-driven architectures


🎯 Who Should Attend

  • Security professionals and architects working with AI systems
  • Engineers and developers building AI/LLM-based applications
  • Product managers and leaders driving AI adoption
  • Anyone interested in understanding AI risks and defenses


What You’ll Walk Away With

  • A clear understanding of emerging AI security risks
  • Practical knowledge of how to secure AI agents and systems
  • Real-world insights into attack prevention and defense strategies


As AI systems become more autonomous and integrated into enterprise workflows, security becomes foundational—not optional. This session will equip you with the mindset and tools to build AI systems you can trust.



We will serve dinner at the event.