OCCS GET Series: Deploying AI Systems in Healthcare & Real-World MLOps at Scale
We’re excited to continue the Orange County Computer Society (OCCS) Global Emerging Technologies (GET) Series—a monthly platform dedicated to spotlighting groundbreaking innovations in computer science and technology. Hosted by the IEEE Orange County Computer Society Chapter, this series brings together professionals, students, and tech enthusiasts to explore what’s next in emerging tech.
Following an insightful June session focused on enterprise AI integration and large language model (LLM) development, we’re back this August after a brief summer break with another dynamic double-feature exploring AI at scale—from healthcare to production-ready ML systems.
As AI adoption accelerates, so do the complexities of real-world deployment. This month’s talks tackle those complexities head-on, offering both strategic and technical perspectives on building reliable, scalable, and human-centered AI solutions.
In this session:
🔹 The first talk delves into the deployment of agentic AI systems in healthcare, addressing challenges such as data heterogeneity, safety, regulation, and trust. You’ll learn about explanation-based and modular design principles, hybrid RAG-based deployment strategies, and real-world applications across triage, radiology, and dementia care.
🔹 The second talk bridges the gap between ML experimentation and production. Through a practical case study, it explores the full machine learning lifecycle—from training to CI/CD-enabled deployment—along with best practices in MLOps, model monitoring, and cross-functional collaboration.
Key topics include:
✅ Designing safe, explainable agentic AI in healthcare
✅ Deployment frameworks using hybrid RAG models
✅ ML development lifecycle: from notebooks to production
✅ CI/CD pipelines, monitoring, and model versioning
✅ Scalable and collaborative MLOps strategies
Whether you're building AI for regulated industries or scaling ML pipelines across teams, this session will provide the tools, frameworks, and real-world insights to drive your work forward.
📅 Join us on Tuesday, August 26 from 5:00 PM to 7:00 PM PT for an evening of learning, discussion, and community.
🎤 Interested in speaking at a future session? Reach out to swapnali.karvekar@ieee.org — we’re always looking for passionate voices shaping the future of technology.
Let’s keep advancing innovation—together.
Speakers
Vivek Bharti of Roku, Inc.
From Notebooks to Production: ML in Practice
In the world of AI and machine learning, most discussions stop at model development — but that’s only half the story. Based on real-world experience, I’ve seen a persistent gap between building ML models and deploying them into production, where they can deliver actual value. This talk, “From Notebooks to Production: ML in Practice,” aims to bridge that gap.
We’ll start with a practical case study—classifying spam emails—to walk through the end-to-end ML lifecycle, including data preparation, training, evaluation, deployment, and monitoring. Along the way, we’ll explore the real challenges teams face when transitioning from experimentation to production, such as concept drift, scalability, and cross-functional collaboration.
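For readers who want a concrete picture before the session, here is a minimal scikit-learn sketch of that lifecycle applied to the spam case study. The CSV file name and column names are placeholder assumptions, not the speaker's actual code or data.

```python
# Minimal sketch of the spam-classification lifecycle: prepare data, train,
# evaluate, and persist a model artifact for deployment.
# "emails.csv" and its "text"/"label" columns are hypothetical placeholders.
import joblib
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Data preparation: load labeled emails (text + spam/ham label).
df = pd.read_csv("emails.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42, stratify=df["label"]
)

# Training: a single pipeline keeps preprocessing and the model coupled,
# so the same transform runs at serving time.
model = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english")),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

# Evaluation: inspect precision/recall before promoting to production.
print(classification_report(y_test, model.predict(X_test)))

# Deployment handoff: persist a versioned artifact for the serving layer.
joblib.dump(model, "spam_classifier_v1.joblib")
```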
The session dives into MLOps practices and tools that enable production-grade AI systems, including CI/CD pipelines, model versioning, monitoring, and agile team workflows. You’ll learn how to structure collaborative ML workflows, operationalize models reliably, and scale AI systems without falling into common traps.
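As one concrete example of the monitoring side (not drawn from the speaker's material), the snippet below computes the population stability index, a common check for distribution drift between validation-time scores and live scores. The 0.2 threshold and the synthetic score distributions are illustrative assumptions.

```python
# Sketch of a drift check: population stability index (PSI) comparing the
# score distribution seen at validation time against production traffic.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two score distributions; PSI > 0.2 is often treated as drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) for empty bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Stand-ins for logged validation scores vs. scores from live traffic.
baseline_scores = np.random.beta(2, 5, size=5000)
live_scores = np.random.beta(2, 3, size=5000)

psi = population_stability_index(baseline_scores, live_scores)
if psi > 0.2:
    print(f"PSI={psi:.3f}: distribution shift detected, consider retraining")
else:
    print(f"PSI={psi:.3f}: scores look stable")
```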
This session is ideal for ML engineers, data scientists, software developers, and, in fact, anyone interested in practical, field-tested approaches to building and deploying machine learning systems at scale.
Biography:
Vivek Bharti is a Senior Machine Learning Engineer at Roku, where he works across the entire machine learning lifecycle — from developing advanced Deep Learning and ML models in the Natural Language Understanding (NLU) domain to deploying scalable solutions that serve millions of users.
With 8 years of experience, Vivek has led end-to-end ML projects involving predictive modeling, NLP, computer vision (CV), and broader AI applications. His expertise spans model development, model deployment, monitoring, and continuous improvement. He is currently working on multiple patent submissions related to full-stack ML systems, covering data pipelines, model development, deployment, and maintenance.
Vivek recently delivered a guest lecture at NYU on the machine learning lifecycle and is an invited speaker at several IEEE chapters. He is passionate about mentoring and sharing practical ML insights with the broader AI/ML community.
Babul Sahu of SCAN Health Plan
Scaling Agentic AI in Healthcare: Challenges, Design Principles, and Deployment Strategies
Agentic AI systems, which can operate autonomously, are transforming healthcare by streamlining clinical and administrative operations. Yet bringing such systems into real healthcare settings raises challenges around data heterogeneity, safety, regulation, and trust. This talk analyzes these barriers and proposes design principles centered on explainability, modularity, and human oversight. We then explore deployment strategies built on hybrid RAG models, real-time orchestration, and compliance layers. Using case study analysis, we assess system performance in triage, radiology, adverse drug event (ADE) detection, and dementia care. The results offer practical guidance and a platform through which agentic AI can be securely and scalably integrated into critical healthcare processes.
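To give a rough sense of what "hybrid" retrieval means in a RAG setting, the sketch below blends a keyword-overlap score with a dense-vector similarity before selecting documents to pass to an LLM. The toy documents, the stand-in embedding function, and the weighting are illustrative assumptions, not the platform described in the talk; a real healthcare deployment would add compliance filtering and audit logging.

```python
# Illustrative hybrid retrieval: combine a sparse keyword signal with a dense
# vector similarity, then return the top-scoring documents.
import numpy as np

documents = [
    "Triage protocol for chest pain presenting in the emergency department",
    "Radiology report template for chest X-ray findings",
    "Monitoring guidance for adverse drug events in elderly patients",
]

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query terms that appear in the document (sparse signal)."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / max(len(q_terms), 1)

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy deterministic 'embedding' standing in for a real embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def hybrid_retrieve(query: str, alpha: float = 0.5, k: int = 2):
    """Score = alpha * dense similarity + (1 - alpha) * keyword overlap."""
    q_vec = embed(query)
    scored = []
    for doc in documents:
        dense = float(q_vec @ embed(doc))   # cosine similarity (unit vectors)
        sparse = keyword_score(query, doc)
        scored.append((alpha * dense + (1 - alpha) * sparse, doc))
    return sorted(scored, reverse=True)[:k]

for score, doc in hybrid_retrieve("adverse drug events monitoring"):
    print(f"{score:.2f}  {doc}")
```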
Biography:
Babul Sahu is a Lead AI Engineer with over 20 years of experience in software development, including 5 years in machine learning and more than 2 years in Generative and Agentic AI. He leads all AI initiatives at SCAN Health Plan, where he built the organization’s first AI team and established its enterprise AI strategy. Babul has designed and deployed secure, production-grade AI systems, ranging from LLM-powered automation platforms to multi-agent frameworks built with LangGraph and CrewAI. His work includes hybrid RAG pipelines, healthcare-specific NLP, and MLOps on Azure. He is passionate about responsible AI, AI infrastructure at scale, and applying AI to solve real-world healthcare challenges.
Agenda
| Time (PT) | Activity |
| --- | --- |
| 5:00 PM - 5:15 PM | Check-in and networking |
| 5:15 PM - 5:30 PM | OCCS Chapter introduction |
| 5:30 PM - 6:00 PM | Speaker: Vivek Bharti |
| 6:00 PM - 6:30 PM | Speaker: Babul Sahu |
| 6:30 PM - 7:00 PM | Q&A |