BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:US/Eastern
BEGIN:DAYLIGHT
DTSTART:20210314T030000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20201101T010000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20201203T212024Z
UID:012028BF-F687-4A4D-A989-77A4D38A3D68
DTSTART;TZID=US/Eastern:20201203T150000
DTEND;TZID=US/Eastern:20201203T161500
DESCRIPTION:The IEEE West Michigan joint Computer and Robotics & Automation
  society (CS/RA) would like to invite you to attend our technical webinar 
 titled “Challenges with Artificial Intelligence Explainability: Justifyi
 ng Machine Learning Model Predictions”.\n\nThe webinar is presented by D
 r. Mohamed Kalil from IBM Watson Analytics.\n\nAbstract\n\nMachine learnin
 g models are all around us these days. They help us predict weather based 
 on historical data\, they help our employers predict business metrics\, an
 d they help our phones predict the word we will type next. While ML models h
 ave achieved great success and a high degree of accuracy in many areas\, t
 hey can still be surrounded by mystery. Some models can be easily interpre
 ted\, and their predictions explained. But many models still feel like a b
 lack box. They share their output but not much more in terms of the detail
 ed reasoning.\n\nThis can have a high impact\, especially with the roles o
 f AI models getting more significant. The models are moving from supportin
 g convenience features\, to matters that impact human life\, such as self-d
 riving cars\, or the law\, such as detecting and analyzing crime scenes.\n\n
 This webi
 nar will discuss the significance of various ML/AI models and the latest o
 n Artificial Intelligence explainability (AIX).\n\nSpeaker\n\nDr. Kalil is
  a data scientist and an AI researcher at IBM Watson Analytics. He has dis
 tinguished experience in the field of Artificial Intelligence as seen in v
 arious publications spanning topics in optimization\, business data analyti
 cs\, and network performance tuning. His recent interests include large-sca
 le business analytics models and Artificial Intelligence
  model explainability.\n\nCo-sponsored by: Ferris State University - Schoo
 l of Digital Media \n\nVirtual: https://events.vtools.ieee.org/m/249051
LOCATION:Virtual: https://events.vtools.ieee.org/m/249051
ORGANIZER:mailto:mohamedabusharkh@ferris.edu
SEQUENCE:3
SUMMARY:IEEE West Michigan CS/RA Technical talk: AI Explainability Challeng
 es: Justifying ML Model Predictions
URL;VALUE=URI:https://events.vtools.ieee.org/m/249051
X-ALT-DESC;FMTTYPE=text/html:Description: &lt;br /&gt;&lt;p&gt;The IEEE West Michigan joint Computer an
 d Robotics &amp;amp\; Automation society (CS/RA) would like to invite you to att
 end our technical webinar&amp;nbsp\; titled&amp;nbsp\;&lt;strong&gt;&amp;ldquo\;Challenges w
 ith Artificial Intelligence Explainability: Justifying Machine Learning Mo
 del Predictions&amp;rdquo\;.&amp;nbsp\;&lt;/strong&gt;&amp;nbsp\;&lt;/p&gt;\n&lt;p&gt;The webinar&amp;nbsp\;
 is presented by Dr. Mohamed Kalil from IBM Watson Analytics.&amp;nbsp\;&lt;/p&gt;\n&lt;
 p&gt;&lt;strong&gt;Abstract&amp;nbsp\;&lt;/strong&gt;&lt;/p&gt;\n&lt;p&gt;Machine learning models are all
  around us these days. They help us predict weather based on historical data
 \, they help our employers predict business metrics\, and they help our ph
 ones predict the word we will type next. While ML models have achieved gre
 at success and a high degree of accuracy in many areas\, they can still be
  surrounded by mystery. Some models can be easily interpreted\, and their 
 predictions explained. But many models still feel like a black box. They s
 hare their output but not much more in terms of the detailed reasoning.&lt;/p
 &gt;\n&lt;p&gt;This can have a high impact\, especially with the roles of AI models
  getting more significant. &amp;nbsp\;The models are moving from supporting co
 nvenience features\, to matters that impact human life\, such as self-drivi
 ng cars\, or the law\, such as detecting and analyzing crime scenes.&lt;/p&gt;\n
 &lt;p&gt;This web
 inar will discuss the significance of various ML/AI models and the latest 
 on Artificial intelligence explainability (AIX).&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Speaker&amp;n
 bsp\;&lt;/strong&gt;&lt;/p&gt;\n&lt;p&gt;Dr. Kalil is a data scientist and an AI researcher 
 at IBM Watson Analytics. He has distinguished experience in the field of A
 rtificial Intelligence as seen in various publications spanning topics in o
 ptimization\, business data analytics\, and network performance tuning. His
  recent interests include large-scale business analytics
  models and Artificial Intelligence model explainability.&amp;nbsp\;&lt;/p&gt;
END:VEVENT
END:VCALENDAR