BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:US/Pacific
BEGIN:DAYLIGHT
DTSTART:20250309T030000
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:PDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20241103T010000
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:PST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20241231T101918Z
UID:2EAE987B-8161-442C-A368-C952B95B8DEA
DTSTART;TZID=US/Pacific:20241230T190000
DTEND;TZID=US/Pacific:20241230T210000
DESCRIPTION:Free Registration (with a Zoom account\; you can get one for f
 ree if you don't already have it):\n\nhttps://sjsu.zoom.us/meeting/regist
 er/tZcsc-CoqjwpG9aPDHfg6Axqvn90i4uQRmqr\n\nSynopsis:\n\nFor a long time\,
  the AI/ML community relied on traditional evaluation metrics such as the
  confusion matrix\, accuracy\, precision\, and recall for assessing the p
 erformance of machine learning models. However\, the rapidly evolving fie
 ld has been raising several ethical concerns\, which call for a more comp
 rehensive evaluation scheme. In easy-to-understand language\, this talk w
 ill delve into the quantitative analysis of model performance\, emphasizi
 ng the critical importance of explainability. As ML models become increas
 ingly complex and pervasive\, understanding their decision-making process
 es is paramount. We'll explore various performance metrics\, their limita
 tions\, and the growing need for transparency. Topics covered include Coh
 en's Kappa Statistic\, Matthews correlation coefficient (MCC)\, Confusion
  Matrix\, Precision\, Recall\, G-measure\, ROC Curve\, Youden's J statist
 ic\, Type II adversarial attack\, R-squared\, LIME\, SHAP\, and more.\n\n
 Speaker(s): Dr. Vishnu S. Pendyala\n\nVirtual: https://events.vtools.ieee
 .org/m/442073
LOCATION:Virtual: https://events.vtools.ieee.org/m/442073
ORGANIZER:mailto:pendyala@ieee.org
SEQUENCE:33
SUMMARY:Quantitative Analysis of Machine Learning Model Performance and th
 e Need to Consider Explainability
URL;VALUE=URI:https://events.vtools.ieee.org/m/442073
X-ALT-DESC;FMTTYPE=text/html:<p><img style="float: right\;" src="https://
 events.vtools.ieee.org/vtools_ui/media/display/8943f02d-d79f-4d57-aff7-f
 1f25a6efa8d" alt="" width="418" height="203"></p>\n<p>Free Registration
  (with a Zoom account\; you can get one for free if you don't already
  have it):</p>\n<p><a href="https://sjsu.zoom.us/meeting/register/tZcsc
 -CoqjwpG9aPDHfg6Axqvn90i4uQRmqr">https://sjsu.zoom.us/meeting/register/
 tZcsc-CoqjwpG9aPDHfg6Axqvn90i4uQRmqr</a></p>\n<p><em><strong>Synopsis:<
 /strong></em></p>\n<p>For a long time\, the AI/ML community relied on t
 raditional evaluation metrics such as the confusion matrix\, accuracy\,
  precision\, and recall for assessing the performance of machine learni
 ng models. However\, the rapidly evolving field has been raising severa
 l ethical concerns\, which call for a more comprehensive evaluation sch
 eme. In easy-to-understand language\, this talk will delve into the qua
 ntitative analysis of model performance\, emphasizing the critical impo
 rtance of explainability. As ML models become increasingly complex and
  pervasive\, understanding their decision-making processes is paramount
 . We'll explore various performance metrics\, their limitations\, and t
 he growing need for transparency. Topics covered include Cohen's Kappa
  Statistic\, Matthews correlation coefficient (MCC)\, Confusion Matrix\,
  Precision\, Recall\, G-measure\, ROC Curve\, Youden's J statistic\, Ty
 pe II adversarial attack\, R-squared\, LIME\, SHAP\, and more.</p>
END:VEVENT
END:VCALENDAR

