BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
DTSTART:20230312T020000
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:PDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:PST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230329T170709Z
UID:8FF17434-8CF1-4CC5-841C-F8D354215244
DTSTART;TZID=America/Los_Angeles:20230328T173000
DTEND;TZID=America/Los_Angeles:20230328T183000
DESCRIPTION:Modern machine learning has origins in human learning\, taking 
 cues from human perception to build\, train and evaluate machine learning 
 models. As machine learning (ML) has begun to outperform humans in many ch
 allenging tasks\, the focus has shifted from modeling humans to simply imp
 roving the performance of these ML models. We focus instead on what can be
  learned from human perception to improve these models and make them more 
 transparent and understandable. With many applications of machine learning
  having real-world impacts on humans\, we consider explainability essentia
 l for these models. In this talk\, I will detail our approaches to explain
 able attribute recognition\, prominent feature recognition\, and face reco
 gnition. With each problem\, we will highlight our influences from human p
 erception.\n\nSpeaker(s): Emily Hand\, Ph.D.\n\nVirtual: https://events
 .vtools.ieee.org/m/352153
LOCATION:Virtual: https://events.vtools.ieee.org/m/352153
ORGANIZER:mailto:upalmahbub@yahoo.com
SEQUENCE:5
SUMMARY:Using Human Perception to Inform Machine Perception
URL;VALUE=URI:https://events.vtools.ieee.org/m/352153
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p><span style="font-
 weight: 400\;">Modern machine learning has origins in human learning\,
  taking cues from human perception to build\, train and evaluate machi
 ne learning models. As machine learning (ML) has begun to outperform h
 umans in many challenging tasks\, the focus has shifted from modeling 
 humans to simply improving the performance of these ML models. We focu
 s instead on what can be learned from human perception to improve thes
 e models and make them more transparent and understandable. With many 
 applications of machine learning having real-world impacts on humans\,
  we consider explainability essential for these models. In this talk\,
  I will detail our approaches to explainable attribute recognition\, p
 rominent feature recognition\, and face recognition. With each problem
 \, we will highlight our influences from human perception.</span></p>
END:VEVENT
END:VCALENDAR