Using Human Perception to Inform Machine Perception

#HumanPerception #MachinePerception #MachineLearning #ComputerVision #Explainability #DeepLearning #ExplainableAttribute #FaceRecognition #IEEECS #WIE

IEEE Computer Society San Diego Chapter - 2023 Invited Seminar Series: Lecture 3


Modern machine learning has origins in human learning, taking cues from human perception to build, train and evaluate machine learning models. As machine learning (ML) has begun to outperform humans in many challenging tasks, the focus has shifted from modeling humans to simply improving the performance of these ML models. We focus instead on what can be learned from human perception to improve these models and make them more transparent and understandable. With many applications of machine learning having real-world impacts on humans, we consider explainability essential for these models. In this talk, I will detail our approaches to explainable attribute recognition, prominent feature recognition, and face recognition. With each problem, we will highlight our influences from human perception.



  Date and Time

  • Date: 28 Mar 2023
  • Time: 05:30 PM to 06:30 PM
  • All times are (UTC-08:00) Pacific Time (US & Canada)

  Location

  • Virtual event (attendance information is provided via the event page)

  Hosts

  • Contact Event Hosts: charliebird@computer.org

  Registration

  • Starts: 13 March 2023 08:00 AM
  • Ends: 28 March 2023 06:30 PM
  • All times are (UTC-08:00) Pacific Time (US & Canada)
  • No Admission Charge


  Speakers

Emily Hand, Ph.D., University of Nevada, Reno

Topic:

Using Human Perception to Inform Machine Perception


Biography:

Dr. Emily Hand is an Assistant Professor of Computer Science and Engineering at the University of Nevada, Reno, where she directs the Machine Perception Lab (MPL). The MPL has active projects in computer vision, natural language processing, and machine learning more broadly. Its mission is to improve social interactions for individuals with social skills deficits through the automated processing of visual and audio information in social situations. Dr. Hand’s research is funded by several active grants from the National Science Foundation (NSF). She is also passionate about teaching and is part of several NSF REU and RET programs at UNR.




Questions: Contact Upal Mahbub at upalmahbub@yahoo.com