Improving Fairness in Speaker Recognition and Speech Recognition

#signal #processing #lecture #atlanta #distinguished #speechrecognition #speakerrecognition

Abstract:

Group fairness, or the avoidance of large performance disparities across different cohorts of users, is a major concern as AI technologies find adoption in ever more application scenarios. In this talk I will present some recent work on fairness for speech-based technologies, specifically speaker recognition and speech recognition. For speaker recognition, I report on two algorithmic approaches to reduce performance variability across different groups. In the first method, group-adapted fusion, we combine sub-models that are specialized for subpopulations with very different representation (and therefore performance) in the data. The second method, adversarial reweighting, forces the model to focus on those portions of the population that are harder to recognize, without requiring a priori labels for speaker groups. For automatic speech recognition, I present methods for detecting and mitigating accuracy disparities as a function of geographic or demographic variables, principally by oversampling or adaptation based on group membership. The talk concludes with an application of synthetic speech generation (TTS) to filling in data gaps for a group of speakers with atypical speech, namely, stuttering.
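The adversarial-reweighting idea mentioned in the abstract can be illustrated in a few lines: an auxiliary "adversary" learns per-example weights so as to maximize the main model's weighted loss, while the main model minimizes that same weighted loss, so weight mass drifts toward hard-to-recognize examples without any group labels. The toy sketch below (plain NumPy, logistic regression on synthetic data) is only an illustration of this principle, not the speaker's actual implementation; all variable names and hyperparameters are illustrative assumptions.

```python
# Toy sketch of adversarial reweighting (illustrative only, not the
# method from the talk): an adversary assigns per-example weights and
# does gradient ASCENT on the weighted loss, while the main model does
# gradient DESCENT on it. No group labels are used anywhere.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary task; the first 20 examples have flipped labels,
# standing in for a hard-to-recognize subpopulation.
n = 200
X = rng.normal(size=(n, 2))
y = (X[:, 0] > 0).astype(float)
y[:20] = 1 - y[:20]

w = np.zeros(2)        # main model: logistic regression weights
theta = np.zeros(n)    # adversary: one unnormalized score per example

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(300):
    p = sigmoid(X @ w)
    loss = -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

    # Adversary's weights: softmax over its scores, scaled to sum to n
    # (so uniform weights correspond to lam == 1 everywhere).
    e = np.exp(theta - theta.max())
    lam = n * e / e.sum()

    # Main model: descend the weighted logistic loss.
    grad_w = X.T @ (lam * (p - y)) / n
    w -= 0.5 * grad_w

    # Adversary: ascend the weighted loss; this raises the weight of
    # examples whose loss exceeds the current weighted average.
    grad_theta = lam * (loss - (lam * loss).sum() / n)
    theta += 0.05 * grad_theta

e = np.exp(theta - theta.max())
final_weights = n * e / e.sum()
```

After training, `final_weights` concentrates on the hard (label-flipped) examples, which is the mechanism that lets the main model attend to underperforming portions of the population without ever being told which group an example belongs to.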



  Date and Time

  • Date: 31 Mar 2023
  • Time: 03:00 PM to 04:00 PM
  • All times are (UTC-04:00) Eastern Time (US & Canada)

  Location

  • Technology Square, 75 5th St NW
  • Atlanta, Georgia, United States 30308
  • Building: Centergy One Bldg.
  • Room Number: CSIP Library
  • In-person event will be at: Centergy One Bldg., CSIP Library, 5th Floor, Technology Square, 75 5th St NW (Atlanta, GA 30308)

  Hosts

  • Contact Event Host
  • Co-sponsored by the Georgia Tech Center for Information and Signal Processing

  Registration

  • Starts: 24 March 2023, 04:00 PM
  • Ends: 31 March 2023, 04:00 PM
  • No Admission Charge


  Speakers

Dr. Andreas Stolcke of the Alexa Speech organization at Amazon

Related Publications:

Improving fairness in speaker verification via group-adapted fusion network

Adversarial reweighting for speaker verification fairness

Reducing geographic disparities in automatic speech recognition via elastic weight consolidation

Toward fairness in speech recognition: Discovery and mitigation of performance disparities

Stutter-TTS: Controlled synthesis and improved recognition of stuttered speech

Biography:

Andreas Stolcke is a senior principal scientist in the Alexa Speech organization at Amazon. He obtained his PhD from UC Berkeley and then worked as a researcher at SRI International and Microsoft before joining Amazon. His research interests include computational linguistics, language modeling, speech recognition, speaker recognition and diarization, and paralinguistics, with over 300 papers and patents in these areas. His open-source SRI Language Modeling Toolkit was widely used in academia before being superseded by deep neural network models. Andreas is a Fellow of the IEEE and of the International Speech Communication Association, and is giving this talk as an IEEE Distinguished Industry Speaker.