BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/Chicago
BEGIN:DAYLIGHT
DTSTART:20250309T020000
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:CDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20251102T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:CST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20251015T222002Z
UID:0A7214D4-F700-4914-9DE4-C8D080FFBF72
DTSTART;TZID=America/Chicago:20251015T100000
DTEND;TZID=America/Chicago:20251015T110000
DESCRIPTION:Title: Convolutional Beamformer for Joint Denoising\, Dereverbe
 ration\, and Source Separation\n\nAbstract:\nWhen speech is captured by di
 stant microphones in everyday environments\, the signals are often contami
 nated by background noise\, reverberation\, and overlapping voices. The co
 nvolutional beamformer (CBF) is a signal processing technique that recover
 s clean\, close-microphone-quality speech from such complex mixtures. By j
 ointly performing denoising\, dereverberation\, and source separation\, CB
 F enhances both human listening experiences and automatic speech recogniti
 on (ASR) accuracy. Potential applications include hearing assistive device
 s\, meeting transcription systems\, and other real-world speech technologi
 es.\n\nThis talk begins by introducing the concept of CBF\, including its 
 formal definition\, mechanism for joint enhancement\, and optimization via
  maximum likelihood estimation. CBF is defined as a series of beamformers 
 estimated at each frequency in the short-time Fourier transform (STFT) dom
 ain and convolved with the observed signal to achieve the desired enhancem
 ent. The presentation then describes how CBF can be factorized into Multic
 hannel Linear Prediction (MCLP) for dereverberation and Beamforming (BF) f
 or denoising and separation\, highlighting the practical advantages of th
 is decomposition. Related work is reviewed\, including Weighted Prediction
  Error (WPE) dereverberation\, mask-based beamforming\, and guided source 
 separation\, with emphasis on strong results in challenging tasks such as 
 the CHiME-8 distant ASR challenge.\n\nFurther extensions are presented\, i
 ncluding blind CBF for unknown recording conditions\, switching CBF for en
 hanced performance with a limited number of microphones\, and integration 
 with neural networks\, notably the DiffCBF framework\, which combines CB
 F with diffusion-based speech enhancement models. Experimental results demon
 strate state-of-the-art speech quality\, even with relatively few micropho
 nes and limited training data.\n\nCo-sponsored by: Starkey\n\nSpeaker(s): 
 Dr. Tomohiro Nakatani\n\nAgenda: \n9:30 – 10:00 a.m. Meet and Greet\n\n1
 0:00 – 10:05 a.m. Welcome Remarks by Dr. Masahiro Sunohara\n\n10:05 – 
 11:00 a.m. Convolutional Beamformer for Joint Denoising\, Dereverberation\
 , and Source Separation by Dr. Tomohiro Nakatani\n\nRoom: Excelsior Room\,
  Bldg: William F Austin Center\, 6425 Flying Cloud Dr\, Eden Prairie\, Min
 nesota\, United States\, 55344\, Virtual: https://events.vtools.ieee.org/m
 /501955
LOCATION:Room: Excelsior Room\, Bldg: William F Austin Center\, 6425 Flying
  Cloud Dr\, Eden Prairie\, Minnesota\, United States\, 55344\, Virtual: ht
 tps://events.vtools.ieee.org/m/501955
ORGANIZER;CN=Masahiro Sunohara:mailto:masahiro_sunohara@starkey.com
SEQUENCE:64
SUMMARY:IEEE SPS DISTINGUISHED INDUSTRY SPEAKER PROGRAM TWIN CITIES SP/COM 
 CHAPTER SEMINAR 10/15/2025
URL;VALUE=URI:https://events.vtools.ieee.org/m/501955
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p>Title:&nbsp\;<span sty
 le="text-decoration: underline\;"><strong>Convolutional Beamformer for Jo
 int Denoising\, Dereverberation\, and Source Separation</strong></span></
 p>\n<p>Abstract:<br>When speech is captured by distant microphones in eve
 ryday environments\, the signals are often contaminated by background noi
 se\, reverberation\, and overlapping voices. The convolutional beamforme
 r (CBF) is a signal processing technique that recovers clean\, close-mic
 rophone-quality speech from such complex mixtures. By jointly performin
 g denoising\, dereverberation\, and source separation\, CBF enhances bot
 h human listening experiences and automatic speech recognition (ASR) acc
 uracy. Potential applications include hearing assistive devices\, meetin
 g transcription systems\, and other real-world speech technologies.</p>
 \n<p>This talk begins by introducing the concept of CBF\, including it
 s formal definition\, mechanism for joint enhancement\, and optimizatio
 n via maximum likelihood estimation. CBF is defined as a series of beam
 formers estimated at each frequency in the short-time Fourier transfor
 m (STFT) domain and convolved with the observed signal to achieve the d
 esired enhancement. The presentation then describes how CBF can be fact
 orized into Multichannel Linear Prediction (MCLP) for dereverberation a
 nd Beamforming (BF) for denoising and separation\, highlighting the pra
 ctical advantages of this decomposition. Related work is reviewed\, inc
 luding Weighted Prediction Error (WPE) dereverberation\, mask-based bea
 mforming\, and guided source separation\, with emphasis on strong resul
 ts in challenging tasks such as the CHiME-8 distant ASR challenge.</p>
 \n<p>Further extensions are presented\, including blind CBF for unknow
 n recording conditions\, switching CBF for enhanced performance with a l
 imited number of microphones\, and integration with neural networks\, n
 otably the DiffCBF framework\, which combines CBF with diffusion-base
 d speech enhancement models. Experimental results demonstrate state-of
 -the-art speech quality\, even with relatively few microphones and lim
 ited training data.</p><br /><br />Agenda: <br /><p>9:30 &ndash\; 10:0
 0 a.m.&nbsp\; &nbsp\; &nbsp\; &nbsp\;<strong>Meet and Greet&nbsp\;</st
 rong></p>\n<p>10:00 &ndash\; 10:05 a.m.&nbsp\; &nbsp\; &nbsp\;<strong>
 Welcome Remarks</strong>&nbsp\;by&nbsp\;<em>Dr. Masahiro Sunohara</em>
 </p>\n<p>10:05 &ndash\; 11:00 a.m.&nbsp\; &nbsp\; &nbsp\;<strong>Convo
 lutional Beamformer for Joint Denoising\, Dereverberation\, and Sourc
 e Separation</strong> by Dr. Tomohiro Nakatani</p>
END:VEVENT
END:VCALENDAR