Robust Score-Based Hypothesis Testing, Quickest Change Detection


The Montreal Chapters of the IEEE Control Systems (CS) and Systems, Man & Cybernetics (SMC) cordially invite you to attend the following in-person talk, to be given by Dr. Vahid Tarokh, Rhodes Family Professor of Electrical and Computer Engineering, Duke University.



  Date and Time

  • Date: 06 Oct 2023
  • Time: 11:00 AM to 12:00 PM
  • All times are (GMT-05:00) America/Montreal

  Location

  • Concordia University
  • Montreal, Quebec
  • Canada H3G 1M8
  • Building: EV Building
  • Room Number: EV002.309

  Hosts

  • Co-sponsored by Concordia University


  Speakers

Dr. Vahid Tarokh

Topic:

Robust Score-Based Hypothesis Testing, Quickest Change Detection

Log-likelihood-based hypothesis testing is a classical textbook result taught in detection and estimation, signal processing, control, communications, radar, and related courses. The celebrated Neyman-Pearson Lemma proves that this test is Uniformly Most Powerful for testing a null hypothesis against an alternative. Similarly, the CUSUM change detection procedure is based on the log-likelihood ratio, and its optimality was proved by Moustakides. To apply these optimal tests, we need to know the exact pdfs of both the null and alternative (respectively, pre- and post-change) distributions. Unfortunately, these pdfs are not easy to obtain in high-dimensional, data-driven scenarios. Recent research has demonstrated that energy-based, score-based, and diffusion methods produce state-of-the-art models in high-dimensional settings. Additionally, these models are remarkably robust to noise in the collected data. Unfortunately, these models are unnormalized, and calculating the partition functions required for their normalization is a notoriously difficult problem. In some cases, the gradients of the log-likelihoods are modeled by deep neural networks, and we may not even have access to their unnormalized versions. This limits their applicability to log-likelihood-based tests.
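As a concrete illustration (not taken from the talk itself), the CUSUM recursion mentioned above can be sketched in a few lines. The example assumes fully known unit-variance Gaussian pre- and post-change densities, for which the log-likelihood ratio has a simple closed form; the function names and the specific threshold are illustrative choices, not part of the speaker's work.

```python
import random

def cusum(samples, llr, threshold):
    """CUSUM recursion W_t = max(0, W_{t-1} + llr(x_t)).

    Returns the index of the first sample at which the statistic
    crosses the threshold (a change alarm), or None if it never does.
    """
    w = 0.0
    for t, x in enumerate(samples):
        w = max(0.0, w + llr(x))
        if w >= threshold:
            return t
    return None

# For pre-change N(0, 1) and post-change N(1, 1), the log-likelihood
# ratio reduces to log f1(x) - log f0(x) = x - 0.5.
llr = lambda x: x - 0.5

# Synthetic stream: the mean shifts from 0 to 1 at index 50.
random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(50)] + \
          [random.gauss(1.0, 1.0) for _ in range(50)]
alarm = cusum(samples, llr, threshold=5.0)
```

Before the change, the statistic drifts downward on average (negative expected log-likelihood ratio) and is clamped at zero; after the change the drift turns positive and the statistic climbs toward the threshold, which is what makes the detection quick.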

This motivates our work, in which we have developed Fisher-inspired methods for hypothesis testing, quickest change detection, and their robust versions (when the hypotheses, the pre- and post-change distributions, or both may not be exactly known) for unnormalized models. We will discuss our results and demonstrate their applications to various scenarios, including out-of-distribution detection.
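To give a flavor of why score-style statistics sidestep the partition function, here is a minimal sketch (my own illustrative example, not code from the papers) using the Hyvärinen score, a Fisher-divergence-inspired quantity that depends only on derivatives of the log-density and is therefore unchanged if the density is left unnormalized. The difference of Hyvärinen scores under the two models plays a role analogous to the log-likelihood ratio.

```python
def hyvarinen_score_gauss(x, mu, sigma2):
    """Hyvarinen score of a 1-D Gaussian N(mu, sigma2) at x:
    0.5 * (d/dx log p(x))**2 + d^2/dx^2 log p(x).

    Both terms involve only derivatives of log p, so any
    normalizing constant cancels out.
    """
    grad = -(x - mu) / sigma2   # d/dx log p(x)
    lap = -1.0 / sigma2         # d^2/dx^2 log p(x)
    return 0.5 * grad * grad + lap

def score_stat(x):
    """Score-based analogue of the log-likelihood ratio:
    null-model score minus alternative-model score.

    A smaller Hyvarinen score indicates a better fit, so a large
    positive value favors the alternative, as with the LLR.
    """
    h0 = hyvarinen_score_gauss(x, 0.0, 1.0)  # null: N(0, 1)
    h1 = hyvarinen_score_gauss(x, 1.0, 1.0)  # alternative: N(1, 1)
    return h0 - h1
```

In this toy unit-variance Gaussian case the score difference happens to coincide with the log-likelihood ratio x - 0.5; the point of the score-based construction is that it remains computable when only the score function (e.g. a deep network modeling the gradient of the log-density) is available.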

This talk is based on a series of papers with Taposh Banerjee, Enmao Diao, Jie Ding, Khalil Elkhalil and Suya Wu.

Biography:

Dr. Vahid Tarokh received his PhD from the University of Waterloo in 1995. He worked at AT&T Labs-Research until 2000. From 2000 to 2002, he was an Associate Professor at the Massachusetts Institute of Technology (MIT). In 2002, he joined Harvard University as a Hammond Vinton Hayes Senior Fellow of Electrical Engineering and Perkins Professor of Applied Mathematics. He joined Duke University in January 2018 as the Rhodes Family Professor of Electrical and Computer Engineering. He was also a Gordon Moore Distinguished Research Fellow at Caltech in 2018. From January 2019 to December 2021, he was also a Microsoft Data Science Investigator at Duke University.