BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Asia/Kolkata
BEGIN:STANDARD
DTSTART:19451014T230000
TZOFFSETFROM:+0630
TZOFFSETTO:+0530
TZNAME:IST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260204T182946Z
UID:936D876B-7E10-4824-B96A-946C69360DB9
DTSTART;TZID=Asia/Kolkata:20240611T143000
DTEND;TZID=Asia/Kolkata:20240611T153000
DESCRIPTION:The success of deep learning (DL) and convolutional neural netw
 orks (CNN) has also highlighted that NN-based analysis of signals and imag
 es of large sizes poses a considerable challenge\, as the number of NN wei
 ghts increases exponentially with data volume – the so called Curse of D
 imensionality. In addition\, the largely ad-hoc fashion of their developme
 nt\, albeit one reason for their rapid success\, has also brought to light
  the intrinsic limitations of CNNs\, in particular those related to their
  black-box nature. To this end\, we revisit the operation of CNNs from firs
 t principles and show that their key component – the convolutional layer
  – effectively performs matched filtering of its inputs with a set of te
 mplates (filters\, kernels) of interest. This serves as a vehicle to estab
 lish a compact matched-filtering perspective of the whole convolution-acti
 vation-pooling chain\, which allows for a theoretically well-founded and p
 hysically meaningful insight into the overall operation of CNNs. This is s
 hown to help mitigate their interpretability and explainability issues\, t
 ogether with providing intuition for further developments and novel physic
 ally meaningful ways of their initialisation. Such an approach is next ext
 ended to Graph CNNs (GCNNs)\, which benefit from the universal function ap
 proximation property of NNs\, pattern matching inherent to CNNs\, and the 
 ability of graphs to operate on nonlinear domains. GCNNs are revisited sta
 rting from the notion of a system on a graph\, which serves to establish a
  matched-filtering interpretation of the whole convolution-activation-pool
 ing chain within GCNNs\, while inheriting the rigour and intuition from si
 gnal detection theory. This both sheds new light on the otherwise black-
 box approach to GCNNs and provides a well-motivated and physically meaningfu
 l interpretation at every step of the operation and adaptation of GCNNs. I
 t is our hope that the incorporation of domain knowledge\, which is centra
 l to this approach\, will help demystify CNNs and GCNNs\, together with es
 tablishing a common language between the diverse communities working on De
 ep Learning and opening novel avenues for their further development.\n\nSp
 eaker(s): Dr. Danilo P. Mandic\n\nVirtual: https://events.vtools.ieee.o
 rg/m/423821
LOCATION:Virtual: https://events.vtools.ieee.org/m/423821
ORGANIZER:ieee.sps.sb.iitkgp@gmail.com
SEQUENCE:16
SUMMARY:IEEE SPS SBC Webinar: Interpretable Convolutional NNs and Graph CNN
 s (By Dr. Danilo P. Mandic)
URL;VALUE=URI:https://events.vtools.ieee.org/m/423821
END:VEVENT
END:VCALENDAR

