IEEE SPS SBC Webinar: Interpretable Convolutional NNs and Graph CNNs (By Dr. Danilo P. Mandic)
The success of deep learning (DL) and convolutional neural networks (CNNs) has also highlighted that NN-based analysis of signals and images of large sizes poses a considerable challenge, as the number of NN weights increases exponentially with data volume – the so-called Curse of Dimensionality. In addition, the largely ad-hoc fashion of their development, albeit one reason for their rapid success, has also brought to light the intrinsic limitations of CNNs, in particular those related to their black-box nature. To this end, we revisit the operation of CNNs from first principles and show that their key component – the convolutional layer – effectively performs matched filtering of its inputs with a set of templates (filters, kernels) of interest. This serves as a vehicle to establish a compact matched-filtering perspective of the whole convolution-activation-pooling chain, which allows for a theoretically well-founded and physically meaningful insight into the overall operation of CNNs. This is shown to help mitigate their interpretability and explainability issues, together with providing intuition for further developments and novel physically meaningful ways of their initialisation. Such an approach is next extended to Graph CNNs (GCNNs), which benefit from the universal function approximation property of NNs, the pattern matching inherent to CNNs, and the ability of graphs to operate on nonlinear domains. GCNNs are revisited starting from the notion of a system on a graph, which serves to establish a matched-filtering interpretation of the whole convolution-activation-pooling chain within GCNNs, while inheriting the rigour and intuition from signal detection theory. This both sheds new light onto the otherwise black-box approach to GCNNs and provides a well-motivated and physically meaningful interpretation at every step of the operation and adaptation of GCNNs.
It is our hope that the incorporation of domain knowledge, which is central to this approach, will help demystify CNNs and GCNNs, together with establishing a common language between the diverse communities working on Deep Learning and opening novel avenues for their further development.
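The matched-filtering view described in the abstract can be illustrated with a minimal NumPy sketch (not from the talk itself; the function name, signal, and template are illustrative assumptions). A convolutional layer's sliding dot product is cross-correlation of the input with a template, ReLU keeps positive matches, and max-pooling reports the strongest match per window – so the chain acts as a template detector:

```python
import numpy as np

def conv_act_pool(x, template, pool=4):
    """Matched-filtering sketch of one CNN layer:
    cross-correlate x with a template, apply ReLU, then max-pool."""
    n = len(x) - len(template) + 1
    # CNN "convolution" is a sliding dot product (cross-correlation):
    # each response measures how well the template matches x at that shift.
    responses = np.array([x[i:i + len(template)] @ template for i in range(n)])
    activated = np.maximum(responses, 0.0)        # ReLU keeps positive matches
    trimmed = activated[: (len(activated) // pool) * pool]
    # Max-pooling keeps the strongest match in each window of `pool` shifts.
    return trimmed.reshape(-1, pool).max(axis=1)

# Plant a known template in noise and detect it via the matched filter.
rng = np.random.default_rng(0)
template = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
x = 0.1 * rng.standard_normal(64)
x[30:35] += template            # plant the pattern at shift 30
pooled = conv_act_pool(x, template)
best = pooled.argmax()          # pooled window containing the planted template
```

The largest pooled response lands in the window covering shift 30, which is the signal-detection intuition the talk builds on: the learned kernels of a CNN act as templates, and high layer outputs flag where those templates occur in the input.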
Date and Time
- Date: 11 Jun 2024
- Time: 02:30 PM to 03:30 PM
- All times are (UTC+05:30) Chennai
Speakers
Dr. Danilo P. Mandic
Interpretable Convolutional NNs and Graph CNNs: Role of Domain Knowledge
Biography:
Danilo P. Mandic is a Professor of Machine Intelligence with Imperial College London, UK, and has been working in the areas of machine intelligence, statistical signal processing, big data, data analytics on graphs, bioengineering, and financial modelling. He is a Fellow of the IEEE and the current President of the International Neural Networks Society (INNS). Dr Mandic is a Director of the Financial Machine Intelligence Lab at Imperial, and has more than 600 publications in international journals and conferences. He has published two research monographs on neural networks, entitled “Recurrent Neural Networks for Prediction”, Wiley 2001, and “Complex Valued Nonlinear Adaptive Filters: Noncircularity, Widely Linear and Neural Models”, Wiley 2009 (both first books in their respective areas), and has co-edited books on Data Fusion (Springer 2008) and Neuro- and Bio-Informatics (Springer 2012). He has also co-authored a two-volume research monograph on tensor networks for Big Data, entitled “Tensor Networks for Dimensionality Reduction and Large Scale Optimization” (Now Publishers, 2016 and 2017), and more recently a research monograph on Data Analytics on Graphs (Now Publishers, 2021).
Dr Mandic is a 2019 recipient of the Dennis Gabor Award for "Outstanding Achievements in Neural Engineering", given by the International Neural Networks Society. He is the 2023 winner of the Prize Paper Award, given by the IEEE Engineering in Medicine and Biology Society for his Smart Helmet article, the 2018 winner of the Best Paper Award in IEEE Signal Processing Magazine for his article on tensor decompositions for signal processing applications, and the 2021 winner of the Outstanding Paper Award in the International Conference on Acoustics, Speech and Signal Processing (ICASSP) series of conferences. Dr Mandic has served in various roles in the World Congress on Computational Intelligence (WCCI) and International Joint Conference on Neural Networks (IJCNN) series of conferences, and as an Associate Editor for IEEE Transactions on Neural Networks and Learning Systems, IEEE Signal Processing Magazine, and IEEE Transactions on Signal Processing. He has given more than 70 keynote and tutorial lectures at international conferences and was appointed by the World University Service (WUS) as a Visiting Lecturer within the Brain Gain Program (BGP) in 2015.
Danilo is currently serving as a Distinguished Lecturer for the IEEE Computational Intelligence Society and a Distinguished Lecturer for the IEEE Signal Processing Society. Dr Mandic is a 2014 recipient of the President's Award for Excellence in Postgraduate Supervision at Imperial College and holds six patents.
Address: Imperial College London, UK