Inside the Black Box: Deep Learning with Sparse Coding, Additive Features and Nonnegative Matrix Decompositions

#technical #ieeehyderabad #jntuhieee #grietieee

The esteemed CIS/GRSS Joint Chapter, IEEE Hyderabad Section, in collaboration with 
the Department of CSE, JNTUH-UCESTH, and the GRIET CIS SB Chapter, successfully 
organized the John McCarthy Memorial Lecture (JML). The event, titled "Inside the Black 
Box: Deep Learning with Sparse Coding, Additive Features, and Nonnegative Matrix 
Decompositions," was held on September 15, 2023, from 3:15 pm to 5:15 pm.


Throughout the event, attendees saw the inner workings of deep neural networks (DNNs) demystified as their potential in data analysis was explored. Learning with meaningful constraints, combined with sparse coding techniques, was shown to have a profound impact on feature extraction: by imposing nonnegativity constraints and emphasizing sparsity, distinctive and discriminative features can be extracted from the data. These features correspond to essential parts of the original objects and are visually represented as sparse basis vectors.
Furthermore, sparse basis functions such as receptive fields or filters enhance transparency: overlaying them reconstructs the input with minimal reconstruction error, giving a clear and comprehensible view of what the network has learned and marking a significant step forward in data representation.
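The overlay-and-reconstruct idea can be made concrete with a minimal sparse-coding sketch. Everything below (the random dictionary, the toy signal, and the ISTA solver) is an illustrative assumption, not material from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dictionary of 8 basis vectors ("atoms") in 16 dimensions.
n_atoms, dim = 8, 16
D = rng.normal(size=(n_atoms, dim))
D /= np.linalg.norm(D, axis=1, keepdims=True)   # unit-norm atoms

# A signal that is an additive overlay of just two atoms.
x = 1.5 * D[2] + 0.8 * D[5]

def sparse_code_ista(x, D, lam=0.1, n_iter=200):
    """Minimize 0.5*||x - a @ D||^2 + lam*||a||_1 over codes a (ISTA)."""
    L = np.linalg.norm(D @ D.T, 2)              # Lipschitz constant of the gradient
    a = np.zeros(D.shape[0])
    for _ in range(n_iter):
        grad = (a @ D - x) @ D.T                # gradient of the quadratic term
        z = a - grad / L                        # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

a = sparse_code_ista(x, D)
print("active atoms:", np.nonzero(np.abs(a) > 0.05)[0])
print("reconstruction error:", np.linalg.norm(x - a @ D))
```

The recovered code is sparse: essentially only atoms 2 and 5 carry weight, and overlaying those two scaled atoms reconstructs the signal with small error, which is the sense in which sparse basis vectors act as interpretable parts.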

The techniques discussed were:
(1) Nonnegative Matrix Factorization (NMF), which reduces the number of basis vectors and allows extraction of latent features that are additive and hence interpretable for humans.
(2) A classic error backpropagation (EBP) architecture can also be trained under nonnegativity and sparseness constraints. The resulting classifiers allow identification of parts of objects, encoded as receptive fields developed by the weights of hidden neurons. The results are illustrated with MNIST handwritten-digit classifiers and Reuters-21578 text categorization.
(3) Constrained learning of sparse nonnegative weights in autoencoders also allows discovery of additive latent factors. The experiments with the MNIST dataset compare autoencoder accuracy under various training conditions. They indicate enhanced interpretability and insight through identification of parts of complex input objects, traded off against a small reduction in recognition accuracy or classification error.
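Technique (1) can be sketched with the classic Lee-Seung multiplicative updates for NMF. The synthetic data, rank, and iteration count below are all assumptions for demonstration, not code from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic nonnegative data: 100 samples, each an additive mix of 4 "parts".
parts = rng.random((4, 20))
V = rng.random((100, 4)) @ parts               # shape (100, 20), all entries >= 0

def nmf(V, r, n_iter=500, eps=1e-9):
    """Factor V ~= W @ H with W, H >= 0 (Lee-Seung multiplicative updates)."""
    n, m = V.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update keeps H nonnegative
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update keeps W nonnegative
    return W, H

W, H = nmf(V, r=4)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print("relative reconstruction error:", rel_err)
```

Because W and H stay nonnegative throughout, each row of V is approximated as a purely additive combination of the rows of H, with no cancellation between positive and negative terms; this additivity is what makes the latent features human-interpretable.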
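Techniques (2) and (3) both hinge on constraining weights to be nonnegative and sparse during training. The sketch below uses a tied-weight linear autoencoder with an L1 penalty and box constraints, solved with a generic bound-constrained optimizer rather than EBP, and toy data in place of MNIST; all sizes, names, and hyperparameters are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Toy stand-in for MNIST: nonnegative samples mixing 4 additive "parts".
parts = rng.random((4, 16))
X = rng.random((200, 4)) @ parts               # shape (200, 16)

n_hidden, dim = 4, 16
lam = 1e-3                                     # L1 sparsity weight on W

def loss_and_grad(w_flat):
    """Tied autoencoder loss 0.5*||X W^T W - X||^2 / n + lam*sum(W), W >= 0."""
    W = w_flat.reshape(n_hidden, dim)
    H = X @ W.T                                # hidden code
    E = H @ W - X                              # reconstruction error
    n = len(X)
    # Since W >= 0 under the box constraints, sum(W) equals the L1 norm.
    loss = 0.5 * np.sum(E ** 2) / n + lam * W.sum()
    grad = (W @ (E.T @ X) + H.T @ E) / n + lam
    return loss, grad.ravel()

# Nonnegativity enforced with box constraints W >= 0.
w0 = rng.random(n_hidden * dim) * 0.1
res = minimize(loss_and_grad, w0, jac=True, method="L-BFGS-B",
               bounds=[(0, None)] * w0.size)
W = res.x.reshape(n_hidden, dim)

rel_err = np.linalg.norm(X @ W.T @ W - X) / np.linalg.norm(X)
print("relative reconstruction error:", rel_err)
print("fraction of near-zero weights:", np.mean(W < 1e-4))
```

The bound constraint keeps every weight nonnegative, so each hidden unit's weight row can be read as an additive "part" of the input, at the cost of some reconstruction accuracy relative to an unconstrained autoencoder; this mirrors the interpretability-versus-accuracy trade-off reported for MNIST.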



  Date and Time

  • Date: 15 Sep 2023
  • Time: 03:15 PM to 05:15 PM
  • All times are (UTC+05:30) Chennai

  Location

  • JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD
  • KUKATPALLY HOUSING BOARD COLONY, KUKATPALLY
  • Hyderabad, Telangana
  • India 500085
  • Building: Department of CSE
  • Room Number: Seminar Hall


  Speakers

Jacek M. Zurada

Topic:

Inside the Black Box: Deep Learning with Sparse Coding, Additive Features and Nonnegative Matrix Decompositions

Biography:

Jacek M. Zurada (M'82-SM'83-F'96-LF'14) serves as a Professor of electrical and computer engineering at the University of Louisville, Louisville, KY, USA. He has authored or co-authored several books and over 450 papers in computational intelligence, neural networks, deep learning, logic rule extraction, and bioinformatics, cited over 18,000 times (Google Scholar), and has delivered over 120 presentations throughout the world. He has served as a Distinguished Speaker for three IEEE Societies. Dr. Zurada has been a Board Member of the IEEE, IEEE CIS, IEEE CSS, and INNS. He was a recipient of the 2013 Joe Desch Innovation Award, the 2015 Distinguished Presidential Service Award, and five honorary professorships. In 2022, he received an Honorary Doctorate (Dr. H.C.) from Czestochowa University of Technology, Poland. He served as IEEE Vice President and Technical Activities Board (TAB) Chair in 2014. From 2010 to 2013, he was the Chair of the IEEE Periodicals Committee and the IEEE Periodicals Review and Advisory Committee. From 2004 to 2005, he was the President of the IEEE Computational Intelligence Society, and he served as the Editor-in-Chief of the IEEE Transactions on Neural Networks (1997-2003). He was a nominee for IEEE President in 2019 and 2020.

Address: United States





Agenda

The event offered attendees a unique opportunity to explore the future of data 
representation. Deep neural networks (DNNs) have long grappled with limited 
transparency and with complex mappings in discriminative data representation. During 
the event, the speaker presented techniques that address these challenges by enforcing 
nonnegativity, which eliminates cancellations of positive and negative terms and 
thereby brings a new level of clarity and efficiency to the computation.



Overall, it was an interactive session in which attendees had the opportunity to ask 
questions, and the professor kindly addressed them, creating a friendly atmosphere. At 
the end of the event, Professor Jacek M. Zurada was honored and felicitated by the 
advisors and heads of departments from JNTUH and GRIET.