BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Asia/Kolkata
BEGIN:STANDARD
DTSTART:19451014T230000
TZOFFSETFROM:+0630
TZOFFSETTO:+0530
TZNAME:IST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230917T073100Z
UID:B2997D98-F36D-4773-B46B-3A1FCDFC0DA9
DTSTART;TZID=Asia/Kolkata:20230915T151500
DTEND;TZID=Asia/Kolkata:20230915T171500
DESCRIPTION:Throughout the event\, attendees witnessed the demystification 
 of the world of DNN as its true potential in data analysis was unlocked. L
 earning with meaningful constraints and the implementation of sparse codin
 g techniques left a profound impact on feature extraction. By imposing lim
 itations and emphasizing sparsity\, distinctive and sparse discriminative 
 features were extracted from the data. These features became essential com
 ponents of the original sets of objects and were visually represented as s
 parse basis vectors.\nFurthermore\, the integration of sparse basis functi
 ons\, such as receptive fields or filters\, enhanced transparency by effec
 tively overlaying and reconstructing them with minimal reconstruction erro
 r. This approach ensured a clear and comprehensive understanding of the un
 derlying concepts\, marking a significant milestone in the field of data r
 epresentation.\n\nTechniques discussed are:\n(1) Nonnegative Matrix Factor
 ization\, which reduces the number of basis vectors and allows extraction 
 of latent features that are additive and hence interpretable for humans.\n
 (2) A classic error backpropagation (EBP) architecture can also be traine
 d under the constraints of nonnegativity and sparseness. The resulting cl
 assifiers allow for identification of parts of the objects\, encoded as re
 ceptive fields developed by the weights of hidden neurons. The results ar
 e illustrated with MNIST handwritten digit classifiers and Reuters-21578 
 text categorization.\n(3) Const
 rained learning of sparse non-negative weights in auto-encoders also allow
 s for discovery of additive latent factors. Our experiments with MNIST dat
 asets compare the auto-encoder accuracy for various training conditions. T
 hey indicate enhanced interpretability and insights through identification
  of parts of complex input objects\, traded off for a small reduction in re
 cognition accuracy or classification error.\n\nSpeaker(s): Jacek M. Zurada
 \n\nAgenda: \nThe event offered attendees a unique opportunity to experien
 ce the future of data\nrepresentation. Deep neural networks (DNN) had 
 long grappled with issues related to\ntransparency and complex mappings in
  discriminative data representation. However\,\nduring the event\, experts
  unveiled cutting-edge techniques that addressed these\nchallenges\, ultim
 ately bidding farewell to the complexities caused by cancellations of\npos
 itive and negative terms. These breakthroughs ushered in a new era of clar
 ity and\nefficiency in the computation process.\n\nRoom: Seminar Hall\, Bl
 dg: Department of CSE\, JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABA
 D\, KUKATPALLY HOUSING BOARD COLONY\, KUKATPALLY\, Hyderabad\, Andhra Prad
 esh\, India\, 500085\, Virtual: https://events.vtools.ieee.org/m/373231
LOCATION:Room: Seminar Hall\, Bldg: Department of CSE\, JAWAHARLAL NEHRU TE
 CHNOLOGICAL UNIVERSITY HYDERABAD\, KUKATPALLY HOUSING BOARD COLONY\, KUKAT
 PALLY\, Hyderabad\, Andhra Pradesh\, India\, 500085\, Virtual: https://eve
 nts.vtools.ieee.org/m/373231
SEQUENCE:20
SUMMARY:Inside the Black Box: Deep Learning with Sparse Coding\, Additive F
 eatures and Nonnegative Matrix Decompositions
URL;VALUE=URI:https://events.vtools.ieee.org/m/373231
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p>Throughout the event\,
  attendees witnessed the demystification of the world of DNN as its true
  potential in data analysis was unlocked. Learning with meaningful
  constraints and the implementation of sparse coding techniques left a
  profound impact on feature extraction. By imposing limitations and
  emphasizing sparsity\, distinctive and sparse discriminative features
  were extracted from the data. These features became essential
  components of the original sets of objects and were visually
  represented as sparse basis vectors.<br />Furthermore\, the integration
  of sparse basis functions\, such as receptive fields or filters\,
  enhanced transparency by effectively overlaying and reconstructing
  them with minimal reconstruction error. This approach ensured a clear
  and comprehensive understanding of the underlying concepts\, marking a
  significant milestone in the field of data representation.</p>\n
 <p>Techniques discussed are:<br />(1) Nonnegative Matrix Factorization\,
  which reduces the number of basis vectors and allows extraction of
  latent features that are additive and hence interpretable for
  humans.<br />(2) A classic error backpropagation (EBP) architecture can
  also be trained under the constraints of nonnegativity and sparseness.
  The resulting classifiers allow for identification of parts of the
  objects\, encoded as receptive fields developed by the weights of hidden
  neurons. The results are illustrated with MNIST handwritten digit
  classifiers and Reuters-21578 text categorization.<br />(3) Constrained
  learning of sparse non-negative weights in auto-encoders also allows for
  discovery of additive latent factors. Our experiments with MNIST
  datasets compare the auto-encoder accuracy for various training
  conditions. They indicate enhanced interpretability and insights
  through identification of parts of complex input objects\, traded off
  for a small reduction in recognition accuracy or classification
  error.</p><br /><br />Agenda: <br /><p>The event offered attendees a
  unique opportunity to experience the future of data&nbsp\;
 <br />representation. Deep neural networks (DNN) had long grappled with
  issues related to&nbsp\;<br />transparency and complex mappings in
  discriminative data representation. However\,&nbsp\;<br />during the
  event\, experts unveiled cutting-edge techniques that addressed
  these&nbsp\;<br />challenges\, ultimately bidding farewell to the
  complexities caused by cancellations of&nbsp\;<br />positive and
  negative terms. These breakthroughs ushered in a new era of clarity
  and&nbsp\;<br />efficiency in the computation process.</p>
END:VEVENT
END:VCALENDAR