BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
DTSTART:20260308T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20251102T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260106T191649Z
UID:71E9336B-AE9B-4E24-8DED-81FD1D3F118F
DTSTART;TZID=America/New_York:20251111T100000
DTEND;TZID=America/New_York:20251111T113000
DESCRIPTION:The Random Neural Network (RNN) is a mathematical model that 
 has the required “learning” ability of a neural network\, since it is 
 a universal approximator for continuous and bounded functions. In neural 
 network terminology\, it is a “recurrent” model in the sense that it 
 can — in general — incorporate feedback loops\, and yet still has a 
 well-defined unique solution despite its non-linear computational 
 structure. In essence\, the RNN is a continuous-time\, 
 discrete-state-space multi-dimensional Markov chain whose states are the 
 n-vectors {k} of natural numbers\, where each natural number represents 
 the instantaneous “excitation level” or “discrete internal 
 voltage” of each of the n neurons.\n\nIn this presentation we shall 
 first define the RNN model and derive its Chapman-Kolmogorov 
 (differential-difference) equations that characterize the underlying 
 Markov chain. We will show that under certain conditions\, it has a 
 unique stationary solution that is obtained from an “exact non-linear 
 mean-field equation”. Furthermore\, similar to certain queueing 
 networks (Jackson\, BCMP) which have linear mean-field equations\, the 
 RNN has a Product Form Solution\, so that its stationary probability 
 distribution is the product of the marginal distributions associated 
 with each individual neuron. The analytical structure we have described 
 leads to an O(n^3) gradient-based deep learning algorithm\, and to the 
 use of other optimization techniques such as FISTA. Based on these 
 results we will illustrate the use of the RNN for very diverse 
 applications\, such as patented anomaly detection from Magnetic 
 Resonance Images\, color texture learning and generation\, 
 reinforcement-learning-based packet network routing\, and the detection 
 of botnets and other cyberattacks.\n\nSpeaker(s): Erol Gelenbe\n\nRoom: 
 1302\, Bldg: DC\, 200 University Ave W.\, Waterloo\, Ontario\, Canada\, 
 N2L 3G1\, Virtual: https://events.vtools.ieee.org/m/531578
LOCATION:Room: 1302\, Bldg: DC\, 200 University Ave W.\, Waterloo\, Ontario
 \, Canada\, N2L 3G1\, Virtual: https://events.vtools.ieee.org/m/531578
ORGANIZER:mailto:mohammad.salahuddin@ieee.org
SEQUENCE:26
SUMMARY:The Random Neural Network and its Applications to Image Processing\
 , Network Routing\, and Cyberattack Detection
URL;VALUE=URI:https://events.vtools.ieee.org/m/531578
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p class="paragraph">The 
 Random Neural Network (RNN) is a mathematical model that has the 
 required &ldquo\;learning&rdquo\; ability of a neural network\, since it 
 is a universal approximator for continuous and bounded functions. In 
 neural network terminology\, it is a &ldquo\;recurrent&rdquo\; model in 
 the sense that it can &mdash\; in general &mdash\; incorporate feedback 
 loops\, and yet still has a well-defined unique solution despite its 
 non-linear computational structure. In essence\, the RNN is a 
 continuous-time\, discrete-state-space multi-dimensional Markov chain 
 whose states are the n-vectors {k} of natural numbers\, where each 
 natural number represents the instantaneous &ldquo\;excitation 
 level&rdquo\; or &ldquo\;discrete internal voltage&rdquo\; of each of 
 the n neurons.</p>\n<p class="paragraph">In this presentation we shall 
 first define the RNN model and derive its Chapman-Kolmogorov 
 (differential-difference) equations that characterize the underlying 
 Markov chain. We will show that under certain conditions\, it has a 
 unique stationary solution that is obtained from an &ldquo\;exact 
 non-linear mean-field equation&rdquo\;. Furthermore\, similar to certain 
 queueing networks (Jackson\, BCMP) which have linear mean-field 
 equations\, the RNN has a Product Form Solution\, so that its stationary 
 probability distribution is the product of the marginal distributions 
 associated with each individual neuron. The analytical structure we have 
 described leads to an O(n^3) gradient-based deep learning algorithm\, 
 and to the use of other optimization techniques such as FISTA. Based on 
 these results we will illustrate the use of the RNN for very diverse 
 applications\, such as patented anomaly detection from Magnetic 
 Resonance Images\, color texture learning and generation\, 
 reinforcement-learning-based packet network routing\, and the detection 
 of botnets and other cyberattacks.</p>
END:VEVENT
END:VCALENDAR