BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
BEGIN:DAYLIGHT
DTSTART:20231001T020000
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=10
TZNAME:AEDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20240407T030000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=4
TZNAME:AEST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240210T060035Z
UID:4B9C3570-A2C9-46AC-B6CB-AA88FDCCAA89
DTSTART;TZID=Australia/Melbourne:20240207T170000
DTEND;TZID=Australia/Melbourne:20240207T183000
DESCRIPTION:In this era of AI\, deep neural networks (DNNs) are often
  viewed as an “all-cure” solution\; their success in a wide range of
  problems needs no mention. Unfortunately\, such systems are generally
  black boxes\, only loosely related to the biological neural networks
  (brains) that are claimed to develop “intelligence”. Although it is
  hard to define “intelligence”\, achieving human-level performance (or
  even better) in different decision-making problems using systems with
  billions or trillions of free parameters trained on huge data sets may
  not necessarily make a system intelligent! In this context\, we shall
  discuss how the design of some AI systems drew inspiration from
  neuroscience\, knowingly or unknowingly. We argue that to realize
  human-like intelligence\, a closer interaction between discoveries in
  neuroscience and the design of AI systems is needed. However\, we need
  to acknowledge that the brain is probably the most complex object in
  the known world\, with more unknowns than knowns\, although we know a
  lot about it. Truly brain-inspired models may be able to put a brake on
  the apparently unsustainable philosophy of “the bigger the better”:
  bigger architectures and bigger datasets. We shall then discuss some of
  our attempts to exploit neuroscience models and discoveries to develop
  pattern-recognition systems (we intentionally avoid the term AI). In
  particular\, we shall demonstrate that exploiting\, at a high level\,
  some findings from the cat’s visual cortex can make a multilayer
  perceptron a bit more comprehensible. We shall also discuss how
  computational models of cells such as Lateral Geniculate Nucleus (LGN)
  cells and Retinal Ganglion Cells can be used to extract features from
  images for use as inputs to CNNs\, with a view to investigating whether
  this can improve the complexity and performance of the system. The
  answers turn out to be affirmative.\n\nCo-sponsored by: Swinburne
  University of Technology\n\nSpeaker(s): Prof. Nikhil R.
  Pal\n\nRoom: Room 202\, Bldg: AGSE202 (Australian Graduate School of
  Entrepreneurship Building\, Room 202)\, Wakefield St\, Hawthorn VIC
  3122\, Australia\, Melbourne\, Victoria\, Australia\, 3122\, Virtual:
  https://events.vtools.ieee.org/m/404166
LOCATION:Room: Room 202\, Bldg: AGSE202 (Australian Graduate School of Entr
 epreneurship Building\, Room 202)\, Wakefield St\, Hawthorn VIC 3122\, Aus
 tralia\, Melbourne\, Victoria\, Australia\, 3122\, Virtual: https://events
 .vtools.ieee.org/m/404166
ORGANIZER:mailto:saeid.nahavandi@ieee.org
SEQUENCE:4
SUMMARY:Artificial Intelligence with/without Biological Intelligence: Some 
 Tidbits (Interplay between AI and BI)
URL;VALUE=URI:https://events.vtools.ieee.org/m/404166
X-ALT-DESC:Description: <br /><p style="font-weight: 400\;">In this era
  of AI\, deep neural networks (DNNs) are often viewed as an
  &ldquo\;all-cure&rdquo\; solution\; their success in a wide range of
  problems needs no mention. Unfortunately\, such systems are generally
  black boxes\, only loosely related to the biological neural networks
  (brains) that are claimed to develop &ldquo\;intelligence&rdquo\;.
  Although it is hard to define &ldquo\;intelligence&rdquo\;\, achieving
  human-level performance (or even better) in different decision-making
  problems using systems with billions or trillions of free parameters
  trained on huge data sets may not necessarily make a system
  intelligent! In this context\, we shall discuss how the design of some
  AI systems drew inspiration from neuroscience\, knowingly or
  unknowingly. We argue that to realize human-like intelligence\, a
  closer interaction between discoveries in neuroscience and the design
  of AI systems is needed. However\, we need to acknowledge that the
  brain is probably the most complex object in the known world\, with
  more unknowns than knowns\, although we know a lot about it. Truly
  brain-inspired models may be able to put a brake on the apparently
  unsustainable philosophy of &ldquo\;the bigger the better&rdquo\;:
  bigger architectures and bigger datasets. We shall then discuss some of
  our attempts to exploit neuroscience models and discoveries to develop
  pattern-recognition systems (we intentionally avoid the term AI). In
  particular\, we shall demonstrate that exploiting\, at a high level\,
  some findings from the cat&rsquo\;s visual cortex can make a multilayer
  perceptron a bit more comprehensible. We shall also discuss how
  computational models of cells such as Lateral Geniculate Nucleus (LGN)
  cells and Retinal Ganglion Cells can be used to extract features from
  images for use as inputs to CNNs\, with a view to investigating whether
  this can improve the complexity and performance of the system. The
  answers turn out to be affirmative.</p>
END:VEVENT
END:VCALENDAR