BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:US/Eastern
BEGIN:DAYLIGHT
DTSTART:20180311T030000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20171105T010000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20180503T000000Z
UID:87D7985F-5A69-4E46-BEF8-E5A3C32C4F79
DTSTART;TZID=US/Eastern:20180205T190000
DTEND;TZID=US/Eastern:20180205T203000
DESCRIPTION:During our February monthly event\, Dr. Matthew Phillips from
  Kitware will present on a deep-learning topic in computer vision: unders
 tanding objects and scenes with Generative Adversarial Networks (
 GANs).\n\nGenerative adversarial networks (GANs) are a class of artificial
  intelligence algorithms used in unsupervised machine learning\, implement
 ed by a system of two neural networks contesting with each other in a zero
 -sum game framework.\n\nIn simple words\, GANs are artificial neural netwo
 rks that work together to give better answers. One neural network is the t
 ricky network\, and the other one is the useful network. The tricky networ
 k will try to give an input to the useful network that will cause the usef
 ul network to give a bad answer. The useful network will then learn not to
  give a bad answer\, and the tricky network will try to trick the useful n
 etwork again. As this continues\, the useful network will get better and n
 ot become tricked as often\, and the useful network can then be used to ma
 ke good predictions.\n\nGANs provide a way to learn deep representa
 tions without extensively annotated training data. They achieve this by de
 riving backpropagation signals from a competitive process involvin
 g a pair of networks.\n\nGANs have received wide attention in the machine 
 learning field because of their potential to learn high-dimensional\, comp
 lex real data. Specifically\, they make no explicit assumptions about the
  data distribution and can simply generate realistic samples from latent
  space. This powerful property has led GANs to be applied to tasks such a
 s image syn
 thesis\, image attribute editing\, image translation\, image super-resolut
 ion and classification.\n\nBio of the speaker:\n\nDr. Matthew Phillips rec
 eived a B.A. in philosophy and mathematics from Tufts University\, and he 
 received a Ph.D. in philosophy with a certificate in cognitive science fro
 m Rutgers University. After he completed his Ph.D.\, Dr. Phillips moved in
 to neuroscience\, where he first focused on visual/oculomotor psychophysic
 s. He later shifted to primate cellular electrophysiology and\, finally\, 
 to computational neuroscience. His focus on computational neuroscience bro
 ught him to Duke University and the Research Triangle Park (RTP). In the R
 TP\, Matt worked briefly as a C++ engineer and pursued side projects in ma
 chine learning. He now works full time in machine learning and computer vi
 sion at Kitware.\n\nDr. Phillips has received numerous awards\, grants and
  fellowships. In 2005\, he received the James S. McDonnell Foundation 21st
  Century Postdoctoral Fellowship Award\, which he declined. A year later\,
  he received a Fight for Sight fellowship. Then\, in 2008\, he accepted an
  award to present at the Advances in Computational Motor Control meeting. 
 In the same year\, he began a Postdoctoral Individual National Research Se
 rvice Award from the National Eye Institute of the National Institutes of 
 Health for “Using saccadic adaptation to probe the coordinate system of 
 parietal neurons.” Later\, in 2012\, he received a research grant from t
 he American Academy of Neurology for “Assessing efficiency of learning t
 he neurologic exam with a visual tracking device” with co-principal inve
 stigator James Noble\, MD.\n\nDr. Phillips has also contributed to conside
 rable applications development and open-source projects\, including signal
  processing algorithms\, neural waveform analysis\, and computer-vision-ba
 sed analytic
 s.\n\nPlease refer to the following link to learn more about Dr. Matthew P
 hillips:\n\nhttps://www.kitware.com/matthew-phillips/\n\nIEEE ENCS RA24 ch
 apter appreciates the passion\, drive\, and highly impressive efforts of
  Dr. Matthew Phillips and wishes him the very best in his career and life
 .\n\nAgenda
 : \n6:20-7:00pm Networking with pizza and soda\n\nThe times below are appr
 oximate and are given just as a guideline:\n\n7:00-7:10pm News and announc
 ements\n\n7:10-7:50pm Dr. Matthew Phillips' session on understanding objec
 ts and scenes with Generative Adversarial Networks.\n\n7:50-8:0
 0pm Brief Q/A from IEEE students\, hobbyists\, professionals and seniors\n
 \n8:00-8:30pm Show-n-Tell (if interested\, bring out your projects to show
  to other members)\n\nRoom: 1007\, Bldg: Engineering Building 1 (EB1)\, 91
 1 Partners Way\, Raleigh\, North Carolina\, United States\, 27606
LOCATION:Room: 1007\, Bldg: Engineering Building 1 (EB1)\, 911 Partners Way
 \, Raleigh\, North Carolina\, United States\, 27606
ORGANIZER:mailto:mbalasu@ncsu.edu
SEQUENCE:4
SUMMARY:Tripping the light GANtastic: Understanding objects and scenes with
  Generative Adversarial Networks - Dr. Matthew Phillips
URL;VALUE=URI:https://events.vtools.ieee.org/m/160206
X-ALT-DESC:Description: &lt;br /&gt;&lt;p&gt;During our February monthly event\, &lt;stron
 g&gt;Dr. Matthew Phillips &lt;/strong&gt;from Kitware will present on a deep-learni
 ng topic in computer vision:&lt;strong&gt; understanding objects and scenes with
  &lt;/strong&gt;&lt;strong&gt;Generative Adversarial Networks (GANs). &lt;/st
 rong&gt;&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Generative adversarial networks (GANs)&lt;/strong&gt;&amp;nbsp
 \;are a class of artificial intelligence algorithms used in unsupervised m
 achine learning\, implemented by a system of two neural networks contestin
 g with each other in a zero-sum game framework.&lt;/p&gt;\n&lt;p&gt;In simple words\,&lt;
 strong&gt; GANs&lt;/strong&gt;&amp;nbsp\;are artificial neural networks that work toget
 her to give better answers. One neural network is the tricky network\, and
  the other one is the useful network. The tricky network will try to give 
 an input to the useful network that will cause the useful network to give 
 a bad answer. The useful network will then learn not to give a bad answer\
 , and the tricky network will try to trick the useful network again. As th
 is continues\, the useful network will get better and not become tricked a
 s often\, and the useful network can then be used to make good pred
 ictions.&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;GANs&lt;/strong&gt; provide a way to learn deep represe
 ntations without extensively annotated training data. They achieve this by
  deriving backpropagation signals from a competitive process invol
 ving a pair of networks.&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;GANs&lt;/strong&gt; have received wide 
 attention in the machine learning field because of their potential to lear
 n high-dimensional\, complex real data. Specifically\, they make no expli
 cit assumptions about the data distribution and can simply generate reali
 stic samples from latent space. This powerful property has led GANs to be
  applied to tasks such as image synthesis\, image attribute editing\, ima
 ge translat
 ion\, image super-resolution and classification.&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;&lt;u&gt;Bio of
  the speaker:&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Dr. Matthew Phillips &lt;/strong&gt;r
 eceived a B.A. in philosophy and mathematics from Tufts University\, and h
 e received a Ph.D. in philosophy with a certificate in cognitive science f
 rom Rutgers University. &lt;strong&gt;After he completed his Ph.D.\, Dr. Phillip
 s moved into neuroscience\, where he first focused on visual/oculomotor ps
 ychophysics. He later shifted to primate cellular electrophysiology and\, 
 finally\, to computational neuroscience. His focus on computational neuros
 cience brought him to Duke University and the Research Triangle Park (RTP)
 . In the RTP\, Matt worked briefly as a C++ engineer and pursued side proj
 ects in machine learning. He now works full time in machine learning&amp;nbsp\
 ;and computer vision&amp;nbsp\;at Kitware.&lt;/strong&gt;&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Dr. Philli
 ps has received numerous awards\, grants and fellowships. In 2005\, he rec
 eived the James S. McDonnell Foundation 21st Century Postdoctoral Fellowsh
 ip Award\, which he declined. A year later\, he received a Fight for Sight
  fellowship. Then\, in 2008\, he accepted an award to present at the Advan
 ces in Computational Motor Control meeting. In the same year\, he began a 
 Postdoctoral Individual National Research Service Award from the National 
 Eye Institute of the National Institutes of Health for &amp;ldquo\;Using sacca
 dic adaptation to probe the coordinate system of parietal neurons.&amp;rdquo\;
  Later\, in 2012\, he received a research grant from the American Academy 
 of Neurology for &amp;ldquo\;Assessing efficiency of learning the neurologic e
 xam with a visual tracking device&amp;rdquo\; with co-principal investigator J
 ames Noble\, MD.&lt;/strong&gt;&lt;/p&gt;\n&lt;p&gt;Dr. Phillips has also contributed to con
 siderable applications development and open-source projects\, including si
 gnal processing algorithms\, neural waveform analysis\, and computer-visio
 n-based anal
 ytics.&lt;/p&gt;\n&lt;p&gt;Please refer to the following link to learn more about Dr.
  Matthew Phillips:&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;https://www.kitware.com/matthew-philli
 ps
 /&lt;/strong&gt;&lt;/p&gt;\n&lt;p&gt;IEEE ENCS RA24 chapter appreciates the passion\, drive\
 \, and highly impressive efforts of Dr. Matthew Phillips and wishes him th
 e very best in his career &amp;amp\; life.&lt;/p&gt;&lt;br /&gt;&lt;br /&gt;Agenda: &lt;br /&gt;&lt;p&gt;6:2
 0-7:00p
 m Networking with pizza and soda&lt;/p&gt;\n&lt;p&gt;The times below are approximate a
 nd are given just as a guideline:&lt;/p&gt;\n&lt;p&gt;7:00-7:10pm News and announcemen
 ts&lt;/p&gt;\n&lt;p&gt;7:10-7:50pm Dr. Matthew Phillips' session on understanding obje
 cts and scenes with Generative Adversarial Networks.&lt;/p&gt;\n&lt;p&gt;7:
 50-8:00pm Brief Q/A from IEEE students\, hobbyists\, professionals and sen
 iors&lt;/p&gt;\n&lt;p&gt;8:00-8:30pm Show-n-Tell (if interested\, bring out your proje
 cts to show to other members)&lt;/p&gt;
END:VEVENT
END:VCALENDAR