Tripping the light GANtastic: Understanding objects and scenes with Generative Adversarial Networks - Dr. Matthew Phillips

#Computer #Vision #Artificial #Intelligence #Artificial #Neural #Networks #Deep #Learning #Machine #Learning


During our February monthly event, Dr. Matthew Phillips from Kitware will present on a deep learning topic in computer vision: understanding objects and scenes with Generative Adversarial Networks (GANs).

Generative adversarial networks (GANs) are a class of artificial intelligence algorithms used in unsupervised machine learning, implemented as a system of two neural networks competing with each other in a zero-sum game framework.

In simpler terms, a GAN is a pair of artificial neural networks that work against each other to produce better answers. One is the "tricky" network (the generator) and the other is the "useful" network (the discriminator). The tricky network tries to feed the useful network inputs that make it give a bad answer; the useful network then learns not to give that bad answer, and the tricky network tries to trick it again. As this continues, the useful network is tricked less and less often, until it can be relied on to make good predictions.

GANs provide a way to learn deep representations without extensively annotated training data. They achieve this by deriving backpropagation signals through a competitive process involving a pair of networks.

GANs have received wide attention in the machine learning field because of their potential to model high-dimensional, complex real-world data. Specifically, they make no explicit assumptions about the data distribution and can generate realistic samples directly from a latent space. This powerful property has led GANs to be applied to various tasks such as image synthesis, image attribute editing, image translation, image super-resolution, and classification.
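To make the adversarial setup concrete, here is a minimal sketch of a GAN training loop in PyTorch, fitting a toy one-dimensional Gaussian; the architectures, hyperparameters, and data are illustrative assumptions for this announcement, not code from the talk.

    import torch
    import torch.nn as nn

    # Toy "real" data: samples from a Gaussian with mean 4.0 that the GAN should learn to mimic.
    def real_batch(n):
        return torch.randn(n, 1) * 1.25 + 4.0

    # Generator (the "tricky" network): maps latent noise vectors to fake samples.
    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

    # Discriminator (the "useful" network): scores how likely a sample is to be real.
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

    opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()
    ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

    for step in range(2000):
        # Train the discriminator: label real data 1 and generated data 0.
        fake = G(torch.randn(64, 8)).detach()  # detach so only D is updated here
        loss_D = bce(D(real_batch(64)), ones) + bce(D(fake), zeros)
        opt_D.zero_grad(); loss_D.backward(); opt_D.step()

        # Train the generator: push D to label fresh fakes as real. The backpropagation
        # signal for G flows through D -- the competitive process described above.
        loss_G = bce(D(G(torch.randn(64, 8))), ones)
        opt_G.zero_grad(); loss_G.backward(); opt_G.step()

    # After training, sampling from the latent space should yield realistic samples (mean near 4.0).
    print(G(torch.randn(1000, 8)).mean().item())

After training, the generator alone turns random latent vectors into data-like samples, which is the property behind the image synthesis and editing applications mentioned above.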

Bio of the speaker:

Dr. Matthew Phillips received a B.A. in philosophy and mathematics from Tufts University and a Ph.D. in philosophy, with a certificate in cognitive science, from Rutgers University. After completing his Ph.D., Dr. Phillips moved into neuroscience, where he first focused on visual/oculomotor psychophysics. He later shifted to primate cellular electrophysiology and, finally, to computational neuroscience, a focus that brought him to Duke University and the Research Triangle Park (RTP). In the RTP, Matt worked briefly as a C++ engineer and pursued side projects in machine learning. He now works full time in machine learning and computer vision at Kitware.

Dr. Phillips has received numerous awards, grants and fellowships. In 2005, he received the James S. McDonnell Foundation 21st Century Postdoctoral Fellowship Award, which he declined. A year later, he received a Fight for Sight fellowship. Then, in 2008, he accepted an award to present at the Advances in Computational Motor Control meeting. In the same year, he began a Postdoctoral Individual National Research Service Award from the National Eye Institute of the National Institutes of Health for “Using saccadic adaptation to probe the coordinate system of parietal neurons.” Later, in 2012, he received a research grant from the American Academy of Neurology for “Assessing efficiency of learning the neurologic exam with a visual tracking device” with co-principal investigator James Noble, MD.

Dr. Phillips has also contributed to considerable applications development and open-source projects, including signal processing algorithms, neural waveform analysis, and computer vision based analytics.

Please refer to the following link to learn more about Dr. Matthew Phillips:

https://www.kitware.com/matthew-phillips/

The IEEE ENCS RA24 chapter appreciates Dr. Matthew Phillips's passion, drive, and highly impressive efforts, and wishes him the very best in his career and life.



  Date and Time

  • Date: 05 Feb 2018
  • Time: 07:00 PM to 08:30 PM
  • All times are (GMT-05:00) US/Eastern

  Location

  • 911 Partners Way
  • Raleigh, North Carolina, United States 27606
  • Building: Engineering Building 1 (EB1)
  • Room Number: 1007

  Hosts

  • Mahesh Balasubramaniam (mbalasu@ncsu.edu)

  Registration

  • Starts 02 February 2018 12:00 AM
  • Ends 05 February 2018 09:00 PM
  • All times are (GMT-05:00) US/Eastern
  • No Admission Charge


Agenda

6:20-7:00pm Networking with pizza and soda

The times below are approximate and are given just as a guideline:

7:00-7:10pm News and announcements

7:10-7:50pm Dr. Matthew Phillips's session on understanding objects and scenes with Generative Adversarial Networks

7:50-8:00pm Brief Q&A with IEEE students, hobbyists, professionals, and seniors

8:00-8:30pm Show-n-Tell (if interested, bring your projects to show to other members)