BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Canada/Eastern
BEGIN:DAYLIGHT
DTSTART:20210314T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20211107T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20220328T183956Z
UID:E424C3AA-87A4-4D30-9E99-BDEAC9BEC900
DTSTART;TZID=Canada/Eastern:20211102T170000
DTEND;TZID=Canada/Eastern:20211102T183000
DESCRIPTION:Prerequisites:\nIf you know zero math and zero machine lear
 ning\, then this talk is for you. Jeff will do his best to explain fai
 rly hard mathematics to you. If you know a bunch of math and/or a bunc
 h of machine learning\, then these talks are for you. Jeff tries to sp
 in the ideas in new ways.\nAbstract:\nComputers can now drive cars and
  find cancer in x-rays. For better or worse\, this will change the wor
 ld (and the job market). Strangely\, designing these algorithms is not
  done by telling the computer what to do or even by understanding what
  the computer does. The computers learn for themselves from lots and l
 ots of data and lots of trial and error. This learning process is more
  analogous to how brains evolved over billions of years. The machine i
 tself is a neural network\, which models both the brain and silicon an
 d-or-not circuits\, both of which are great for computing. The only di
 fference with neural networks is that what they compute is determined 
 by weights\, and small changes in these weights give small changes in 
 the result of the computation. The process for finding an optimal sett
 ing of these weights is analogous to finding the bottom of a valley. "
 Gradient Descent" achieves this by using the local slope of the hill (
 derivatives) to direct the travel down the hill\, i.e.\, small changes
  to the weights.\n\nSpeaker(s): Prof. Jeff Edmonds\n\nVirtual: https:/
 /events.vtools.ieee.org/m/287252
LOCATION:Virtual: https://events.vtools.ieee.org/m/287252
ORGANIZER:mailto:ayda.naserialiabadi@ryerson.ca
SEQUENCE:2
SUMMARY:Intro to the Mathematics in Machine Learning
URL;VALUE=URI:https://events.vtools.ieee.org/m/287252
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p><strong>Prerequisit
 es:</strong><br />If you know zero math and zero machine learning\, t
 hen this talk is for you. Jeff will do his best to explain fairly har
 d mathematics to you. If you know a bunch of math and/or a bunch of m
 achine learning\, then these talks are for you. Jeff tries to spin th
 e ideas in new ways.<br /><strong>Abstract:</strong><br />Computers c
 an now drive cars and find cancer in x-rays. For better or worse\, th
 is will change the world (and the job market). Strangely\, designing 
 these algorithms is not done by telling the computer what to do or ev
 en by understanding what the computer does. The computers learn for t
 hemselves from lots and lots of data and lots of trial and error. Thi
 s learning process is more analogous to how brains evolved over billi
 ons of years. The machine itself is a neural network\, which models b
 oth the brain and silicon and-or-not circuits\, both of which are gre
 at for computing. The only difference with neural networks is that wh
 at they compute is determined by weights\, and small changes in these
  weights give small changes in the result of the computation. The pro
 cess for finding an optimal setting of these weights is analogous to 
 finding the bottom of a valley. "Gradient Descent" achieves this by u
 sing the local slope of the hill (derivatives) to direct the travel d
 own the hill\, i.e.\, small changes to the weights.&nbsp\;</p>
END:VEVENT
END:VCALENDAR