BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Canada/Eastern
BEGIN:DAYLIGHT
DTSTART:20220313T030000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20211107T010000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20220328T183909Z
UID:5C39333A-90E8-4F16-AEB6-1528E2D52D5A
DTSTART;TZID=Canada/Eastern:20211109T170000
DTEND;TZID=Canada/Eastern:20211109T183000
DESCRIPTION:--- Prerequisites ---\n\nYou do not need to have attended the e
 arlier talks. If you know zero math and zero machine learning\, then thi
 s talk is for you. Jeff will do his best to explain fairly hard mathemat
 ics to you. If you know a bunch of math and/or a bunch of machine learni
 ng\, then these talks are for you. Jeff tries to spin the ideas in new w
 ays.\n\n--- Longer Abstract ---\n\nAn input data item\, e.g. an image of a c
 at\, is just a large tuple of real values. As such\, it can be thought of a
 s a point in some high-dimensional vector space. Whether the image is of a c
 at or a dog partitions this vector space into regions. Classifying your ima
 ge amounts to knowing which region the corresponding point is in. The dot pr
 oduct of two vectors tells us: whether our data scaled by coefficients meet
 s a threshold\; how much two lists of properties correlate\; the cosine of t
 he angle between two directions\; and which side of a hyperplane your poin
 t is on. A novice reading a machine learning paper might not realize that m
 any of the symbols are not real numbers but matrices. Hence the product of t
 wo such symbols is a matrix multiplication. Computing the output of your cur
 rent neural network on each of your training data items amounts to an altern
 ation of such matrix multiplications and of non-linear rounding steps that p
 ush your numbers closer to being 0-1 valued. Similarly\, back propagation co
 mputes the direction of steepest descent using a similar alternation\, excep
 t backwards. The matrix way of thinking about a neural network also helps u
 s understand how a neural network effectively performs a sequence of linea
 r and non-linear transformations\, changing the representation of our input u
 ntil the representation is one for which the answer can be determined based o
 n which side of a hyperplane your point is on. Though people say that it i
 s "obvious"\, it was never clear to me which direction to head to get the st
 eepest descent. Slides Covered: http://www.eecs.yorku.ca/~jeff/courses/mach
 ine-learning\n\n/Machine_Learning_Made_Easy.pptx\n\n- Linear Regression\, L
 inear Separator\n\n- Neural Networks\n\n- Abstract Representations\n\n- Mat
 rix Multiplication\n\n- Example\n\n- Vectors\n\n- Back Propagation\n\n- Sig
 moid\n\nSpeaker(s): Prof. Jeff Edmonds\n\nVirtual: https://events.vtools.ie
 ee.org/m/287446
LOCATION:Virtual: https://events.vtools.ieee.org/m/287446
ORGANIZER:ayda.naserialiabadi@ryerson.ca
SEQUENCE:1
SUMMARY:Algebra Review: How does one best think about all of these numbers?
URL;VALUE=URI:https://events.vtools.ieee.org/m/287446
X-ALT-DESC:Description: <br /><p>--- Prerequisites ---</p>\n<p>You do not n
 eed to have attended the earlier talks. If you know zero math and zero mach
 ine learning\, then this talk is for you. Jeff will do his best to explain f
 airly hard mathematics to you. If you know a bunch of math and/or a bunch o
 f machine learning\, then these talks are for you. Jeff tries to spin the i
 deas in new ways.</p>\n<p>--- Longer Abstract ---</p>\n<p>An input data ite
 m\, e.g. an image of a cat\, is just a large tuple of real values. As such
 \, it can be thought of as a point in some high-dimensional vector space. W
 hether the image is of a cat or a dog partitions this vector space into reg
 ions. Classifying your image amounts to knowing which region the correspond
 ing point is in. The dot product of two vectors tells us: whether our data s
 caled by coefficients meets a threshold\; how much two lists of properties c
 orrelate\; the cosine of the angle between two directions\; and which side o
 f a hyperplane your point is on. A novice reading a machine learning paper m
 ight not realize that many of the symbols are not real numbers but matrices
 . Hence the product of two such symbols is a matrix multiplication. Computin
 g the output of your current neural network on each of your training data i
 tems amounts to an alternation of such matrix multiplications and of non-li
 near rounding steps that push your numbers closer to being 0-1 valued. Simi
 larly\, back propagation computes the direction of steepest descent using a s
 imilar alternation\, except backwards. The matrix way of thinking about a n
 eural network also helps us understand how a neural network effectively per
 forms a sequence of linear and non-linear transformations\, changing the re
 presentation of our input until the representation is one for which the ans
 wer can be determined based on which side of a hyperplane your point is on. T
 hough people say that it is "obvious"\, it was never clear to me which dire
 ction to head to get the steepest descent. Slides Covered: http://www.eecs.
 yorku.ca/~jeff/courses/machine-learning</p>\n<p>/Machine_Learning_Made_Easy
 .pptx</p>\n<p>- Linear Regression\, Linear Separator</p>\n<p>- Neural Netwo
 rks</p>\n<p>- Abstract Representations</p>\n<p>- Matrix Multiplication</p>
 \n<p>- Example</p>\n<p>- Vectors</p>\n<p>- Back Propagation</p>\n<p>- Sigmo
 id</p>
END:VEVENT
END:VCALENDAR