BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Canada/Eastern
BEGIN:DAYLIGHT
DTSTART:20220313T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20211107T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20220328T183812Z
UID:202645D1-AB52-422B-991B-0870F8292AD1
DTSTART;TZID=Canada/Eastern:20211116T170000
DTEND;TZID=Canada/Eastern:20211116T183000
DESCRIPTION:Prerequisites:\nYou do not need to have attended the earlier ta
 lks. If you know zero math and zero machine learning\, then this talk is f
 or you. Jeff will do his best to explain fairly hard mathematics to you. I
 f you know a bunch of math and/or a bunch of machine learning\, then these
  talks are for you. Jeff tries to spin the ideas in new ways.\nLonger Abst
 ract:\nThere is some theory. If a machine is found that gives the correct
  answers on the randomly chosen training data without simply memorizing\,
  then we can prove that with high probability this same machine will also
  work well on never-before-seen instances drawn from the same distribution
 . The easy proof requires D>m\, where m is the number of bits needed to de
 scribe your learned machine and D is the number of training data items. A
  much harder proof (which we likely won't cover) requires only D>VC\, wher
 e VC is the VC-dimension (Vapnik–Chervonenkis) of your machine. The seco
 nd requirement is easier to meet because VC<m.\n\nSpeaker(s): Prof. Jeff E
 dmonds\n\nVirtual: https://events.vtools.ieee.org/m/287720
LOCATION:Virtual: https://events.vtools.ieee.org/m/287720
ORGANIZER:mailto:ayda.naserialiabadi@ryerson.ca
SEQUENCE:1
SUMMARY:Generalizing from Training Data
URL;VALUE=URI:https://events.vtools.ieee.org/m/287720
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p><strong>Prerequisites:
 </strong><br />You do not need to have attended the earlier talks. If you
  know zero math and zero machine learning\, then this talk is for you. Jef
 f will do his best to explain fairly hard mathematics to you. If you know
  a bunch of math and/or a bunch of machine learning\, then these talks are
  for you. Jeff tries to spin the ideas in new ways.<br /><strong>Longer Ab
 stract:</strong><br />There is some theory. If a machine is found that giv
 es the correct answers on the randomly chosen training data without simply
  memorizing\, then we can prove that with high probability this same machi
 ne will also work well on never-before-seen instances drawn from the same
  distribution. The easy proof requires D&gt\;m\, where m is the number of
  bits needed to describe your learned machine and D is the number of train
 ing data items. A much harder proof (which we likely won't cover) requires
  only D&gt\;VC\, where VC is the VC-dimension (Vapnik&ndash\;Chervonenkis)
  of your machine. The second requirement is easier to meet because
  VC&lt\;m.</p>
END:VEVENT
END:VCALENDAR