BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
DTSTART:20190310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20191103T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20190926T100422Z
UID:59137978-D876-4C77-BD9C-C81D093BE050
DTSTART;TZID=America/New_York:20190925T183000
DTEND;TZID=America/New_York:20190925T203000
DESCRIPTION:We are in a new golden age of artificial intelligence research\
 , eclipsing the postwar zenith and driven by a fundamentally different con
 ceptual approach. It’s powered by big data\, new storage and processing ca
 pacities\, and pattern-discrimination systems that learn from those data\,
  constructing rules for their behavior as they go. This isn’t just in th
 e lab. These machine learning systems are implemented by all the big tech 
 companies in everything from ad auctions to photo-tagging\, and are supple
 menting or replacing human decision making in a host of more mundane\, but
  possibly more consequential\, areas like loans\, bail\, policing\, and hi
 ring. And we’ve already seen plenty of dangerous failures: Computer visi
 on systems that don’t recognize black faces or classify them as gorillas
 \, risk assessment tools systematically rating black arrestees as riskier 
 than white ones\, hiring algorithms that learned to reject women. These is
 sues force a fundamental reconsideration of core democratic values—not j
 ust in what decisions are made\, but how they are reached\, and with what
  sort of accountability. This talk will review fundamental issues of fai
 rness an
 d equity in machine learning systems and demonstrate how they play out in 
 the specific domain of policing. Finally\, we will discuss emergent approa
 ches for designing\, auditing\, and regulating these systems\, and what we
  can learn both from other fields that have faced similar conflicts\, and 
 from activists on the ground.\n\nSpeaker(s): Dr Daniel Greene\, \n\nAgenda
 : \n6:30 PM to 7:00 PM - Refreshments and Networking\n\n7:00 PM - 7:05 PM 
 - Chapter announcements and Speaker Introduction\n\n7:05 PM - Talk\n\nRoom
 : Large Meeting Room (2nd Floor)\, Bldg: Tenley-Friendship Neighborhood Li
 brary\, 4450 Wisconsin Ave NW\, Metro Station: Tenleytown-AU\, Washington\
 , District of Columbia\, United States
LOCATION:Room: Large Meeting Room (2nd Floor)\, Bldg: Tenley-Friendship Nei
 ghborhood Library\, 4450 Wisconsin Ave NW\, Metro Station: Tenleytown-AU\,
  Washington\, District of Columbia\, United States
ORGANIZER:mailto:murtyp@ieee.org
SEQUENCE:4
SUMMARY:Garbage In\, Garbage Out: The Predictable and Unpredictable Challen
 ges of Regulating Machine Learning Systems
URL;VALUE=URI:https://events.vtools.ieee.org/m/200590
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p style="margin: 0px 0p
 x 10.66px\;"><span style="font-family: Calibri\;">We are in a new golden
  age of artificial intelligence research\, eclipsing the postwar zenith
  and driven by a fundamentally different conceptual approach. It&#39;s p
 owered by big data\, new storage and processing capacities\, and pattern
 -discrimination systems that learn from those data\, constructing rules
  for their behavior as they go. This isn&rsquo\;t just in the lab. These
  machine learning systems are implemented by all the big tech companies
  in everything from ad auctions to photo-tagging\, and are supplementing
  or replacing human decision making in a host of more mundane\, but poss
 ibly more consequential\, areas like loans\, bail\, policing\, and hirin
 g. And we&rsquo\;ve already seen plenty of dangerous failures: Computer
  vision systems that don&rsquo\;t recognize black faces or classify them
  as gorillas\, risk assessment tools systematically rating black arreste
 es as riskier than white ones\, hiring algorithms that learned to reject
  women. These issues force a fundamental reconsideration of core democra
 tic values&mdash\;not just in what decisions are made\, but how they are
  reached\, and with what sort of accountability. This talk will review f
 undamental issues of fairness and equity in machine learning systems and
  demonstrate how they play out in the specific domain of policing. Final
 ly\, we will discuss emergent approaches for designing\, auditing\, and
  regulating these systems\, and what we can learn both from other fields
  that have faced similar conflicts\, and from activists on the ground.<s
 pan style="margin: 0px\;">&nbsp\; </span></span></p><br /><br />Agenda:
  <br /><p>6:30 PM to 7:00 PM - Refreshments and Networking</p>\n<p>7:00
  PM - 7:05 PM - Chapter announcements and Speaker Introduction</p>\n<p>7
 :05 PM - Talk</p>
END:VEVENT
END:VCALENDAR

