BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Europe/Istanbul
BEGIN:STANDARD
DTSTART:20160907T000000
TZOFFSETFROM:+0300
TZOFFSETTO:+0300
TZNAME:+03
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250719T080849Z
UID:E7C47248-8EF1-4732-B222-9327352E85FA
DTSTART;TZID=Europe/Istanbul:20250716T190000
DTEND;TZID=Europe/Istanbul:20250716T203000
DESCRIPTION:This event is the second session of the Reinforcement
  Learning Talks Series organized by the IEEE Computer Society Türkiye
  Chapter. Building on the foundational concepts introduced in the
  first talk\, this session will focus on the practical application of
  reinforcement learning (RL) to real-world problems.\n\nReinforcement
  Learning has gained widespread adoption across domains such as
  robotics\, operations research\, industrial control\, and autonomous
  systems. This talk will walk participants through classic RL
  algorithms\, demonstrate step-by-step learning workflows\, and
  highlight both the strengths and limitations of current approaches
  in practical scenarios.\n\nAttendees will gain an understanding of
  how to frame problems effectively for RL agents\, and what it takes
  to design agents that can learn efficiently through interaction\,
  feedback\, and structured exploration.\n\nTopics to be covered
  include:\n- Problem formulation for real-world RL tasks\n- Overview
  of classic RL algorithms (e.g.\, Q-Learning\, Policy Gradient)\n-
  The structure of an RL learning loop in practice\n- Challenges in
  real-world RL applications and how to address them\n- Design tips
  for creating effective and robust learning agents\n\nThis session is
  ideal for students\, researchers\, and practitioners looking to
  apply reinforcement learning methods in real-world environments or
  to prepare for more advanced work in AI.\n\nSpeaker(s): Emir
  Arditi\n\nVirtual: https://events.vtools.ieee.org/m/492269
LOCATION:Virtual: https://events.vtools.ieee.org/m/492269
ORGANIZER:mailto:reyhan.aydogan@ozyegin.edu.tr
SEQUENCE:15
SUMMARY:IEEE CS Türkiye AI Talk Series - From Trial to Triumph: The Founda
 tions of RL in Practice
URL;VALUE=URI:https://events.vtools.ieee.org/m/492269
X-ALT-DESC;FMTTYPE=text/html:&lt;p&gt;This event is the
  &lt;strong&gt;second session of the Reinforcement Learning Talks
  Series&lt;/strong&gt; organized by the IEEE Computer Society
  T&amp;uuml\;rkiye Chapter. Building on the foundational concepts
  introduced in the first talk\, this session will focus on the
  &lt;strong&gt;practical application of reinforcement learning
  (RL)&lt;/strong&gt; to real-world
  problems.&lt;/p&gt;\n&lt;p&gt;Reinforcement Learning has gained
  widespread adoption across domains such as robotics\, operations
  research\, industrial control\, and autonomous systems. This talk
  will walk participants through &lt;strong&gt;classic RL
  algorithms&lt;/strong&gt;\, demonstrate &lt;strong&gt;step-by-step
  learning workflows&lt;/strong&gt;\, and highlight both the
  &lt;strong&gt;strengths and limitations&lt;/strong&gt; of current
  approaches in practical
  scenarios.&lt;/p&gt;\n&lt;p&gt;Attendees will gain an understanding
  of how to &lt;strong&gt;frame problems&lt;/strong&gt; effectively
  for RL agents\, and what it takes to &lt;strong&gt;design agents
  that can learn efficiently&lt;/strong&gt; through interaction\,
  feedback\, and structured
  exploration.&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Topics to be covered
  include:&lt;/strong&gt;&lt;/p&gt;\n&lt;ul&gt;\n&lt;li&gt;Problem
  formulation for real-world RL
  tasks&lt;/li&gt;\n&lt;li&gt;Overview of classic RL algorithms
  (e.g.\, Q-Learning\, Policy
  Gradient)&lt;/li&gt;\n&lt;li&gt;The structure of an RL learning
  loop in practice&lt;/li&gt;\n&lt;li&gt;Challenges in real-world RL
  applications and how to address
  them&lt;/li&gt;\n&lt;li&gt;Design tips for creating effective and
  robust learning
  agents&lt;/li&gt;\n&lt;/ul&gt;\n&lt;p&gt;This session is ideal for
  students\, researchers\, and practitioners looking to apply
  reinforcement learning methods in real-world environments or to
  prepare for more advanced work in AI.&lt;/p&gt;
END:VEVENT
END:VCALENDAR