BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/Chicago
BEGIN:DAYLIGHT
DTSTART:20250309T020000
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:CDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20241103T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:CST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20241115T025813Z
UID:737718C0-E6D4-419C-A24B-4CC98948F801
DTSTART;TZID=America/Chicago:20241106T160000
DTEND;TZID=America/Chicago:20241106T170000
DESCRIPTION:Abstract: Reinforcement Learning (RL) has emerged as a promisin
 g paradigm for addressing sequential optimization problems when the dynami
 cs of the underlying systems are unknown. The primary objective in RL is t
 o learn a policy that maximizes expected future rewards\, or value functio
 ns. This is typically achieved through learning the optimal value function
 s or\, alternatively\, the optimal policy. The performance of RL algorithm
 s is often limited by the choice of models used\, which strongly depends o
 n the specific problem. However\, a common feature of many RL problems is 
 that the optimal value functions and policies tend to be low rank. Motivat
 ed by this observation\, this talk explores low-rank modeling as a general
  tool for RL problems. Specifically\, we demonstrate how low-rank matrix a
 nd tensor models can approximate both value functions and policies. Additi
 onally\, we show how low-rank models can be applied to alternative setups\
 , such as multi-task RL. This approach results in parsimonious algorithms 
 that balance the rapid convergence of simple linear models with the high r
 eward potential of neural networks.\n\nCo-sponsored by: Rice University EC
 E Department Seminar\n\nSpeaker(s): Antonio G. Marques\n\nAgenda: \nPresen
 tation from 4:00 to 5:00 pm CST\n\nRoom: Room 1064\, Bldg: Duncan Hall\,
  Rice University\, 6100 Main Street\, Houston\, Texas\, United States\,
  77005
LOCATION:Room: Room 1064\, Bldg: Duncan Hall\, Rice University\, 6100 Main 
 Street\, Houston\, Texas\, United States\, 77005
ORGANIZER:mailto:cavallar@rice.edu
SEQUENCE:29
SUMMARY:Tensor low-rank models for reinforcement learning
URL;VALUE=URI:https://events.vtools.ieee.org/m/444343
X-ALT-DESC;FMTTYPE=text/html:Description: <p>Abstract: Reinforcement
  Learning (RL) has emerged as a promising paradigm for addressing
  sequential optimization problems when the dynamics of the underlying
  systems are unknown. The primary objective in RL is to learn a policy
  that maximizes expected future rewards\, or value functions. This is
  typically achieved through learning the optimal value functions or\,
  alternatively\, the optimal policy. The performance of RL algorithms is
  often limited by the choice of models used\, which strongly depends on
  the specific problem. However\, a common feature of many RL problems is
  that the optimal value functions and policies tend to be low rank.
  Motivated by this observation\, this talk explores low-rank modeling as
  a general tool for RL problems. Specifically\, we demonstrate how
  low-rank matrix and tensor models can approximate both value functions
  and policies. Additionally\, we show how low-rank models can be applied
  to alternative setups\, such as multi-task RL. This approach results in
  parsimonious algorithms that balance the rapid convergence of simple
  linear models with the high reward potential of neural networks.</p>
 <p>Agenda: Presentation from 4:00 to 5:00 pm CST</p>
END:VEVENT
END:VCALENDAR