BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
DTSTART:20230312T030000
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:PDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20231105T010000
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:PST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230526T235839Z
UID:A7D76979-9A33-4B59-A94E-4B38A29A765F
DTSTART;TZID=America/Los_Angeles:20230526T130000
DTEND;TZID=America/Los_Angeles:20230526T140000
DESCRIPTION:Abstract: In this presentation\, I will provide an overview
  of reinforcement learning (RL) and focus on the value function approxi
 mation-based method for nonlinear process control and real-time optimi
 zation (RTO) under uncertainties. In the first part\, traditional mode
 l predictive control (MPC) and RL are integrated to achieve offset-fre
 e control with fast online computations. In the second part\, a data-d
 riven RL scheme is designed for determining the optimal operating cond
 itions in large-scale and risk-sensitive systems\, such as refineries\
 , where operational risks should be minimized and revenue maximized. S
 everal benchmark problems are solved to demonstrate the effectiveness
  of the proposed RL approaches.\n\nSpeaker(s): Yu Yang\, \n\nVirtual:
  https://events.vtools.ieee.org/m/361972
LOCATION:Virtual: https://events.vtools.ieee.org/m/361972
ORGANIZER:mailto:henry.yeh@csulb.edu
SEQUENCE:1
SUMMARY:Reinforcement Learning for Nonlinear Process Control and Optimiza
 tion Under Uncertainties
URL;VALUE=URI:https://events.vtools.ieee.org/m/361972
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><div class="page" title=
 "Page 1">\n<div class="layoutArea">\n<div class="column">\n<p>Abstract:
  In this presentation\, I will provide an overview of reinforcement lea
 rning (RL) and focus on the value function approximation-based method f
 or nonlinear process control and real-time optimization (RTO) under unc
 ertainties. In the first part\, traditional model predictive control (M
 PC) and RL are integrated to achieve offset-free control with fast onli
 ne computations. In the second part\, a data-driven RL scheme is design
 ed for determining the optimal operating conditions in large-scale and
  risk-sensitive systems\, such as refineries\, where operational risks
  should be minimized and revenue maximized. Several benchmark problems
  are solved to demonstrate the effectiveness of the proposed RL approac
 hes.</p>\n</div>\n</div>\n</div>
END:VEVENT
END:VCALENDAR