Reinforcement Learning for Nonlinear Process Control and Optimization Under Uncertainties

#reinforcementlearning #STEM

Abstract: In this presentation, I will provide an overview of reinforcement learning (RL) and focus on the value function approximation-based method for nonlinear process control and real-time optimization (RTO) under uncertainties. In the first part, traditional model predictive control (MPC) and RL are integrated to achieve offset-free control with fast online computations. In the second part, a data-driven RL scheme is specifically designed for determining the optimal operating conditions in large-scale, risk-sensitive systems, such as refineries, where operational risk must be minimized and revenue maximized. Several benchmark problems are solved to demonstrate the effectiveness of the proposed RL approaches.
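To give a flavor of the value function approximation approach the abstract refers to, the sketch below runs fitted Q-iteration with a quadratic feature basis on a toy 1-D nonlinear process. This is only an illustrative example of the general technique, not the speaker's method; the plant model, cost, and feature basis are all hypothetical choices made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x, u):
    """Hypothetical nonlinear plant model for illustration only."""
    return 0.9 * x + u - 0.05 * x**2

def cost(x, u):
    """Quadratic stage cost: drive the state to the origin."""
    return x**2 + 0.1 * u**2

actions = np.linspace(-1.0, 1.0, 21)       # discretized control inputs
states = rng.uniform(-2.0, 2.0, size=500)  # sampled operating points
gamma = 0.95                               # discount factor

def features(x, u):
    """Quadratic basis for Q(x, u): [1, x, u, x^2, u^2, x*u]."""
    return np.array([np.ones_like(x), x, u, x**2, u**2, x * u]).T

w = np.zeros(6)  # weights of the value function approximator

for _ in range(50):  # fitted Q-iteration sweeps
    X, U = np.meshgrid(states, actions, indexing="ij")
    Xn = step(X, U)
    # Bellman targets: stage cost + discounted minimum over next actions
    Qn = np.stack(
        [features(Xn.ravel(), np.full(Xn.size, a)) @ w for a in actions],
        axis=1,
    )
    targets = cost(X, U).ravel() + gamma * Qn.min(axis=1)
    # Least-squares projection of the targets onto the feature basis
    Phi = features(X.ravel(), U.ravel())
    w, *_ = np.linalg.lstsq(Phi, targets, rcond=None)

def policy(x):
    """Greedy control input with respect to the fitted Q-function."""
    q = [features(np.array([x]), np.array([a])) @ w for a in actions]
    return actions[np.argmin(q)]
```

Because the approximator is fit globally from sampled transitions rather than solved online, evaluating the greedy policy is cheap, which is the kind of fast online computation the abstract alludes to.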



  Date and Time


  • Date: 26 May 2023
  • Time: 01:00 PM to 02:00 PM
  • All times are (UTC-08:00) Pacific Time (US & Canada)

https://csulb.zoom.us/j/84563795616

Meeting ID: 845 6379 5616



  Speakers

Yu Yang


Biography:

Yu Yang received his Ph.D. in Chemical Engineering from the University of Alberta in 2011 and worked as a postdoctoral scholar at the University of Alberta and the Massachusetts Institute of Technology from 2011 to 2015. His research focuses on applying process control and optimization theory to large-scale, complex chemical processes and systems. His research interests include experimental and computational approaches to the design, simulation, modeling, control, and optimization of energy and process systems. He is currently an assistant professor in the Department of Chemical Engineering at California State University, Long Beach.

Address: United States