BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/Chicago
BEGIN:DAYLIGHT
DTSTART:20240310T030000
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:CDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20241103T010000
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:CST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240716T184241Z
UID:4EB6C938-ED20-45B1-BE60-53839E5D18D2
DTSTART;TZID=America/Chicago:20240715T150000
DTEND;TZID=America/Chicago:20240715T160000
DESCRIPTION:Abstract:\n\nA model-free\, data-driven deep reinforcement le
 arning (DRL) framework can develop an intelligent controller that exploit
 s available information to optimally schedule an energy hub while minimiz
 ing energy costs and emissions. By posing the energy hub scheduling probl
 em in a multi-dimensional continuous state and action space\, this method
  can achieve more efficient operation by accounting for nonlinear physica
 l characteristics of the energy hub components\, such as the nonconvex fe
 asible operating regions of combined heat and power (CHP) units\, the val
 ve-point effects of power-only units\, and dynamic fuel cell efficiency. 
 Moreover\, to enable the deep deterministic policy gradient (DDPG) agent 
 to learn an optimal policy efficiently\, a hybrid forecasting model based
  on convolutional neural networks (CNNs) and bidirectional long short-ter
 m memories (BLSTMs) is developed to mitigate the risk associated with PV 
 power generation\, which can be highly intermittent\, particularly on clo
 udy days. The effectiveness and applicability of the proposed scheduling 
 framework in reducing energy costs and emissions while coping with uncert
 ainties are demonstrated by comparing it against conventional robust opti
 mization and stochastic programming approaches as well as state-of-the-ar
 t DRL methods in different case studies.\n\nSpeaker(s): Dr. Hussein Abdel
 tawab\n\nVirtual: https://events.vtools.ieee.org/m/424304
LOCATION:Virtual: https://events.vtools.ieee.org/m/424304
ORGANIZER:mailto:akankshawankhade@ieee.org
SEQUENCE:20
SUMMARY:Deep Reinforcement Learning Framework for Energy Management of Ener
 gy Hubs
URL;VALUE=URI:https://events.vtools.ieee.org/m/424304
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p class="MsoNormal"><stro
 ng><span style="color: rgb(0\, 0\, 0)\;">Abstract:</span></strong></p>\n<
 p class="MsoNormal">A model-free\, data-driven deep reinforcement learnin
 g (DRL) framework can develop an intelligent controller that exploits ava
 ilable information to optimally schedule an energy hub while minimizing e
 nergy costs and emissions. By posing the energy hub scheduling problem in
  a multi-dimensional continuous state and action space\, this method can 
 achieve more efficient operation by accounting for nonlinear physical cha
 racteristics of the energy hub components\, such as the nonconvex feasibl
 e operating regions of combined heat and power (CHP) units\, the valve-po
 int effects of power-only units\, and dynamic fuel cell efficiency. Moreo
 ver\, to enable the deep deterministic policy gradient (DDPG) agent to le
 arn an optimal policy efficiently\, a hybrid forecasting model based on c
 onvolutional neural networks (CNNs) and bidirectional long short-term mem
 ories (BLSTMs) is developed to mitigate the risk associated with PV power
  generation\, which can be highly intermittent\, particularly on cloudy d
 ays. The effectiveness and applicability of the proposed scheduling frame
 work in reducing energy costs and emissions while coping with uncertainti
 es are demonstrated by comparing it against conventional robust optimizat
 ion and stochastic programming approaches as well as state-of-the-art DRL
  methods in different case studies.</p>
END:VEVENT
END:VCALENDAR