BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
DTSTART:20220313T030000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20221106T010000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20220406T005417Z
UID:7281E09D-22F4-4731-925B-5B4B9F65E192
DTSTART;TZID=America/New_York:20220405T190000
DTEND;TZID=America/New_York:20220405T201500
DESCRIPTION:The Rockfish High Performance Computing Cluster went into pr
 oduction in March 2021. Initially funded by an MRI grant from the Natio
 nal Science Foundation (OAC1920103)\, with cost-sharing provided by th
 e Johns Hopkins Schools of Engineering and Arts and Sciences\, as wel
 l as the provost’s office\, it replaces the Bluecrab compute cluster th
 at went into production in July 2015. The MRI grant allows three insti
 tutions to use these resources: Johns Hopkins University\, Morgan Stat
 e University\, and NSF’s XSEDE. A “condominium business model” was dev
 eloped so research groups (PIs) can add compute “condos” to the cluste
 r\, share resources\, and increase the computational capability of th
 e instrument. As of January 2022\, the system has over 35\,000 core
 s and 31 faculty-added condos\, with additional condos coming soon. A c
 ommitment from several JHU Deans has created a sustainability plan to f
 und multi-million-dollar ‘technology refreshes’ on an ongoing basis. A f
 aculty-staff oversight committee ensures a close relationship between t
 he system administrators\, vice Deans\, and faculty users. The net resu
 lt is a successful adventure in shared governance that provides impres
 sive petascale computing resources\, shared in a manner that ensures l
 oad balancing and hence optimal usage\, with a personal touch not poss
 ible at larger but more impersonal national supercomputer facilities.\n
 \nSpeaker(s): Paulette Clancy\, Jaime Combariza\n\nAgenda:\n7:00 pm - I
 ntroductions\n\n7:05 pm - Presentation Begins\n\n8:05 pm - Open Q&A\n\n
 8:15 pm - Close\n\nVirtual: https://events.vtools.ieee.org/m/308846
LOCATION:Virtual: https://events.vtools.ieee.org/m/308846
ORGANIZER:mailto:schulman@ieee.org
SEQUENCE:10
SUMMARY:Advanced Research Computing at Hopkins: An Adventure in Shared Gove
 rnance
URL;VALUE=URI:https://events.vtools.ieee.org/m/308846
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><div class="page" titl
 e="Page 1">\n<div class="layoutArea">\n<div class="column">\n<p><span st
 yle="font-size: 12.000000pt\; font-family: 'Calibri'\;">The <em>Rockfis
 h</em> High Performance Computing Cluster went into production in Marc
 h 2021.&nbsp\; Initially funded by an MRI grant from the National Scien
 ce Foundation (OAC1920103)\, with cost-sharing provided by the Johns H
 opkins Schools of Engineering and Arts and Sciences\, as well as the p
 rovost&rsquo\;s office\, it replaces the <em>Bluecrab</em> compute clus
 ter that went into production in July 2015.&nbsp\; The MRI grant allow
 s three institutions to use these resources: Johns Hopkins University
 \, Morgan State University\, and NSF&rsquo\;s XSEDE. A &ldquo\;condomi
 nium business model&rdquo\; was developed so research groups (PIs) ca
 n add compute &ldquo\;condos&rdquo\; to the cluster\, share resources
 \, and increase the computational capability of the instrument. As o
 f January 2022\, the system has over 35\,000 cores and 31 faculty-add
 ed condos\, with additional condos coming soon. A commitment from sev
 eral JHU Deans has created a sustainability plan to fund multi-millio
 n-dollar &lsquo\;technology refreshes&rsquo\; on an ongoing basis. A f
 aculty-staff oversight committee ensures a close relationship betwee
 n the system administrators\, vice Deans\, and faculty users. The ne
 t result is a successful adventure in shared governance that provide
 s impressive petascale computing resources\, shared in a manner that e
 nsures load balancing and hence optimal usage\, with a personal touch n
 ot possible at larger but more impersonal national supercomputer facil
 ities.</span></p>\n</div>\n</div>\n</div><br /><br />Agenda: <br /><p>7
 :00 pm - Introductions</p>\n<p>7:05 pm - Presentation Begins</p>\n<p>8
 :05 pm - Open Q&amp\;A</p>\n<p>8:15 pm - Close</p>
END:VEVENT
END:VCALENDAR