BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
DTSTART:20250309T030000
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:PDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20251102T010000
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:PST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250615T170136Z
UID:F20C33A7-C9F0-48DF-8183-E62EC4B16761
DTSTART;TZID=America/Los_Angeles:20250613T130000
DTEND;TZID=America/Los_Angeles:20250613T170000
DESCRIPTION:1-3pm\, Tutorial on Circuits and Systems for Artificial Intelli
 gence: Architecture and algorithms focusing on spiking neural nets and tra
 nsformers\n\n3:30-5pm\, Distinguished Lecture and INC Computational Neuros
 cience Seminar: "Brain-Inspired Low-Power Language Models"\nThis talk unve
 ils the transformative potential of achieving sub-10-watt language models 
 (LMs) by drawing inspiration from the brain’s energy efficiency. We intr
 oduce a groundbreaking approach to language model design\, featuring a mat
 rix-multiplication-free architecture that scales to billions of parameters
 . To validate this paradigm\, we developed custom hardware solutions (FPGA
 ) as well as leveraged pre-existing neuromorphic hardware (Intel Loihi 2)\
 , optimized for lightweight operations that outperform traditional GPU cap
 abilities. Our system achieves human-surpassing throughput on billion-para
 meter models at just 13 watts\, setting a new benchmark for energy-efficie
 nt AI. This work not only redefines what's possible for low-power LLMs but
  also highlights the critical operations future accelerators must prioriti
 ze to enable the next wave of sustainable AI innovation.\n\nCo-sponsored b
 y: UCSD EMBS/CAS Student Chapter\n\nSpeaker(s): Dr. Jason Eshraghian\, \n\
 nAgenda: \n1-3pm\, Tutorial on Circuits and Systems for Artificial Intelli
 gence: Architecture and algorithms focusing on spiking neural nets and tra
 nsformers\n3:30-5pm\, Distinguished Lecture and INC Computational Neurosci
 ence Seminar: "Brain-Inspired Low-Power Language Models"\n\nBldg: Halicio
 ğlu Data Science Institute (HDSI)\, Multipurpose Room 123\, UC San Diego\
 , San Diego\, California\, United States\, Virtual: https://events.vtools.
 ieee.org/m/487924
LOCATION:Bldg: Halicioğlu Data Science Institute (HDSI)\, Multipurpose Roo
 m 123\, UC San Diego\, San Diego\, California\, United States\, Virtual: h
 ttps://events.vtools.ieee.org/m/487924
ORGANIZER:mailto:gcauwenberghs@ucsd.edu
SEQUENCE:17
SUMMARY:Professor Jason Eshraghian\, UC Santa Cruz\, and IEEE CASS/EMBS Dis
 tinguished Lectures
URL;VALUE=URI:https://events.vtools.ieee.org/m/487924
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><div>1-3pm\, Tutori
 al on Circuits and Systems for Artificial Intelligence: Architectu
 re and algorithms focusing on spiking neural nets and transformers<
 /div>\n<div>&nbsp\;</div>\n<div>3:30-5pm\, Distinguished Lecture an
 d INC Computational Neuroscience Seminar: "Brain-Inspired Low-Pow
 er Language Models"&nbsp\;</div>\n<div>This talk unveils the transf
 ormative potential of achieving sub-10-watt language models (LMs) b
 y drawing inspiration from the brain&rsquo\;s energy efficiency. We i
 ntroduce a groundbreaking approach to language model design\, featu
 ring a matrix-multiplication-free architecture that scales to billi
 ons of parameters. To validate this paradigm\, we developed custom h
 ardware solutions (FPGA) as well as leveraged pre-existing neuromor
 phic hardware (Intel Loihi 2)\, optimized for lightweight operation
 s that outperform traditional GPU capabilities. Our system achieve
 s human-surpassing throughput on billion-parameter models at just 1
 3 watts\, setting a new benchmark for energy-efficient AI. This wor
 k not only redefines what's possible for low-power LLMs but also hi
 ghlights the critical operations future accelerators must prioritiz
 e to enable the next wave of sustainable AI innovation.</div><br /><
 br />Agenda: <br /><div>1-3pm\, Tutorial on Circuits and Systems fo
 r Artificial Intelligence: Architecture and algorithms focusing on s
 piking neural nets and transformers</div>\n<div>3:30-5pm\, Distingu
 ished Lecture and INC Computational Neuroscience Seminar: "Brain-In
 spired Low-Power Language Models"&nbsp\;</div>
END:VEVENT
END:VCALENDAR