Lifelong Learning – Temporal Aspects and Theoretical Foundations

#STEM


Speaker: Hava Siegelmann, PhD
Provost Professor of Computer Science,
Director of the Biologically Inspired
Neural and Dynamical Systems (BINDS) Laboratory,
University of Massachusetts Amherst

Time/date: Thursday, March 9th
                    12:00-1:00 pm EST
                    George Mason Fairfax Campus:
                    Horizon Hall, Rm 2008
                    Live streaming to SciTech Campus: KJH 258 

Abstract
Lifelong Learning is the cutting edge of artificial intelligence, encompassing
computational methods that allow systems to learn at runtime and to apply that
learning in new, unanticipated situations. Until recently, this sort of
computation was found exclusively in nature; Lifelong Learning therefore looks
to nature for its underlying principles and mechanisms and transfers them to
this new technology. Yet state-of-the-art Lifelong Learning, like classical
learning, is limited in its accuracy for temporal prediction from short and
incomplete measurements. This is where our new technology comes in. It is based
on a new type of neural network in which, much like in the brain, the
connections between neurons are no longer scalar numbers but temporal
functions. This gives the networks unparalleled capacity, strong temporal
accuracy, and the ability to remain effective even when most measurements are
lost. Interestingly, our temporally changing network, while more capable, is
smaller and consumes significantly less power. A version for reinforcement
learning and control is under development. We will also introduce the
forward-propagation algorithm pioneered by my lab. Computational foundations
are required to enable new forms of computation to evolve. While Turing
computation has served us very well until now, it does not describe machines
that change continually. Indeed, immediately after introducing the logical
universal model, Turing spent much of his research effort searching for a new
model that would more closely simulate the brain and learn as it does.
Super-Turing computation may be an answer to Turing's quest, and since every
learning machine builds on it, it has become the foundational theory of
Lifelong Learning, toward a stronger AI.

Biography
Dr. Siegelmann is a professor of Computer Science, Core Member of the
Neuroscience and Behavior Program, and director of the Biologically Inspired
Neural and Dynamical Systems (BINDS) Laboratory. Siegelmann recently
completed her term as a DARPA program manager. "L2M," one of her key
initiatives, inaugurated "third-wave AI," pushing major design innovation and a
dramatic increase in AI capability. "GARD" is leading to unique advancements in
assuring AI robustness against attack. "CSL" is introducing powerful methods of
combined learning and information sharing across AI platforms without revealing
private data. Other programs include advanced biomedical applications.
Siegelmann conducts highly interdisciplinary research in next-generation
machine learning, neural networks, intelligent machine-human collaboration, and
computational studies of the brain, with applications to AI, data science, and
industrial, government, and biomedical uses. Among her contributions are
the Support Vector Clustering algorithm, delineating jet-lag mechanisms,
identifying brain structure that leads to abstract thoughts, and Super-Turing
theory which has become the backbone of the latest generation of biologically
inspired neural networks and lifelong learning machines. Dr. Siegelmann is a
leader in increasing awareness of ethical AI via the IEEE, INNS, and
international meetings, and is particularly active in supporting minorities and
women in STEM nationally and internationally, currently serving as Chair of the
Women's Chapter of the International Neural Network Society. Siegelmann has been a
visiting professor at MIT, Harvard University, the Weizmann Institute, ETH, the
Salk Institute, the Mathematical Sciences Research Institute at Berkeley, and
the Newton Institute at Cambridge University. She is the former PM for L2M, DARPA's largest
advanced AI initiative, as well as other major DARPA programs. She was the
recipient of the Alon Fellowship of Excellence, the NSF-NIH Obama Presidential
BRAIN Initiative award, and the Donald O. Hebb Award of the International
Neural Network Society for "contribution to biological learning"; she was named
IEEE Fellow and Distinguished Lecturer of the IEEE Computational Intelligence
Society, as well as INNS Fellow. She received the DARPA Meritorious Public Service
award.  
