BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/Chicago
BEGIN:DAYLIGHT
DTSTART:20250309T030000
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:CDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20251102T010000
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:CST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20251021T014010Z
UID:A5A5FA01-B217-46F4-A721-6A5DCCCFF65C
DTSTART;TZID=America/Chicago:20251020T193000
DTEND;TZID=America/Chicago:20251020T203000
DESCRIPTION:Abstract:\nWith advancements in AI\, new gaze estimation metho
 ds are exceeding state-of-the-art (SOTA) benchmarks\, but their real-wor
 ld application reveals a gap with commercial eye-tracking solutions. Fac
 tors like model size\, inference time\, and privacy often go unaddresse
 d. Meanwhile\, webcam-based eye-tracking methods lack sufficient accurac
 y\, particularly due to head movement. To tackle these issues\, we intro
 duce WebEyeTrack\, a framework that integrates lightweight SOTA gaze est
 imation models directly in the browser. It incorporates model-based hea
 d pose estimation and on-device few-shot learning with as few as nine ca
 libration samples (k ≤ 9). WebEyeTrack adapts to new users\, achieving S
 OTA performance with an error margin of 2.32 cm on GazeCapture and real-
 time inference speeds of 2.4 milliseconds on an iPhone 14. Our open-sour
 ce code is available at https://github.com/RedForestAi/WebEyeTrack\n\nBi
 o: Dr. Yike Zhang is a computer scientist with a research focus on LLMs
 \, computer vision\, 6D pose estimation\, and surgical navigation system
 s. She recently completed her Ph.D. in Computer Science at Vanderbilt Un
 iversity\, where she developed a deep-learning-based navigation system f
 or image-guided cochlear implant surgery. Her work bridges machine learn
 ing and medical image processing\, aiming to improve surgical accuracy\,
  safety\, and future clinical translation with real-time image analysis
  and intraoperative navigation tools. You may find more information abou
 t her at https://yikezhang.me/.\n\nRoom: BSIC 203 - Data Science and Mac
 hine Learning Lab\, Bldg: Blank Sheppard Innovation Center (next to buil
 ding #22 on the map)\, One Camino Santa Maria\, St. Mary's University of
  San Antonio\, San Antonio\, Texas\, United States\, 78228
LOCATION:Room: BSIC 203 - Data Science and Machine Learning Lab\, Bldg: Bl
 ank Sheppard Innovation Center (next to building #22 on the map)\, One Ca
 mino Santa Maria\, St. Mary's University of San Antonio\, San Antonio\, T
 exas\, United States\, 78228
ORGANIZER:mailto:wluo@stmarytx.edu
SEQUENCE:80
SUMMARY:WEBEYETRACK: Scalable Eye-Tracking for the Browser via On-Device Fe
 w-Shot Personalization
URL;VALUE=URI:https://events.vtools.ieee.org/m/499763
X-ALT-DESC;FMTTYPE=text/html:<div><strong>Abstract:</strong> With advance
 ments in AI\, new gaze estimation methods are exceeding state-of-the-art
  (SOTA) benchmarks\, but their real-world application reveals a gap with
  commercial eye-tracking solutions. Factors like model size\, inference
  time\, and privacy often go unaddressed. Meanwhile\, webcam-based eye-t
 racking methods lack sufficient accuracy\, particularly due to head move
 ment. To tackle these issues\, we introduce WebEyeTrack\, a framework th
 at integrates lightweight SOTA gaze estimation models directly in the br
 owser. It incorporates model-based head pose estimation and on-device fe
 w-shot learning with as few as nine calibration samples (k &le\; 9). Web
 EyeTrack adapts to new users\, achieving SOTA performance with an error
  margin of 2.32 cm on GazeCapture and real-time inference speeds of 2.4
  milliseconds on an iPhone 14. Our open-source code is available at <a h
 ref="https://github.com/RedForestAi/WebEyeTrack" target="_blank" rel="no
 opener">https://github.com/RedForestAi/WebEyeTrack</a></div>\n<div>&nbsp
 \;</div>\n<div><strong>Bio:</strong> Dr. Yike Zhang is a computer scient
 ist with a research focus on LLMs\, computer vision\, 6D pose estimation
 \, and surgical navigation systems. She recently completed her Ph.D. in
  Computer Science at Vanderbilt University\, where she developed a deep-
 learning-based navigation system for image-guided cochlear implant surge
 ry. Her work bridges machine learning and medical image processing\, aim
 ing to improve surgical accuracy\, safety\, and future clinical translat
 ion with real-time image analysis and intraoperative navigation tools. Y
 ou may find more information about her at <a href="https://yikezhang.me/
 ">https://yikezhang.me/</a>.</div>
END:VEVENT
END:VCALENDAR