BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/Chicago
BEGIN:DAYLIGHT
DTSTART:20260308T030000
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:CDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20251102T010000
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:CST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20251118T025146Z
UID:A3F55D40-15F9-4055-A8C5-E8F47A43DA76
DTSTART;TZID=America/Chicago:20251117T193000
DTEND;TZID=America/Chicago:20251117T203000
DESCRIPTION:Abstract:\nReading assessments are essential for enhancing stud
 ents' comprehension\; yet\, many EdTech applications focus mainly on outc
 ome-based metrics\, providing limited insights into students' reading beh
 aviors and cognition. This study investigates the use of multimodal data 
 that includes eye-tracking data\, along with learning outcomes\, assessme
 nt content\, and teaching standards to derive meaningful reading insights
 . We employ unsupervised learning techniques to identify distinct reading
  behavior patterns. A large language model (LLM) then synthesizes the der
 ived information into actionable reports for educators\, streamlining the
  interpretation process. LLM experts and human educators evaluated these 
 reports for clarity\, accuracy\, relevance\, and pedagogical usefulness. 
 Our findings indicate that LLMs can effectively function as educational a
 nalysts\, turning diverse data into teacher-friendly insights that educat
 ors find beneficial. While automated insight generation shows promise\, h
 uman oversight remains crucial to ensure reliability and fairness. This r
 esearch advances human-centered AI in education\, connecting data-driven 
 analytics with practical classroom applications.\n\nBio: Dr. Eduardo Dava
 los is an Assistant Professor at Trinity University\, working at the inte
 rsection of AI in Education (AIED)\, Human-Computer Interaction (HCI)\, a
 nd Large Language Models (LLMs). His research develops privacy-preserving
 \, browser-native sensing and modeling techniques that translate into sca
 lable learning technologies. He earned his PhD in Computer Science from V
 anderbilt University\, where he and his team developed RedForest\, an e-l
 earning platform that incorporates AI to assist teacher workflows\, inclu
 ding assessment creation as well as gaze analytics and collaborative lear
 ning/play. His latest focus is on incorporating AI agents to meaningfully
  assist teachers and students by providing more personalized feedback\, s
 uggestions\, and content.\n\nRoom: BSIC 203 - Data Science and Machine Le
 arning Lab\, Bldg: Blank Sheppard Innovation Center (next to building #22
  on the map)\, One Camino Santa Maria\, St. Mary's University of San Anto
 nio\, San Antonio\, Texas\, United States\, 78228
LOCATION:Room: BSIC 203 - Data Science and Machine Learning Lab\, Bldg: Bla
 nk Sheppard Innovation Center (next to building #22 on the map)\, One Cam
 ino Santa Maria\, St. Mary's University of San Antonio\, San Antonio\, Te
 xas\, United States\, 78228
ORGANIZER:mailto:wluo@stmarytx.edu
SEQUENCE:5
SUMMARY:LLMs as Educational Analysts: Transforming Multimodal Data Traces i
 nto Actionable Reading Assessment Reports
URL;VALUE=URI:https://events.vtools.ieee.org/m/499770
END:VEVENT
END:VCALENDAR