BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
DTSTART:20260329T010000
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:BST
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20251026T020000
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:GMT
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20251112T194549Z
UID:648E29E1-2600-4FE9-9C3A-A7A30ECD841C
DTSTART;TZID=Europe/London:20251112T133000
DTEND;TZID=Europe/London:20251112T143000
DESCRIPTION:The growing integration of large language models across profess
 ional domains transforms how experts make critical decisions in healthcare
 \, education\, and law. While significant research effort focuses on getti
 ng these systems to communicate their outputs with probabilistic measures 
 of reliability\, many consequential forms of uncertainty in professional c
 ontexts resist such quantification. A physician pondering the appropriaten
 ess of documenting possible domestic abuse\, a teacher assessing cultural 
 sensitivity\, or a mathematician distinguishing procedural from conceptual
  understanding faces forms of uncertainty that cannot be reduced to percent
 ages. This paper argues for moving beyond simple quantification toward ric
 her expressions of uncertainty essential for beneficial AI integration. We
  propose participatory refinement processes through which professional com
 munities collectively shape how different forms of uncertainty are communi
 cated. Our approach acknowledges that uncertainty expression is a form of 
 professional sense-making that requires collective development rather than
  algorithmic optimization.\n\nCo-sponsored by: Ulster University\n\nSpeake
 r(s): Sylvie\n\nVirtual: https://events.vtools.ieee.org/m/505621
LOCATION:Virtual: https://events.vtools.ieee.org/m/505621
ORGANIZER:mailto:h.zheng@ulster.ac.uk
SEQUENCE:11
SUMMARY:Beyond Quantification: Navigating Uncertainty in Professional AI Sy
 stems
URL;VALUE=URI:https://events.vtools.ieee.org/m/505621
END:VEVENT
END:VCALENDAR

