BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:US/Eastern
BEGIN:DAYLIGHT
DTSTART:20230312T030000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20231105T010000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230825T230242Z
UID:0DF23FA3-F089-4864-9DC5-D0AE8E7AB68C
DTSTART;TZID=US/Eastern:20230825T180000
DTEND;TZID=US/Eastern:20230825T190000
DESCRIPTION:The [IEEE Long Island (LI) Signal Processing Society (SPS)](
 https://ieee.li/society-chapters/signal-processing-society-sp/) in coll
 aboration with the [North Jersey Social Implications of Technology Soci
 ety](https://r1.ieee.org/northjersey/chapter/ssit/) presents the follow
 ing Technical Lecture:\n\nAccurate image segmentation is vital for clin
 ical applications such as diagnosis and surgery planning. While deep ne
 ural networks have achieved superior segmentation results via fully sup
 ervised learning\, their reliance on substantial annotated training dat
 a is a challenge: procuring extensive labeled datasets for medical imag
 es is labor-intensive and costly because annotation requires clinical e
 xpertise. Hence the critical need for strategies that segment medical i
 mages from scant annotations while harnessing the untapped potential o
 f unlabeled data during training. We harness self-supervised representa
 tion learning and semi-supervised learning to this end and perform exte
 nsive experiments on images from multiple modalities: Computed Tomogra
 phy (CT) scans\, Magnetic Resonance Imaging (MRI) scans\, histopatholog
 y studies\, etc. Our recent research shows that even with minimal annot
 ations (x<10%)\, we achieve performance comparable or superior to full
 y supervised approaches.\n\nSpeaker(s): Mr. Hritam Basak\, Student\n\nA
 genda:\nTechnical support set-up: 5:30pm EDT\nIntroductions: 6pm-6:05p
 m EDT\nTechnical Lecture: 6:05pm-6:50pm EDT\nQ&A: 6:50pm-7pm EDT\n\nVir
 tual: https://events.vtools.ieee.org/m/369575
LOCATION:Virtual: https://events.vtools.ieee.org/m/369575
ORGANIZER:mailto:Signal@ieee.li
SEQUENCE:99
SUMMARY:Maximizing Learning with Minimal Labels: Innovations in Medical Ima
 ge Analysis with Sparse Labels
URL;VALUE=URI:https://events.vtools.ieee.org/m/369575
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p style="font-weigh
 t: 400\;">The <a href="https://ieee.li/society-chapters/signal-proces
 sing-society-sp/">IEEE Long Island (LI) Signal Processing Society (SP
 S)</a> in collaboration with the <a href="https://r1.ieee.org/northje
 rsey/chapter/ssit/">North Jersey Social Implications of Technology So
 ciety</a> presents the following Technical Lecture:</p>\n<p style="fo
 nt-weight: 400\;">Accurate image segmentation is vital for clinical a
 pplications such as diagnosis and surgery planning. While deep neura
 l networks have achieved superior segmentation results via fully supe
 rvised learning\, their reliance on substantial annotated training da
 ta is a challenge: procuring extensive labeled datasets for medical i
 mages is labor-intensive and costly because annotation requires clini
 cal expertise. Hence the critical need for strategies that segment me
 dical images from scant annotations while harnessing the untapped pot
 ential of unlabeled data during training. We harness self-supervise
 d representation learning and semi-supervised learning to this end an
 d perform extensive experiments on images from multiple modalities: C
 omputed Tomography (CT) scans\, Magnetic Resonance Imaging (MRI) scan
 s\, histopathology studies\, etc. Our recent research shows that eve
 n with minimal annotations (x&lt\;10%)\, we achieve performance compa
 rable or superior to fully supervised approaches.</p><br /><br />Agen
 da: <br /><p>Technical support set-up: 5:30pm EDT<br />Introduction
 s: 6pm-6:05pm EDT<br />Technical Lecture: 6:05pm-6:50pm EDT<br />Q&am
 p\;A: 6:50pm-7pm EDT</p>
END:VEVENT
END:VCALENDAR