BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Asia/Kolkata
BEGIN:STANDARD
DTSTART:19451014T230000
TZOFFSETFROM:+0630
TZOFFSETTO:+0530
TZNAME:IST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20200914T151258Z
UID:3721B0C0-BE52-4C23-A42B-C0A54C32FB21
DTSTART;TZID=Asia/Kolkata:20200912T100000
DTEND;TZID=Asia/Kolkata:20200912T124500
DESCRIPTION:A 3D FCN based Brain Tumor Segmentation for Overall Survival Pr
 ediction is a research project presented by Dr. Rupal R. Agravat and D
 r. Mehul S. Raval. The model deals with the segmentation of brain tumor
 s at an early stage\, so that an early diagnosis can lead to proper tre
 atment. Among all types of brain tumors\, glioma is one of the most lif
 e-threatening. Treatment planning depends heavily on accurate segmentat
 ion of the tumor sub-components\, but the heterogeneous nature of gliom
 a makes the segmentation task difficult. The segmentation results are f
 ed into a random forest regressor (RFR) to predict the overall surviva
 l of the patients. The RFR was trained on age\, shape and volumetric fe
 atures extracted from the ground truth provided with the training datas
 et. The dataset contains images that have been segmented manually by on
 e to four raters. Features such as age\, survival days\, and resection s
 tatus for 237 HGG scans are provided separately for Overall Survival (O
 S) prediction.\nTo sum up\, the proposed method consists of two main tas
 ks:\nTumor Segmentation and Survival Prediction.\nThe implementation com
 prises pre-processing\, training and post-processing. Pre-processing bo
 osts network training and improves performance\; all four modality imag
 es are bias field corrected. The network is trained on the entire train
 ing image dataset and uses a combination of two loss functions\, namel
 y the dice loss and the focal loss. Prediction of a single image takes a
 round one minute. Finally\, post-processing converts the enhancing tumo
 r into necrosis. Parameters such as sensitivity and specificity are the
 n recorded on two datasets: the training dataset and the validation dat
 aset.\nIn conclusion\, the proposal uses a three-layer deep U-net based e
 ncoder-decoder architecture for segmentation. Each layer on the encodin
 g side incorporates dense modules\, while the decoding side uses convolu
 tion modules. The network segmentation for cases with gross total resec
 tion (GTR) is then used with the RFR for overall survival prediction.\n
 \nSpeaker(s): Dr. Mehul Raval\n\nAhmedabad\, Gujarat\, India\, Virtual: ht
 tps://events.vtools.ieee.org/m/239716
LOCATION:Ahmedabad\, Gujarat\, India\, Virtual: https://events.vtools.ieee.
 org/m/239716
ORGANIZER:ieee.charusatsb@gmail.com
SEQUENCE:2
SUMMARY:3D semantic segmentation of brain tumour for overall survival predi
 ction
URL;VALUE=URI:https://events.vtools.ieee.org/m/239716
X-ALT-DESC:Description: &lt;br /&gt;&lt;p&gt;A 3D FCN based Brain Tumor Segmentation fo
 r Overall Survival Prediction is a research project presented by Dr. Ru
 pal R. Agravat and Dr. Mehul S. Raval. The model deals with the segment
 ation of brain tumors at an early stage\, so that an early diagnosis ca
 n lead to proper treatment. Among all types of brain tumors\, glioma i
 s one of the most life-threatening. Treatment planning depends heavil
 y on accurate segmentation of the tumor sub-components\, but the heter
 ogeneous nature of glioma makes the segmentation task difficult. The s
 egmentation results are fed into a random forest regressor (RFR) to pr
 edict the overall survival of the patients. The RFR was trained on ag
 e\, shape and volumetric features extracted from the ground truth prov
 ided with the training dataset. The dataset contains images that hav
 e been segmented manually by one to four raters. Features such as ag
 e\, survival days\, and resection status for 237 HGG scans are provide
 d separately for Overall Survival (OS) prediction.&lt;br /&gt;To sum up\, th
 e proposed method consists of two main tasks:&lt;br /&gt;Tumor Segmentatio
 n and Survival Prediction.&lt;br /&gt;The implementation comprises pre-proc
 essing\, training and post-processing. Pre-processing boosts network t
 raining and improves performance\; all four modality images are bias f
 ield corrected. The network is trained on the entire training image da
 taset and uses a combination of two loss functions\, namely the dice l
 oss and the focal loss. Prediction of a single image takes around on
 e minute. Finally\, post-processing converts the enhancing tumor int
 o necrosis. Parameters such as sensitivity and specificity are then r
 ecorded on two datasets: the training dataset and the validation data
 set.&lt;br /&gt;In conclusion\, the proposal uses a three-layer deep U-ne
 t based encoder-decoder architecture for segmentation. Each layer on t
 he encoding side incorporates dense modules\, while the decoding sid
 e uses convolution modules. The network segmentation for cases with g
 ross total resection (GTR) is then used with the RFR for overall surv
 ival prediction.&lt;/p&gt;
END:VEVENT
END:VCALENDAR