BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Canada/Eastern
BEGIN:DAYLIGHT
DTSTART:20200308T030000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20201101T010000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20201005T174806Z
UID:28F2747D-9D9D-4F2C-BC7D-906A39F204A1
DTSTART;TZID=Canada/Eastern:20200924T143000
DTEND;TZID=Canada/Eastern:20200924T153000
DESCRIPTION:The demand for video streaming is growing every day\, which i
 mplies a higher demand for new video transmission and compression techni
 ques to avoid data traffic overload on telecommunication networks. In th
 is dissertation\, we studied saliency detection in order to apply it to 
 the video streaming problem\, so that different regions of video frames 
 can be transmitted in a ranked manner based on their importance (i.e.\, 
 saliency). Salient areas are the regions of interest that stand out rela
 tive to their surroundings and consequently attract more attention. To d
 etermine the salient areas within a scene\, the visual importance and di
 stinctiveness of its regions must be measured. The lack of a comprehensi
 ve and precise biologically inspired study on the saliency of bottom-up 
 stimuli prevents justifying the level of importance of different stimuli
 \, such as color\, luminance\, texture\, and motion\, for the human visu
 al system (HVS). To overcome this barrier\, we investigated these bottom
 -up features in video sequences using an eye-tracking procedure with hum
 an subjects\, producing a saliency ranking system that identifies the mo
 st dominant elements for each feature individually as well as in combina
 tion with other features. The experiment was performed under conditions 
 free of cognitive bias in order to speed up the video streaming procedur
 e. Next\, we introduced a gradual saliency detection framework for both 
 still images and video sequences using color\, texture\, and motion feat
 ures (based on our experimental estimations). In our algorithm\, we prop
 osed new feature maps for the color and texture features\, and we also i
 mproved the optical flow field estimation in our motion map. Finally\, t
 he different feature maps were combined and classified into different sa
 liency levels using a naive Bayesian network. This work provides a bench
 mark for specifying gradual saliency in both static and dynamic (i.e.\, 
 moving-background) scenes. The main contribution of this work is the abi
 lity to assign a gradual saliency to the entirety of an image/video fram
 e rather than simply extracting a salient object/area\, as is widely don
 e in the state of the art.\n\nSpeaker(s): Dr. Jila Hosseinkhani\n\nVirtu
 al: https://events.vtools.ieee.org/m/240720
LOCATION:Virtual: https://events.vtools.ieee.org/m/240720
ORGANIZER:mailto:m.abdelazez.ca@ieee.org
SEQUENCE:2
SUMMARY:IEEE EMBS - Gradual Saliency Detection in Video Sequences Using Bot
 tom-up Attributes
URL;VALUE=URI:https://events.vtools.ieee.org/m/240720
END:VEVENT
END:VCALENDAR

