BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Asia/Kolkata
BEGIN:STANDARD
DTSTART:19451014T230000
TZOFFSETFROM:+0630
TZOFFSETTO:+0530
TZNAME:IST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20231216T234142Z
UID:48BC32B7-167F-442C-9E36-D4D3962A1B1E
DTSTART;TZID=Asia/Kolkata:20231125T103000
DTEND;TZID=Asia/Kolkata:20231125T113000
DESCRIPTION:Indian Sign Language (ISL) serves as a medium of communicati
 on for deaf and hard-of-hearing people\, who constitute a significant p
 ortion of the Indian population. Most people find it difficult to under
 stand ISL gestures\, which has created a communication gap between the
  hearing- and speech-impaired and those who do not know ISL. This proje
 ct aims to bridge that gap by developing a model that converts Indian S
 ign Language to text. The MediaPipe Python library provides detection s
 olutions for hands\, faces\, and more. MediaPipe Hands outputs 21 landm
 arks per hand. Using these landmarks\, the hand region can be segmented
 . A sign language gesture is represented as a video sequence consisting
  of spatial and temporal features: spatial features are extracted from
  individual frames\, while temporal features are extracted by relating
  the frames over time. A model can be trained on the spatial features u
 sing a CNN and on the temporal features using an RNN to convert Indian
  Sign Language to text.\n\nSpeaker(s): Prof. Anitha\n\nVirtual: https:/
 /events.vtools.ieee.org/m/385082
LOCATION:Virtual: https://events.vtools.ieee.org/m/385082
ORGANIZER:mailto:kundammrao@gmail.com
SEQUENCE:18
SUMMARY:Automatic conversion of Indian Sign Language (ISL) to Natural La
 nguage: A Case Study
URL;VALUE=URI:https://events.vtools.ieee.org/m/385082
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p>Indian Sign Language
  (ISL) serves as a medium of communication for deaf and hard-of-hearing
  people\, who constitute a significant portion of the Indian population
 . Most people find it difficult to understand ISL gestures\, which has
  created a communication gap between the hearing- and speech-impaired a
 nd those who do not know ISL. This project aims to bridge that gap by d
 eveloping a model that converts Indian Sign Language to text. The Media
 Pipe Python library provides detection solutions for hands\, faces\, an
 d more. MediaPipe Hands outputs 21 landmarks per hand. Using these land
 marks\, the hand region can be segmented. A sign language gesture is re
 presented as a video sequence consisting of spatial and temporal featur
 es: spatial features are extracted from individual frames\, while tempo
 ral features are extracted by relating the frames over time. A model ca
 n be trained on the spatial features using a CNN and on the temporal fe
 atures using an RNN to convert Indian Sign Language to text.</p>
END:VEVENT
END:VCALENDAR