In-Bed Pose Estimation: Deep Learning with Shallow Dataset

Tags: Advanced Machine Learning, Deep Learning, Large Medical Datasets, Multimodal Neuroimaging, Statistical Learning

The Montreal Chapter of the IEEE Signal Processing (SP) Society cordially invites you to attend the following talk by Prof. Sarah Ostadabbas, Technical Co-Chair of the Symposium on Signal Processing and Machine Learning in Large Medical Datasets, from the Electrical and Computer Engineering Department, Northeastern University, Boston, MA, USA. The talk will take place on Friday, November 17th, 2017, from 11:00 AM to 12:00 PM at Concordia University (EV Building, Room 11.119).



  Date and Time

  • Date: 17 Nov 2017
  • Time: 11:00 AM to 12:00 PM
  • All times are (GMT-05:00) America/Montreal

  Location

  • 1515 Saint-Catherine St. West
  • Montreal, Quebec, Canada H3G 2W1
  • Building: EV Building
  • Room Number: 11.119

  Hosts

  • Prof. Arash Mohammadi
    Concordia Institute for Information Systems Engineering (CIISE)
    Concordia University,
    Montreal, QC, H3G 2W1, Canada

  Registration

  • Starts 03 November 2017 12:00 AM
  • Ends 16 November 2017 05:00 PM
  • All times are (GMT-05:00) America/Montreal
  • No Admission Charge


  Speakers

Dr. Sarah Ostadabbas of Northeastern University

Topic:

In-Bed Pose Estimation: Deep Learning with Shallow Dataset

Deep learning approaches have been rapidly adopted across a wide range of fields because of their accuracy and flexibility. They provide highly scalable solutions for problems in object detection and recognition, machine translation, text-to-speech, and recommendation systems, all of which require large amounts of labeled data. This presents a fundamental problem for applications with limited, expensive, or private data (i.e., small data), such as healthcare. One example of an application facing this small-data challenge is human in-bed pose estimation. In-bed poses carry information about a person's health state, and accurate recognition of these poses can lead to better diagnosis and predictive care. In this talk, I present the idea of using a small dataset of in-bed poses (limited in size and different in perspective and color from standard pose datasets) to retrain a convolutional neural network (CNN) that was pre-trained on general human poses. We show that the classical fine-tuning principle is not always effective and that the network architecture matters. For the specific human pose estimation CNN considered, our proposed fine-tuning model demonstrates clear improvement over classical fine-tuning when applied to sleeping poses.
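As a rough illustration of the transfer-learning setup the abstract describes (and not the speaker's actual method or architecture), the PyTorch sketch below fine-tunes a generic pre-trained CNN on a small pose dataset. The backbone choice (resnet18), the number of keypoints, and the in_bed_loader data loader are all illustrative assumptions.

```python
# A minimal sketch of classical fine-tuning on a small dataset;
# not the speaker's method. Names marked below are assumptions.

import torch
import torch.nn as nn
from torchvision import models

NUM_KEYPOINTS = 14  # assumed number of body joints to regress

# Start from a network pre-trained on a large generic dataset (ImageNet).
backbone = models.resnet18(pretrained=True)

# Freeze the early layers: with a small dataset, retraining everything
# tends to overfit, so only the new head is adapted here.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the classification head with a keypoint-regression head
# (x, y per joint); its weights are trained from scratch.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_KEYPOINTS * 2)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.MSELoss()

def fine_tune(model, in_bed_loader, epochs=10):
    """in_bed_loader is a placeholder for a DataLoader yielding
    (image batch, keypoint coordinates of shape (batch, joints, 2))."""
    model.train()
    for _ in range(epochs):
        for images, keypoints in in_bed_loader:
            optimizer.zero_grad()
            preds = model(images)  # (batch, 2 * NUM_KEYPOINTS)
            loss = criterion(preds, keypoints.view(keypoints.size(0), -1))
            loss.backward()
            optimizer.step()
```

Freezing the early layers is one common off-the-shelf fine-tuning recipe when data are scarce; the talk's point is precisely that such recipes do not always transfer well to in-bed poses, and that the architecture of the fine-tuned portion matters.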

Biography:

Sarah Ostadabbas is a second-year assistant professor in the Electrical and Computer Engineering Department of Northeastern University (NEU). Sarah joined NEU from Georgia Tech, where she was a post-doctoral researcher after completing her PhD at the University of Texas at Dallas in 2014. At NEU, Sarah has recently formed the Augmented Cognition Laboratory (ACLab) with the goal of enhancing human information-processing capabilities through the design of adaptive interfaces via physical, physiological, and cognitive state estimation. These interfaces are based on rigorous models adaptively parameterized using machine learning and computer vision algorithms.

Sarah has over five years of experience developing human-machine interaction technologies that successfully bridge the fields of artificial intelligence (AI) and human intelligence amplification (IA). With the support of an NSF SBIR grant in 2013, as PI she was involved in the commercialization of a decision-support software/interface to prevent pressure ulcers in bed-bound patients by suggesting a resource-efficient posture-changing schedule. While grounded in a theoretical/analytical framework, her solutions also address the system-integration challenges presented by human-centric designs.

The first step for most projects at ACLab is creating a framework to estimate the coupling between the user and the machine: that is, how rapidly and accurately the user perceives information related to their environment, or how rapidly they can send commands to the machine. The second step is to use this model to create an interface that maximizes this information transfer. When the goal is to replace functionality lost to disease or disability, this is called “digital prosthetics”; when the goal is to increase human functionality beyond the norm, it is called “intelligence amplification”. For many of these projects, augmented reality (AR) and virtual reality (VR) tools are essential for both the assessment and enhancement portions of the project.

Professor Ostadabbas is the co-author of more than 40 peer-reviewed journal and conference articles, and is an inventor on two US patent applications. She is a member of IEEE, IEEE Women in Engineering, IEEE Signal Processing Society, IEEE EMBS, IEEE Young Professionals, and ACM SIGCHI.
