Input Intelligence on Mobile Devices

Tags: intelligence on mobile devices; enhancing interaction experience through machine learning; transforming users' digital lives without sacrificing their privacy

The IEEE Atlanta Chapter of the Signal Processing Society and the Engineering in Medicine and Biology Chapter are hosting IEEE SPS Distinguished Industry Lecturer Dr. Jerome Bellegarda for a virtual talk:

"Input Intelligence on Mobile Devices"

Abstract. Over the past decade, the confluence of sophisticated algorithms and tools, computational infrastructure, and data science has fueled a machine learning revolution across multiple fields, including speech and handwriting recognition, natural language processing, computer vision, social network filtering, and machine translation. The ensuing advances are changing the way we interact with technology in our daily lives. This is particularly salient when it comes to user input on mobile devices, be it speech, handwriting, touch, keyboard, or camera input. Increased input intelligence boosts device responsiveness across languages, improving not only basic abilities such as tokenization, named entity recognition, and part-of-speech tagging, but also more advanced capabilities such as statistical language modeling and question answering. In this talk, I will give selected examples of what we are doing at Apple to impart input intelligence to mobile devices, with two overarching themes as subtext: (i) enhancing the interaction experience through machine learning, and (ii) transforming users' digital lives without sacrificing their privacy.
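
To make the basic abilities named in the abstract concrete, the sketch below shows on-device tokenization, part-of-speech tagging, and named entity recognition using Apple's publicly documented NaturalLanguage framework (NLTagger). It is a minimal illustration only, not drawn from the talk or from the speaker's implementation, and the sample sentence is invented for demonstration.

```swift
import NaturalLanguage

// Illustrative sketch only: the kind of on-device "input intelligence"
// capabilities the abstract mentions, via Apple's public NaturalLanguage
// framework. Not the speaker's implementation; sample text is invented.
let text = "Jerome Bellegarda will give a virtual talk for the IEEE Atlanta Chapter."

let tagger = NLTagger(tagSchemes: [.lexicalClass, .nameType])
tagger.string = text

// Tokenize into words and report each token's part of speech.
tagger.enumerateTags(in: text.startIndex..<text.endIndex,
                     unit: .word,
                     scheme: .lexicalClass,
                     options: [.omitWhitespace, .omitPunctuation]) { tag, range in
    if let tag = tag {
        print("\(text[range]) -> \(tag.rawValue)")   // e.g. "talk -> Noun"
    }
    return true
}

// Recognize named entities (people, places, organizations) in the same text.
tagger.enumerateTags(in: text.startIndex..<text.endIndex,
                     unit: .word,
                     scheme: .nameType,
                     options: [.omitWhitespace, .omitPunctuation, .joinNames]) { tag, range in
    if let tag = tag,
       tag == .personalName || tag == .placeName || tag == .organizationName {
        print("\(text[range]) is a \(tag.rawValue)")  // e.g. "Jerome Bellegarda is a PersonalName"
    }
    return true
}
```

All of this runs locally on the device, which is one concrete way the two themes of the talk (machine-learned interaction and privacy preservation) can coexist.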



  Date and Time

  • Starts: 28 March 2022, 10:00 PM UTC
  • Ends: 04 May 2022, 05:00 PM UTC

  Location

  • Atlanta, Georgia, United States (virtual event)

  Hosts

  • Contact Event Hosts

  Registration

  • No Admission Charge


  Speakers

Dr. Jerome Bellegarda, Apple Inc.

Topic:

Input Intelligence on Mobile Devices

Biography:

Dr. Jerome R. Bellegarda (M’87-SM’92-F’03) is an Apple Distinguished Scientist in Intelligent System Experience at Apple Inc., Cupertino, California, which he joined in 1994. He received the Ph.D. degree in Electrical Engineering from the University of Rochester, Rochester, New York, in 1987. Among his diverse contributions to speech and language advances over the years, he pioneered the use of tied mixtures in acoustic modeling and latent semantics in language modeling. In addition, he was instrumental in the due diligence process leading to Apple's acquisition of the Siri personal assistant technology and its integration into the Apple ecosystem. His general interests span machine learning applications, statistical modeling algorithms, natural language processing, man-machine communication, multiple input/output modalities, and multimedia knowledge management. In these areas he has written close to 200 publications and holds over 100 U.S. and foreign patents. He has worked as an Expert Advisor on speech and language technologies for both the U.S. National Science Foundation and the European Commission, served on the IEEE Signal Processing Society (SPS) Speech Technical Committee, was an Associate Editor for the IEEE Transactions on Audio, Speech and Language Processing, and is currently an Editorial Board member of Speech Communication. He is a Fellow of both IEEE and ISCA (International Speech Communication Association).


  Media

Event Flyer (241.33 KiB)