Input Intelligence on Mobile Devices

"Input Intelligence on Mobile Devices" by Dr. Jerome R. Bellegarda of Apple Inc.

Over the past decade, the confluence of sophisticated algorithms and tools, computational infrastructure, and data science has fueled a machine learning revolution across multiple fields, including speech and handwriting recognition, natural language processing, computer vision, social network filtering, and machine translation. The ensuing advances are changing the way we interact with technology in our daily lives. This is particularly salient when it comes to user input on mobile devices, be it speech, handwriting, touch, keyboard, or camera input. Increased input intelligence boosts device responsiveness across languages, improving not only basic abilities like tokenization, named entity recognition, and part-of-speech tagging, but also more advanced capabilities like statistical language modeling and question answering. In this talk, I will give selected examples of what we are doing at Apple to impart input intelligence to mobile devices, with two overarching themes as subtext: (i) enhancing the interaction experience through machine learning, and (ii) transforming users' digital lives without sacrificing their privacy.



  Date and Time

  • Date: 13 Apr 2022
  • Time: 12:00 PM to 01:00 PM
  • All times are (GMT-05:00) US/Eastern

  Location

  • 1000 River Road
  • Teaneck, New Jersey
  • United States 07666
  • Building: Muscarelle Center
  • Room Number: M105

  Hosts

  • Contact Event Hosts
  • Co-sponsored by North Jersey Section

  Registration

  • Starts 03 March 2022 08:59 AM
  • Ends 13 April 2022 12:00 PM
  • No Admission Charge


  Speakers

Dr. Jerome R. Bellegarda of Apple Inc.

Topic:

Input Intelligence on Mobile Devices


Biography:

Dr. Jerome R. Bellegarda (M’87-SM’92-F’03) is Apple Distinguished Scientist in Intelligent System Experience at Apple Inc., Cupertino, California, which he joined in 1994. Prior to that, he was a Research Staff Member at the IBM T.J. Watson Research Center, Yorktown Heights, New York. He received the Ph.D. degree in Electrical Engineering from the University of Rochester, Rochester, New York, in 1987. Among his diverse contributions to speech and language advances over the years, he pioneered the use of tied mixtures in acoustic modeling and latent semantics in language modeling. In addition, he was instrumental in the due diligence process leading to Apple's acquisition of the Siri personal assistant technology and its integration into the Apple ecosystem. His general interests span machine learning applications, statistical modeling algorithms, natural language processing, man-machine communication, multiple input/output modalities, and multimedia knowledge management. In these areas, he has written close to 200 publications and holds over 100 U.S. and foreign patents. He has served on many international scientific committees, review panels, and advisory boards. In particular, he has worked as Expert Advisor on speech and language technologies for both the U.S. National Science Foundation and the European Commission, served on the IEEE Signal Processing Society (SPS) Speech Technical Committee, was Associate Editor for the IEEE Transactions on Audio, Speech and Language Processing, and is currently an Editorial Board member for Speech Communication. He was recently selected as one of the 2022 IEEE SPS Distinguished Industry Speakers. He is a Fellow of both IEEE and ISCA (International Speech Communication Association).





