KeOps: Kernel Operations on the GPU, with autodiff, without memory overflows

Tags: numerical methods, convolution, gradient, deep learning

The KeOps library lets you compute generic reductions of large 2D arrays whose entries are given by a mathematical formula. It is well suited to computing convolutions (and, more generally, kernel dot products) and their gradients, thanks to a built-in automatic differentiation engine.
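To make the target operation concrete, here is the kind of "reduction of a formula-defined 2D array" that KeOps handles, written as a dense NumPy sketch (this illustrates the math only, not the KeOps API): a Gaussian kernel dot product a_i = sum_j exp(-|x_i - y_j|^2 / (2*sigma^2)) * b_j.

```python
import numpy as np

# Dense (non-KeOps) sketch of a Gaussian kernel dot product:
#   a[i] = sum_j exp(-|x_i - y_j|^2 / (2 * sigma^2)) * b[j]
rng = np.random.default_rng(0)
M, N, D = 500, 400, 3
x = rng.standard_normal((M, D))   # "i" points
y = rng.standard_normal((N, D))   # "j" points
b = rng.standard_normal((N, 1))   # signal attached to the y_j
sigma = 0.5

# Entries of the M-by-N kernel matrix are given by a formula...
sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # (M, N)
K = np.exp(-sq_dists / (2 * sigma ** 2))

# ...and the reduction is a sum over the "j" axis, i.e. a matrix-vector product.
a = K @ b                          # (M, 1)
```

Note that this dense approach materializes the full M-by-N matrix K, which is exactly what becomes impossible at large M and N; avoiding that is the point of KeOps's tiled scheme described below.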

KeOps is fast: it computes Gaussian convolutions up to 40 times faster than a standard GPU tensor algebra library. KeOps is scalable and can be used on large data (typically 10^3 to 10^7 rows/columns): it uses a tiled reduction scheme and works even when the full kernel matrix does not fit into GPU memory. Finally, KeOps is easy to use, with bindings for Matlab, Python (NumPy or PyTorch) and R (coming soon).
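The idea behind the tiled reduction can be sketched in a few lines of NumPy: process the y_j in blocks and accumulate partial sums, so that only one tile of the kernel matrix exists in memory at any time (an illustrative sketch of the principle, not KeOps's actual CUDA implementation).

```python
import numpy as np

def gaussian_conv_tiled(x, y, b, sigma, tile=128):
    """Compute a[i] = sum_j exp(-|x_i - y_j|^2 / (2 sigma^2)) * b[j]
    without ever materializing the full len(x)-by-len(y) kernel matrix:
    only a len(x)-by-tile block is held in memory at a time."""
    a = np.zeros((x.shape[0], b.shape[1]))
    for start in range(0, y.shape[0], tile):
        y_blk = y[start:start + tile]              # (t, D) tile of columns
        b_blk = b[start:start + tile]              # (t, 1) matching signal
        d2 = ((x[:, None, :] - y_blk[None, :, :]) ** 2).sum(-1)
        a += np.exp(-d2 / (2 * sigma ** 2)) @ b_blk  # accumulate partial sums
    return a
```

The result is identical to the dense computation, but peak memory scales with the tile size rather than with the product of the two point counts.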

Web site: http://www.kernel-operations.io

 



  Date and Time

  • Date: 11 Jul 2019
  • Time: 12:30 PM to 01:30 PM
  • All times are (GMT-08:00) Canada/Pacific

  Location

  • 8888 University Drive
  • School of Engineering Science
  • Burnaby, British Columbia, Canada V5A 1S6
  • Building: Applied Sciences Building
  • Room Number: ASB 10900

  Registration

  • Starts 08 July 2019 02:50 PM
  • Ends 11 July 2019 01:30 PM
  • No Admission Charge


  Speakers

Dr. Benjamin Charlier, University of Montpellier

Topic:

KeOps: Kernel Operations on the GPU, with autodiff, without memory overflows


Biography:

Benjamin Charlier obtained his PhD in mathematics and statistics at the University of Toulouse, working on high-dimensional statistics in non-linear spaces. He then spent a year as a post-doc at the École Normale Supérieure (Cachan) in the team of Alain Trouvé, where he studied registration problems for functional shapes (point clouds or meshes to which a signal is attached).

Since 2013, he has been an assistant professor in the Stat/Math department of the University of Montpellier, and since 2016 he has also been a part-time visiting professor at the Brain and Spine Institute (Paris). He now focuses on computational methods combining geometry, variational methods, statistics and kernel spaces to analyse complex data extracted from medical imaging or medical studies.