Dimension Reduction & Maximum Likelihood: How to compress your data while retaining the key features
Prerequisites:
You do not need to have attended the earlier talks. If you know zero math and zero machine learning, then this talk is for you: Jeff will do his best to explain fairly hard mathematics to you. If you know a bunch of math and/or a bunch of machine learning, then this talk is also for you: Jeff tries to spin the ideas in new ways.
Longer Abstract:
A randomly chosen bit string cannot be compressed at all. But if there is a pattern to it, e.g. it represents an image, then maybe it can be compressed. Each pixel of an image is specified by one (or three) real numbers. If an image has thousands or millions of pixels, then each of these numbers acts as a coordinate of the point where the image sits in a very high-dimensional space. A set of such images then corresponds to a set of such points. We can understand the pattern of these points/images as follows. Maximum Likelihood assumes that the given set of points/images was randomly chosen according to a multi-dimensional normal distribution, and then adjusts the parameters of this distribution so as to maximize the probability of getting the images that we have. The obtained parameters effectively fit an ellipse around the points/images in this high-dimensional space. We then reduce the number of dimensions by collapsing this ellipse along its least significant axes. Projecting each point/image onto this lower-dimensional space compresses the amount of information needed to represent each image.
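The procedure sketched in the abstract can be illustrated in a few lines of NumPy. This is an illustrative sketch, not material from the talk: the data here are synthetic points (standing in for images) generated near a low-dimensional subspace, and the variable names are my own. Fitting the maximum-likelihood mean and covariance of a Gaussian, taking the covariance's eigenvectors as the ellipse's axes, and keeping only the most significant axes is exactly principal component analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 synthetic "images", each a point in 50-dimensional space,
# lying near a 3-dimensional subspace plus a little noise.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 50))
points = latent @ mixing + 0.01 * rng.normal(size=(200, 50))

# Maximum-likelihood estimates of the normal distribution's parameters.
mean = points.mean(axis=0)
centered = points - mean
cov = centered.T @ centered / len(points)

# The fitted ellipse's axes are the covariance's eigenvectors;
# the eigenvalues measure how significant each axis is.
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order

# Collapse all but the k most significant axes and project onto them.
k = 3
axes = eigvecs[:, -k:]                   # top-k eigenvectors
compressed = centered @ axes             # now 200 x 3 instead of 200 x 50
reconstructed = compressed @ axes.T + mean

# Fraction of the total variance that survives the projection.
print(eigvals[-k:].sum() / eigvals.sum())
```

Because the synthetic points really do lie near a 3-dimensional subspace, almost all of the variance survives the projection, and each point is now described by 3 numbers instead of 50.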
Date and Time
- Date: 30 Nov 2021
- Time: 05:00 PM to 06:30 PM
- All times are (GMT-05:00) Canada/Eastern

Registration
- Starts 08 November 2021 09:00 PM
- Ends 30 November 2021 05:00 PM
- No Admission Charge