In this lecture we will discuss the mathematical foundations of 'machine learning'. There are roughly two classes of methods: 'supervised learning' usually refers to methods that estimate a map y = f(x), linking input and output, from given input data x1, ..., xn and associated output data y1, ..., yn. In 'unsupervised learning' no output data is available; instead, one tries to find structure in the input data. This structure can be geometric in nature (does the input data lie on a manifold?) or topological in nature (which input data are 'similar'? Are there interesting subgroups? How are they connected?).
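The supervised setup above can be illustrated with a minimal sketch of Nadaraya-Watson kernel regression (one of the focal points of the lecture): the estimate of f at a query point is a kernel-weighted average of the observed outputs y_i. The data, bandwidth, and function names here are hypothetical choices for illustration, not taken from the lecture itself.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth=0.3):
    """Estimate f(x_query) as a Gaussian-kernel-weighted average of y_train."""
    # pairwise differences between query points and training points
    diffs = x_query[:, None] - x_train[None, :]
    # Gaussian kernel weights; the bandwidth controls the smoothing
    weights = np.exp(-0.5 * (diffs / bandwidth) ** 2)
    # weighted average of the outputs for each query point
    return (weights @ y_train) / weights.sum(axis=1)

# toy data: noisy samples of an unknown function (here sin, for illustration)
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 2 * np.pi, 100))
y = np.sin(x) + 0.1 * rng.standard_normal(100)

# estimate f away from the boundary of the sampled interval
x_new = np.linspace(0.5, 5.5, 50)
f_hat = nadaraya_watson(x, y, x_new)
```

The bandwidth mediates the usual bias-variance trade-off: small values follow the noise, large values oversmooth the estimate.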
In the lecture we will work out the mathematical foundations of various machine learning methods. Our focus will be on understanding why (and in what sense) these methods work. We will deepen our understanding by means of many numerical examples. Focal points include: kernel regression, support vector machines, manifold learning, spectral clustering, high-dimensional probability.
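As a taste of the unsupervised methods listed above, the following sketch shows spectral clustering on a hypothetical toy data set: build a Gaussian similarity graph, form the graph Laplacian L = D - W, and split the points by the sign of the eigenvector for the second-smallest eigenvalue (the Fiedler vector). The data and parameter values are illustrative assumptions, not material from the lecture.

```python
import numpy as np

# toy data: two well-separated point clouds in the plane
rng = np.random.default_rng(1)
a = rng.normal(loc=(0.0, 0.0), scale=0.2, size=(30, 2))
b = rng.normal(loc=(3.0, 3.0), scale=0.2, size=(30, 2))
points = np.vstack([a, b])

# Gaussian similarity graph W and unnormalized Laplacian L = D - W
sq_dists = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
W = np.exp(-sq_dists / (2 * 0.5 ** 2))
L = np.diag(W.sum(axis=1)) - W

# eigenvector for the second-smallest eigenvalue of L (Fiedler vector);
# eigh returns eigenvalues in ascending order for symmetric matrices
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]

# splitting at zero separates the two near-disconnected graph components
labels = (fiedler > 0).astype(int)
```

For two clusters whose similarity graph is nearly disconnected, the Fiedler vector is almost constant on each component with opposite signs, so thresholding at zero recovers the partition; for more clusters one typically runs k-means on several Laplacian eigenvectors.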