Neural networks (NNs) constitute a class of functions that depend nonlinearly on a finite number of parameters; they are typically used for the (optimal) approximation of an unknown function from a given set of function values.
In this seminar, we first plan to investigate the richness and approximation power of neural networks by considering topics such as Hilbert's 13th problem, approximation error estimates, and discretization error estimates for NN discretizations of partial differential equations.
Furthermore, we will address the algebraic minimization problems for (optimal) NN parameters. Popular numerical solution methods such as stochastic gradient descent will finally lead us to novel reinterpretations of NNs in terms of fluctuating hydrodynamics, which are currently investigated in a MATH+ project of the organizers.
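To fix ideas, the basic stochastic gradient descent iteration can be sketched for a scalar least-squares problem; the model, data, and parameter names below are purely illustrative and not part of the seminar material:

```python
import random

# Minimal sketch of stochastic gradient descent (SGD) for the
# one-parameter least-squares problem  min_w  sum_i (w*x_i - y_i)^2.
# The names (w, lr, n_epochs) are illustrative assumptions.
def sgd_fit(xs, ys, lr=0.1, n_epochs=100, seed=0):
    rng = random.Random(seed)
    w = 0.0
    idx = list(range(len(xs)))
    for _ in range(n_epochs):
        rng.shuffle(idx)  # visit the samples in a random order
        for i in idx:
            # gradient of the i-th summand:  d/dw (w*x_i - y_i)^2
            grad = 2.0 * (w * xs[i] - ys[i]) * xs[i]
            w -= lr * grad  # stochastic gradient step
    return w

# Data generated from y = 3x; SGD then recovers w close to 3.
xs = [0.5, 1.0, 1.5, 2.0]
ys = [3.0 * x for x in xs]
w = sgd_fit(xs, ys)
```

Unlike full gradient descent, each update uses the gradient of a single summand of the loss, which is the source of the stochastic fluctuations studied in the seminar.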
Participants are expected to have basic knowledge of numerical analysis as taught, e.g., in the lectures Numerik I-III at FU Berlin.