Splines and Machine Learning: From Classical RKHS Methods to Deep Neural Nets

Biomedical Imaging Group

Posted by Philippe Thévenaz, 6 October 2020

M. Unser

Keynote address, IEEE International Workshop on Machine Learning for Signal Processing (MLSP'20), Espoo, Finland (held virtually), September 21-24, 2020.

Supervised learning is a fundamentally ill-posed problem. In practice, this indeterminacy is resolved by imposing constraints on the solution; these are either implicit, as in neural networks, or explicit, via the use of a regularization functional. In this talk, I present a unifying perspective that revolves around a new representer theorem characterizing the solutions of a broad class of functional optimization problems. I then use this theorem to derive the most prominent classical algorithms (e.g., kernel-based techniques and smoothing splines) as well as their "sparse" counterparts. This leads to the identification of sparse adaptive splines, which have some remarkable properties.
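For orientation, here is a minimal sketch of the two regimes the abstract contrasts; the notation is mine, not necessarily the talk's. Given data pairs (x_m, y_m), m = 1, ..., M, the classical Hilbertian (RKHS) formulation

$$\min_{f \in \mathcal{H}} \ \sum_{m=1}^{M} E\big(y_m, f(x_m)\big) + \lambda \|f\|_{\mathcal{H}}^{2}$$

admits, by the classical representer theorem, a solution built from one kernel atom per data point, $f^{\star}(x) = \sum_{m=1}^{M} a_m\, k(x, x_m)$. Replacing the quadratic Hilbertian penalty by a sparsity-promoting one, such as second-order total variation in one dimension,

$$\min_{f} \ \sum_{m=1}^{M} E\big(y_m, f(x_m)\big) + \lambda\, \mathrm{TV}^{(2)}(f),$$

instead yields an adaptive linear spline $f^{\star}(x) = b_0 + b_1 x + \sum_{k=1}^{K} a_k\, (x - \tau_k)_+$, whose knots $\tau_k$ are selected by the optimization rather than fixed a priori, and whose number K is typically much smaller than M.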

I then show how the latter can be integrated into conventional neural architectures to yield high-dimensional adaptive linear splines. Finally, I recover deep neural nets with ReLU activations as a particular case.
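To illustrate how such adaptive splines can serve as activation functions, here is a minimal PyTorch sketch. It is not the speaker's implementation; the fixed knot grid, the parameter names, and the L1-style sparsity penalty mentioned below are assumptions of this sketch.

```python
# A learnable 1-D linear-spline activation built from ReLU atoms, in the spirit
# of the representer theorem above. Knot locations `tau` are fixed on a grid
# here (an assumption of this sketch); the coefficients a_k and the affine part
# (b0, b1) are learned per channel.
import torch
import torch.nn as nn

class LinearSplineActivation(nn.Module):
    """Per-channel linear spline: x -> b0 + b1*x + sum_k a_k * relu(x - tau_k)."""

    def __init__(self, num_channels: int, num_knots: int = 21, x_range: float = 3.0):
        super().__init__()
        # Fixed knot grid on [-x_range, x_range]; shape (1, 1, K) for broadcasting.
        tau = torch.linspace(-x_range, x_range, num_knots)
        self.register_buffer("tau", tau.view(1, 1, -1))
        self.a = nn.Parameter(torch.zeros(num_channels, num_knots))  # spline coefficients
        self.b0 = nn.Parameter(torch.zeros(num_channels))            # affine offset
        self.b1 = nn.Parameter(torch.ones(num_channels))             # affine slope
        # Initialized as the identity map (b1 = 1, a = 0).

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C) -> ReLU atoms: (N, C, K) -> spline: (N, C)
        relu_atoms = torch.relu(x.unsqueeze(-1) - self.tau)
        spline = (relu_atoms * self.a).sum(dim=-1)
        return self.b0 + self.b1 * x + spline
```

With b0 = 0, b1 = 0, and a single unit coefficient at a knot tau_k = 0, the module reduces to a plain ReLU, which is the sense in which ReLU networks appear as a particular case; adding a penalty proportional to sum_k |a_k| during training (a sparsity-promoting proxy in this sketch) keeps the number of active knots small.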

CC BY-NC-ND licensed.