$\vec{w}h\alpha\mathfrak{t}\;\; \forall\mathbb{R}\varepsilon\ldots$

gradient flows and optimal transport for machine learning and optimization?


Who?
Jia-Jie Zhu (WIAS)
When?
2024/07/12, 13:00 s.t.
Before the MATH+ Friday colloquium talk by Martin Burger
Where?
Physikalisch-Technische Bundesanstalt (PTB), Anna-von-Helmholtz-Bau, Lecture Hall
About what?

In this talk, I will provide an overview of gradient flows over non-negative and probability measures, e.g., in the Wasserstein space, and their applications in modern machine learning tasks such as variational inference, sampling, training of over-parameterized models, and robust optimization. I will then present the high-level ideas of the theoretical analysis, as well as our recent work on unbalanced transport, such as the Hellinger-Kantorovich (a.k.a. Wasserstein-Fisher-Rao) geometry, and its particle approximation for machine learning.
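As a small illustration of the theme (not taken from the talk itself): the Wasserstein gradient flow of the KL divergence KL(· || π) is the Fokker-Planck equation, and its simplest particle approximation is the unadjusted Langevin algorithm. A minimal Python sketch, assuming a standard Gaussian target for concreteness:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target pi ∝ exp(-V) with V(x) = x^2 / 2 (standard Gaussian), so grad V(x) = x.
def grad_V(x):
    return x

def langevin(n_particles=2000, n_steps=2000, step=0.01):
    """Unadjusted Langevin algorithm: a time-discretized particle
    approximation of the Wasserstein gradient flow of KL(. || pi)."""
    x = rng.normal(loc=5.0, scale=0.5, size=n_particles)  # initialize far from target
    for _ in range(n_steps):
        noise = rng.normal(size=n_particles)
        x = x - step * grad_V(x) + np.sqrt(2.0 * step) * noise
    return x

samples = langevin()
print(samples.mean(), samples.std())  # close to 0 and 1 for the Gaussian target
```

The particle cloud transports mass from the off-center initialization toward the target; unbalanced geometries such as Hellinger-Kantorovich additionally allow mass to be created or destroyed, which is part of what the talk covers.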

The talk is mainly based on recent joint work with Alexander Mielke, Pavel Dvurechensky, and Egor Gladin.