In this talk, I will give an overview of gradient flows over spaces of non-negative and probability measures, e.g. in the Wasserstein geometry, and their applications to modern machine learning tasks such as variational inference, sampling, training of over-parameterized models, and robust optimization. I will then present the high-level ideas behind the theoretical analysis, as well as our recent work on unbalanced transport, in particular the Hellinger-Kantorovich (a.k.a. Wasserstein-Fisher-Rao) geometry and its particle approximation for machine learning.
The talk is mainly based on recent joint work with Alexander Mielke, Pavel Dvurechensky, and Egor Gladin.