
Radon Sobolev Variational Auto-Encoders
Turinici, Gabriel (2021). Radon Sobolev Variational Auto-Encoders. Neural Networks, 141, pp. 294-305. DOI: 10.1016/j.neunet.2021.04.018
Type: Article accepted for publication or published
Date: 2021
Journal name: Neural Networks
Volume: 141
Publisher: Elsevier
Pages: 294-305
Publication identifier: DOI 10.1016/j.neunet.2021.04.018
Abstract (EN)
The quality of generative models (such as Generative Adversarial Networks and Variational Auto-Encoders) depends heavily on the choice of a good probability distance. However, some popular metrics, such as the Wasserstein and Sliced Wasserstein distances, the Jensen–Shannon divergence, and the Kullback–Leibler divergence, lack convenient properties such as (geodesic) convexity and fast evaluation. To address these shortcomings, we introduce a class of distances that have built-in convexity. We investigate their relationship with some known paradigms (sliced distances, also called Radon distances; reproducing kernel Hilbert spaces; energy distances). The distances are shown to possess fast implementations and are incorporated into an adapted Variational Auto-Encoder, termed the Radon–Sobolev Variational Auto-Encoder (RS-VAE), which produces high-quality results on standard generative datasets.
Keywords
Variational Auto-Encoder; Generative model; Sobolev spaces; Radon Sobolev Variational Auto-Encoder
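As context for the sliced-distance paradigm mentioned in the abstract, the sketch below shows a generic Monte-Carlo sliced distance between two empirical samples: the point clouds are projected onto random directions (the Radon/slicing step) and the resulting one-dimensional distributions are compared through their order statistics, then used as a latent-matching term of a VAE-style loss. This is a minimal illustration of the general idea only, not the paper's specific Radon–Sobolev distance; the function name sliced_distance and all parameter choices are hypothetical.

Python sketch (illustrative only):

    import numpy as np

    def sliced_distance(x, y, n_projections=50, rng=None):
        """Monte-Carlo estimate of a generic sliced distance between samples.

        x, y : arrays of shape (n_samples, dim); assumed to have the same
        number of samples. Not the paper's exact Radon-Sobolev distance.
        """
        rng = np.random.default_rng(rng)
        dim = x.shape[1]
        # Random unit directions on the sphere (the slicing directions)
        theta = rng.normal(size=(n_projections, dim))
        theta /= np.linalg.norm(theta, axis=1, keepdims=True)
        # Project both clouds onto each direction: shape (n_samples, n_projections)
        x_proj = x @ theta.T
        y_proj = y @ theta.T
        # Compare the projected 1-D distributions through their order statistics
        x_sorted = np.sort(x_proj, axis=0)
        y_sorted = np.sort(y_proj, axis=0)
        return np.mean((x_sorted - y_sorted) ** 2)

    # Example: distance between encoded latent codes and samples from a
    # standard normal prior, as would appear in the latent-matching term
    # of a VAE-style generative model (hypothetical data).
    latents = np.random.default_rng(0).normal(loc=0.5, size=(256, 8))
    prior = np.random.default_rng(1).normal(size=(256, 8))
    print(sliced_distance(latents, prior, n_projections=100, rng=2))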
Related publications
Items related by title and author.
- Turinici, Gabriel (2019). Working paper.
- Brugière, Pierre; Turinici, Gabriel (2022). Working paper.
- Bonforte, Matteo; Grillo, Gabriele (2007). Article accepted for publication or published.
- Nardi, Giacomo; Peyré, Gabriel; Vialard, François-Xavier (2016). Article accepted for publication or published.
- Bergounioux, Maïtine; Peyré, Gabriel; Schnörr, Christoph; Caillau, Jean-Baptiste; Haberkorn, Thomas (2017-01). Book.