
On a Neural Implementation of Brenier’s Polar Factorization


In 1991, Brenier proved a theorem that generalizes the polar decomposition of square matrices (a product of a positive semidefinite matrix and a unitary matrix) to any vector field $F:\mathbb{R}^d\rightarrow\mathbb{R}^d$. The theorem, known as the polar factorization theorem, states that any field $F$ can be recovered as the composition of the gradient of a convex function $u$ with a measure-preserving map $M$, namely $F=\nabla u \circ M$. We propose a practical implementation of this far-reaching theoretical result, and explore possible uses within machine learning. The theorem is closely related to optimal transport (OT) theory, and we borrow from recent advances in the field of neural optimal transport to parameterize the potential $u$ as an input convex neural network. The map $M$ can either be evaluated pointwise using $u^*$, the convex conjugate of $u$, through the identity $M=\nabla u^* \circ F$, or learned as an auxiliary network. Because $M$ is, in general, not injective, we consider the additional task of estimating the ill-posed inverse map that can approximate the pre-image measure $M^{-1}$ using a stochastic generator. We illustrate possible applications of Brenier's polar factorization to non-convex optimization problems, as well as sampling of densities that are not log-concave.
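To make the two ingredients above concrete, here is a minimal sketch (not the authors' released code): a small input convex neural network (ICNN) parameterizing the convex potential $u$, and a pointwise evaluation of $M=\nabla u^* \circ F$ obtained by solving the conjugate problem $u^*(y)=\max_x \langle x,y\rangle - u(x)$ with a few gradient-ascent steps. The function names (`init_icnn`, `icnn_u`, `grad_u_star`, `polar_M`), the JAX framework, and all hyperparameters are illustrative assumptions.

```python
# Illustrative sketch only: a tiny ICNN potential u and pointwise evaluation of
# M = grad(u*) o F via the convex conjugate. Not the paper's implementation.
import jax
import jax.numpy as jnp


def init_icnn(key, dim, hidden=64, n_layers=2):
    """Random parameters for a small input convex neural network (ICNN)."""
    sizes = [hidden] * n_layers + [1]
    keys = jax.random.split(key, 2 * len(sizes))
    params = {"A": [], "W": [], "b": []}
    prev = None
    for i, out in enumerate(sizes):
        # A: unconstrained skip connections from the input x to every layer
        params["A"].append(0.1 * jax.random.normal(keys[2 * i], (dim, out)))
        params["b"].append(jnp.zeros(out))
        if prev is not None:
            # W: hidden-to-hidden weights, kept non-negative (softplus below),
            # which is what makes u convex in its input
            params["W"].append(0.1 * jax.random.normal(keys[2 * i + 1], (prev, out)))
        prev = out
    return params


def icnn_u(params, x):
    """Convex potential u(x): non-negative W and convex, non-decreasing activations."""
    z = jax.nn.softplus(x @ params["A"][0] + params["b"][0])
    for A, W, b in zip(params["A"][1:], params["W"], params["b"][1:]):
        z = jax.nn.softplus(x @ A + z @ jax.nn.softplus(W) + b)
    return z.squeeze()


def grad_u_star(params, y, n_steps=200, lr=0.1):
    """grad u*(y) = argmax_x <x, y> - u(x), computed by gradient ascent."""
    obj = lambda x: jnp.dot(x, y) - icnn_u(params, x)
    x = jnp.zeros_like(y)
    for _ in range(n_steps):
        x = x + lr * jax.grad(obj)(x)
    return x


def polar_M(params, F, x):
    """Pointwise evaluation of M(x) = grad u*(F(x)), so that F = grad u o M."""
    return grad_u_star(params, F(x))
```

In practice the potential $u$ would first be trained, for instance with a neural OT objective, so that $\nabla u$ transports the reference measure onto the image measure $F_\sharp\mu$; the sketch above only shows how, once such a $u$ is available, the measure-preserving component $M$ can be evaluated pointwise through the conjugate.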
