# Optimal Transport in One Dimension

In this article, we briefly explore the optimal transport problem on the real line along with some examples.

## Linear Cost Example

For this example, consider the cost function ${\displaystyle c(x,y)=L(x-y)}$, where ${\displaystyle L:\mathbb {R} ^{d}\rightarrow \mathbb {R} }$ is a given linear map. If ${\displaystyle \gamma }$ is any transport plan, then by direct computation (using the linearity of ${\displaystyle L}$) we see that

$$\int L(x-y)\,d\gamma =\int L(x)\,d\gamma -\int L(y)\,d\gamma =\int L(x)\,d\mu -\int L(y)\,d\nu ,$$

which shows that the cost depends only on the marginals ${\displaystyle \mu }$ and ${\displaystyle \nu }$ of ${\displaystyle \gamma }$ (where ${\displaystyle \mu }$ and ${\displaystyle \nu }$ are compactly supported probability measures). In fact, for such linear costs every transport plan, and every transport map, is optimal.
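This marginal-only dependence can be verified numerically with discrete measures: any two couplings sharing the same marginals give the same linear cost. The sketch below is illustrative only (the points, weights, and the particular linear map ${\displaystyle L(z)=2z}$ are assumptions, not from the text); it compares the product coupling with a perturbed coupling having the same marginals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete marginals: mu on points x with weights a, nu on points y with weights b.
x = rng.normal(size=4)
y = rng.normal(size=3)
a = np.full(4, 1 / 4)
b = np.full(3, 1 / 3)

L = lambda z: 2.0 * z  # an (assumed) linear map L : R -> R

# Two different couplings with identical marginals:
gamma1 = np.outer(a, b)      # independent (product) coupling
gamma2 = gamma1.copy()       # perturb while preserving row and column sums
eps = 0.05
gamma2[[0, 1], [0, 1]] += eps
gamma2[[0, 1], [1, 0]] -= eps

diff = x[:, None] - y[None, :]          # matrix of x_i - y_j
cost1 = np.sum(gamma1 * L(diff))
cost2 = np.sum(gamma2 * L(diff))

# Both equal the marginal expression  int L(x) dmu - int L(y) dnu:
marginal_cost = np.sum(a * L(x)) - np.sum(b * L(y))
print(cost1, cost2, marginal_cost)  # all three agree
```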

## Distance Cost Example

Consider the cost function ${\displaystyle c(x,y)=|x-y|}$ along with probability measures ${\displaystyle \mu }$ and ${\displaystyle \nu }$ on ${\displaystyle \mathbb {R} }$, and suppose that ${\displaystyle \sup \operatorname {spt} (\mu )\leq \inf \operatorname {spt} (\nu )}$. Then for any ${\displaystyle (x,y)\in \operatorname {spt} (\mu )\times \operatorname {spt} (\nu )}$ we have ${\displaystyle x\leq y}$, so ${\displaystyle c(x,y)=y-x}$, which immediately puts us back in the linear-cost setting; hence every transport map/plan is again optimal for such costs.
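This can again be checked numerically: with ${\displaystyle \mu }$ concentrated on ${\displaystyle [0,1]}$ and ${\displaystyle \nu }$ on ${\displaystyle [2,3]}$, every relevant pair satisfies ${\displaystyle x\leq y}$, so every coupling has the same cost. A minimal sketch with assumed sample points and uniform weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# mu lives on [0, 1], nu on [2, 3]: every x <= every y on the supports,
# so |x - y| = y - x behaves like a linear cost.
x = rng.uniform(0.0, 1.0, size=5)
y = rng.uniform(2.0, 3.0, size=5)
a = np.full(5, 0.2)  # uniform weights for mu
b = np.full(5, 0.2)  # uniform weights for nu

cost = np.abs(x[:, None] - y[None, :])

cost_product = np.sum(np.outer(a, b) * cost)  # independent coupling
cost_paired = np.sum(a * np.abs(x - y))       # pair x_i with y_i (equal weights)

# Both equal  int y dnu - int x dmu:
linear_value = np.sum(b * y) - np.sum(a * x)
print(cost_product, cost_paired, linear_value)
```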

## Book Shifting Example

Consider the cost function ${\displaystyle c(x,y)=|x-y|}$ along with ${\displaystyle \mu ={\frac {1}{2}}\lambda _{[0,2]}}$ and ${\displaystyle \nu ={\frac {1}{2}}\lambda _{[1,3]}}$ (where ${\displaystyle \lambda }$ is the one-dimensional Lebesgue measure). A (monotone) transport map that rearranges ${\displaystyle \mu }$ to look like ${\displaystyle \nu }$ is given by ${\displaystyle T_{0}(x)=x+1}$, and its corresponding cost is

$$M(T_{0})=\int |T_{0}(x)-x|\,d\mu =1.$$

Furthermore, notice that the piecewise map ${\displaystyle T_{1}}$ given by ${\displaystyle T_{1}(x)=x+2}$ (for ${\displaystyle x\leq 1}$) and ${\displaystyle T_{1}(x)=x}$ (for ${\displaystyle x>1}$) satisfies ${\displaystyle T_{1}\#\mu =\nu }$, i.e. ${\displaystyle T_{1}}$ is a transport map from ${\displaystyle \mu }$ to ${\displaystyle \nu }$; moreover, the corresponding cost is

$$M(T_{1})=\int |T_{1}(x)-x|\,d\mu ={\frac {1}{2}}\int _{0}^{1}2\,dx=1,$$

and so we conclude that ${\displaystyle T_{1}}$ is indeed optimal as well.
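Both costs can be checked by direct numerical integration against ${\displaystyle \mu ={\frac {1}{2}}\lambda _{[0,2]}}$: since ${\displaystyle {\frac {1}{2}}\int _{0}^{2}f(x)\,dx}$ is just the average of ${\displaystyle f}$ over ${\displaystyle [0,2]}$, a Riemann-sum mean suffices (a quick sketch; the grid size is an arbitrary choice).

```python
import numpy as np

# mu = (1/2) * Lebesgue measure on [0, 2]; use a fine uniform grid.
xs = np.linspace(0.0, 2.0, 200_001)

T0 = lambda x: x + 1.0                         # shift the whole block right by 1
T1 = lambda x: np.where(x <= 1.0, x + 2.0, x)  # move only the left half to [2, 3]

# M(T) = int |T(x) - x| dmu = (1/2) int_0^2 |T(x) - x| dx = average over [0, 2]
M0 = np.mean(np.abs(T0(xs) - xs))
M1 = np.mean(np.abs(T1(xs) - xs))
print(M0, M1)  # both approximately 1
```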

Theorem: Let ${\displaystyle \mu ,\nu }$ be probability measures on ${\displaystyle \mathbb {R} }$ with cumulative distribution functions (CDFs) ${\displaystyle F}$ and ${\displaystyle G}$, respectively, and let ${\displaystyle \pi }$ be the probability measure on ${\displaystyle \mathbb {R} ^{2}}$ with CDF ${\displaystyle H(x,y)=\min(F(x),G(y))}$. Then ${\displaystyle \pi \in \Gamma (\mu ,\nu )}$, and ${\displaystyle \pi }$ is optimal (in the Kantorovich problem setting) between ${\displaystyle \mu }$ and ${\displaystyle \nu }$ for the quadratic cost function ${\displaystyle c(x,y)=|x-y|^{2}}$; the corresponding optimal cost is

$$T_{2}(\mu ,\nu )=\int _{0}^{1}|F^{-1}(t)-G^{-1}(t)|^{2}\,dt,$$

where ${\displaystyle F^{-1}}$ and ${\displaystyle G^{-1}}$ are the pseudo-inverses (quantile functions) of the respective CDFs.
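For the book-shifting measures above, ${\displaystyle F^{-1}(t)=2t}$ and ${\displaystyle G^{-1}(t)=1+2t}$, so the quantile formula gives ${\displaystyle T_{2}(\mu ,\nu )=\int _{0}^{1}1\,dt=1}$. A minimal numerical sketch of this computation:

```python
import numpy as np

# Quantile functions of mu = (1/2)lambda_[0,2] and nu = (1/2)lambda_[1,3]
F_inv = lambda t: 2.0 * t
G_inv = lambda t: 1.0 + 2.0 * t

ts = np.linspace(0.0, 1.0, 100_001)
T2 = np.mean((F_inv(ts) - G_inv(ts)) ** 2)  # approximates int_0^1 |F^-1 - G^-1|^2 dt
print(T2)  # 1.0: the quantiles differ by exactly 1 at every t
```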

### Ideas and Remarks for the Proof

Note that the proof, following Cédric Villani, gives this result in arbitrary dimensions. Below is a rough outline of the proof; the full details can be found in "Topics in Optimal Transportation" (C. Villani, Graduate Studies in Mathematics 58, American Mathematical Society, 2003). Moreover, the measure ${\displaystyle \pi }$ constructed in the theorem is in fact optimal whenever the cost function ${\displaystyle c(x,y)}$ is of the form ${\displaystyle c(x-y)}$, where ${\displaystyle c}$ is a convex, nonnegative, symmetric function on ${\displaystyle \mathbb {R} }$.

One of the first major steps in proving this theorem is showing that ${\displaystyle \operatorname {spt} (\pi )\subset \{(x,y)\in \mathbb {R} ^{2}:F(x^{-})\leq G(y){\text{ and }}G(y^{-})\leq F(x)\}}$ by considering specific cases. Upon showing this, we may conclude that ${\displaystyle \pi }$ is supported in a monotone subset of ${\displaystyle \mathbb {R} ^{2}}$, and hence also supported in the subdifferential of some lower semi-continuous convex function. From here, the Knott–Smith optimality criterion (Villani, p. 66) establishes that ${\displaystyle \pi }$ is an optimal transference plan. Then, upon showing that ${\displaystyle \pi =(F^{-1}\times G^{-1})\#\lambda _{[0,1]}}$, we see that for any nonnegative measurable function ${\displaystyle u}$ on ${\displaystyle \mathbb {R} ^{2}}$,

$$\int _{\mathbb {R} ^{2}}u(x,y)\,d\pi (x,y)=\int _{0}^{1}u(F^{-1}(t),G^{-1}(t))\,dt.$$

Applying this with ${\displaystyle u(x,y)=|x-y|^{2}}$ immediately yields the cost ${\displaystyle T_{2}(\mu ,\nu )}$ and completes the proof.
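In the discrete case the plan ${\displaystyle \pi }$ pairs quantiles monotonically; for empirical measures this means pairing sorted samples. The brute-force check below (an assumed toy setup, not from the text) confirms that the monotone pairing attains the minimum quadratic cost over all ${\displaystyle n!}$ pairings of two small samples.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(2)
n = 6
x = np.sort(rng.normal(size=n))           # empirical mu, sorted
y = np.sort(rng.normal(loc=2.0, size=n))  # empirical nu, sorted

# Monotone (quantile) pairing: k-th smallest x with k-th smallest y,
# the discrete analogue of (F^-1 x G^-1) # lambda_[0,1].
monotone_cost = np.mean((x - y) ** 2)

# Exhaustive search over all n! pairings for the quadratic cost:
best_cost = min(np.mean((x - y[list(p)]) ** 2) for p in permutations(range(n)))

print(monotone_cost, best_cost)  # the monotone pairing attains the minimum
```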