
Commit 940068f

Introduce the Riemannian Chambolle-Pock (#40)

* Introduces a Riemannian Chambolle–Pock plan.
* Introduces all examples from the paper.
* Generalises the original (l/e)RCPA to also run with retractions, inverse retractions, and vector transports instead of only exp/log and parallel transport.

1 parent aa55fdc · commit 940068f

File tree

83 files changed: +5452 additions, −777 deletions


.github/workflows/ci.yml

Lines changed: 2 additions & 2 deletions
@@ -11,7 +11,7 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        julia-version: [1.0, 1.4]
+        julia-version: [1.0, 1.5]
         os: [ubuntu-latest, macOS-latest, windows-latest]
     steps:
       - uses: actions/checkout@v2
@@ -27,4 +27,4 @@ jobs:
           file: ./lcov.info
           name: codecov-umbrella
           fail_ci_if_error: false
-          if: ${{ matrix.julia-version == '1.4' && matrix.os =='ubuntu-latest' }}
+          if: ${{ matrix.julia-version == '1.5' && matrix.os =='ubuntu-latest' }}

.gitignore

Lines changed: 7 additions & 10 deletions
@@ -4,15 +4,12 @@
 /docs/build/
 /docs/site/
 Tutorials/.ipynb_checkpoints
-src/examples/**/*.asy
-src/examples/**/*.png
-src/examples/**/*.csv
-src/examples/**/*.tex
-src/examples/**/*.pdf
-src/examples/**/*.mp4
-src/examples/**/*.jld2
+examples/**/*.asy
+examples/**/*.png
+examples/**/*.csv
+examples/**/*.tex
+examples/**/*.pdf
+examples/**/*.mp4
+examples/**/*.jld2
 docs/src/tutorials/*.md
 .vscode
-examples/Total_Variation/S2_TV
-examples/Total_Variation/Hn_TV
-examples/Total_Variation/S1_TV

Project.toml

Lines changed: 2 additions & 2 deletions
@@ -1,7 +1,7 @@
 name = "Manopt"
 uuid = "0fc0a36d-df90-57f3-8f93-d78a9fc72bb5"
 authors = ["Ronny Bergmann <manopt@ronnybergmann.net>"]
-version = "0.2.10"
+version = "0.2.11"

 [deps]
 ColorSchemes = "35d6a980-a343-548e-a6ea-1d62b119f2f4"
@@ -22,7 +22,7 @@ Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
 ColorSchemes = "3.5.0"
 ColorTypes = "0.9.1, 0.10"
 Colors = "0.11.2, 0.12"
-Manifolds = "0.4.1"
+Manifolds = "0.4.8"
 ManifoldsBase = "0.9"
 StaticArrays = "0.12"
 julia = "1.0"

docs/make.jl

Lines changed: 8 additions & 13 deletions
@@ -18,13 +18,7 @@ for (i, tutorial) in enumerate(tutorials)
     global TutorialMenu
     sourceFile = joinpath(tutorialsInputPath, tutorial * ".jl")
     targetFile = joinpath(tutorialsOutputPath, tutorial * "md")
-    Literate.markdown(
-        sourceFile,
-        tutorialsOutputPath;
-        name=tutorial,
-        # codefence = "```julia" => "```",
-        credit=false,
-    )
+    Literate.markdown(sourceFile, tutorialsOutputPath; name=tutorial, credit=false)
     push!(TutorialMenu, menuEntries[i] => joinpath(tutorialsRelativePath, tutorial * ".md"))
 end
 makedocs(;
@@ -38,10 +32,11 @@ makedocs(;
         "Plans" => "plans/index.md",
         "Solvers" => [
             "Introduction" => "solvers/index.md",
+            "Chambolle-Pock" => "solvers/ChambollePock.md",
             "Conjugate gradient descent" => "solvers/conjugate_gradient_descent.md",
             "Cyclic Proximal Point" => "solvers/cyclic_proximal_point.md",
             "Douglas–Rachford" => "solvers/DouglasRachford.md",
-            "Gradient Descent" => "solvers/gradientDescent.md",
+            "Gradient Descent" => "solvers/gradient_descent.md",
             "Nelder–Mead" => "solvers/NelderMead.md",
             "Particle Swarm Optimization" => "solvers/particle_swarm.md",
             "Subgradient method" => "solvers/subgradient.md",
@@ -52,13 +47,13 @@ makedocs(;
         "Functions" => [
             "Introduction" => "functions/index.md",
             "Bézier curves" => "functions/bezier.md",
-            "Cost functions" => "functions/costFunctions.md",
+            "Cost functions" => "functions/costs.md",
             "Differentials" => "functions/differentials.md",
-            "Adjoint Differentials" => "functions/adjointDifferentials.md",
+            "Adjoint Differentials" => "functions/adjoint_differentials.md",
             "Gradients" => "functions/gradients.md",
-            "JacobiFields" => "functions/jacobiFields.md",
-            "Proximal Maps" => "functions/proximalMaps.md",
-            "Specific manifold functions" => "functions/manifold.md",
+            "Jacobi Fields" => "functions/Jacobi_fields.md",
+            "Proximal Maps" => "functions/proximal_maps.md",
+            "Specific Manifold Functions" => "functions/manifold.md",
         ],
         "Helpers" => [
             "Data" => "helpers/data.md",

docs/src/functions/jacobiFields.md renamed to docs/src/functions/Jacobi_fields.md

Lines changed: 1 addition & 1 deletion
@@ -26,5 +26,5 @@ The following weight functions are available

 ```@autodocs
 Modules = [Manopt]
-Pages = ["jacobiFields.jl"]
+Pages = ["Jacobi_fields.jl"]
 ```

docs/src/functions/adjointDifferentials.md renamed to docs/src/functions/adjoint_differentials.md

Lines changed: 1 addition & 1 deletion
@@ -2,5 +2,5 @@

 ```@autodocs
 Modules = [Manopt]
-Pages = ["adjointDifferentials.jl"]
+Pages = ["adjointdifferentials.jl"]
 ```
File renamed without changes.

docs/src/functions/proximalMaps.md renamed to docs/src/functions/proximal_maps.md

Lines changed: 2 additions & 2 deletions
@@ -17,9 +17,9 @@ $\displaystyle\operatorname{prox}_{\lambda\varphi}(x^\star) = x^\star,$
 i.e. a minimizer is a fixed point of the proximal map.

 This page lists all proximal maps available within Manopt. To add your own, just
-extend the `functions/proximalMaps.jl` file.
+extend the `functions/proximal_maps.jl` file.

 ```@autodocs
 Modules = [Manopt]
-Pages = ["proximalMaps.jl"]
+Pages = ["proximal_maps.jl"]
 ```
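The fixed-point property quoted above can be checked with a tiny self-contained sketch. Here `prox` is a local helper implementing the closed-form Euclidean proximal map of φ(x) = ½(x − a)², not a Manopt function:

```julia
# Proximal map of φ(x) = 0.5 * (x - a)^2 in closed form:
# prox_{λφ}(x) = (x + λ*a) / (1 + λ).  (Local helper, not the Manopt API.)
a = 2.5
prox(λ, x) = (x + λ * a) / (1 + λ)

λ = 0.7
# The minimizer a of φ is a fixed point of the proximal map …
@assert prox(λ, a) ≈ a
# … while any other point is pulled towards the minimizer.
x = 4.0
@assert abs(prox(λ, x) - a) < abs(x - a)
```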

docs/src/plans/index.md

Lines changed: 2 additions & 2 deletions
@@ -78,7 +78,7 @@ Further specific [`RecordAction`](@ref)s can be found at the specific Options.
 There is one internal helper that might be useful for your own actions, namely

 ```@docs
-record_or_eset!
+record_or_reset!
 ```

 ### [Stepsize and Linesearch](@id Stepsize)
@@ -141,7 +141,7 @@ get_subgradient

 ```@docs
 ProximalProblem
-getProximalMap
+get_proximal_map
 ```

 ### Further planned problems

docs/src/solvers/ChambollePock.md

Lines changed: 115 additions & 0 deletions
@@ -0,0 +1,115 @@
+# [The Riemannian Chambolle-Pock Algorithm](@id ChambollePockSolver)
+
+The Riemannian Chambolle–Pock algorithm is a generalization of the Chambolle–Pock algorithm[^ChambollePock2011].
+It is also known as the primal-dual hybrid gradient (PDHG) or primal-dual proximal splitting (PDPS) algorithm.
+
+It minimizes over $p\in\mathcal M$ a cost function of the form
+
+```math
+F(p) + G(\Lambda(p)),
+```
+
+where $F:\mathcal M \to \overline{\mathbb R}$, $G:\mathcal N \to \overline{\mathbb R}$, and
+$\Lambda:\mathcal M \to\mathcal N$.
+If the manifolds $\mathcal M$ or $\mathcal N$ are not Hadamard, the problem has to be considered locally,
+i.e. on geodesically convex sets $\mathcal C \subset \mathcal M$ and $\mathcal D \subset\mathcal N$
+such that $\Lambda(\mathcal C) \subset \mathcal D$.
+
+The algorithm is available in four variants: exact versus linearized (see `variant`)
+as well as with primal versus dual relaxation (see `relax`). For more details, see
+[^BergmannHerzogSilvaLouzeiroTenbrinckVidalNunez2020].
+In the following we state the exact, primal relaxed Riemannian Chambolle–Pock algorithm.
+
+Given base points $m\in\mathcal C$, $n=\Lambda(m)\in\mathcal D$,
+initial primal and dual values $p^{(0)} \in \mathcal C$, $\xi_n^{(0)} \in T_n^*\mathcal N$,
+primal and dual step sizes $\sigma_0$, $\tau_0$, a relaxation $\theta_0$,
+as well as an acceleration $\gamma$.
+
+As an initialization, perform $\bar p^{(0)} \gets p^{(0)}$.
+
+The algorithm performs the steps $k=1,\ldots$ (until a [`StoppingCriterion`](@ref) is fulfilled):
+
+1. ```math
+   \xi^{(k+1)}_n = \operatorname{prox}_{\tau_k G_n^*}\Bigl(\xi_n^{(k)} + \tau_k \bigl(\log_n \Lambda (\bar p^{(k)})\bigr)^\flat\Bigr)
+   ```
+2. ```math
+   p^{(k+1)} = \operatorname{prox}_{\sigma_k F}\biggl(\exp_{p^{(k)}}\Bigl( \operatorname{PT}_{p^{(k)}\gets m}\bigl(-\sigma_k D\Lambda(m)^*[\xi_n^{(k+1)}]\bigr)^\sharp\Bigr)\biggr)
+   ```
+3. Update
+   * ``\theta_k = (1+2\gamma\sigma_k)^{-\frac{1}{2}}``
+   * ``\sigma_{k+1} = \sigma_k\theta_k``
+   * ``\tau_{k+1} = \frac{\tau_k}{\theta_k}``
+4. ```math
+   \bar p^{(k+1)} = \exp_{p^{(k+1)}}\bigl(-\theta_k \log_{p^{(k+1)}} p^{(k)}\bigr)
+   ```
+
+Furthermore you can exchange the exponential map, the logarithmic map, and the parallel transport
+for a retraction, an inverse retraction, and a vector transport.
+
+Finally you can also update the base points $m$ and $n$ during the iterations.
+This introduces a few additional vector transports. The same holds for the case that
+$\Lambda(m^{(k)})\neq n^{(k)}$ at some point. All these cases are covered in the algorithm.
+
+```@meta
+CurrentModule = Manopt
+```
+
+```@docs
+ChambollePock
+```
+
+## Problem & Options
+
+```@docs
+PrimalDualProblem
+PrimalDualOptions
+ChambollePockOptions
+```
+
+## Useful Terms
+
+```@docs
+primal_residual
+dual_residual
+```
+
+## Debug
+
+```@docs
+DebugDualBaseIterate
+DebugDualBaseChange
+DebugPrimalBaseIterate
+DebugPrimalBaseChange
+DebugDualChange
+DebugDualIterate
+DebugDualResidual
+DebugPrimalChange
+DebugPrimalIterate
+DebugPrimalResidual
+DebugPrimalDualResidual
+```
+
+## Record
+
+```@docs
+RecordDualBaseIterate
+RecordDualBaseChange
+RecordDualChange
+RecordDualIterate
+RecordPrimalBaseIterate
+RecordPrimalBaseChange
+RecordPrimalChange
+RecordPrimalIterate
+```
+
+## Internals
+
+```@docs
+Manopt.update_prox_parameters!
+```
+
+[^ChambollePock2011]:
+    > A. Chambolle, T. Pock:
+    > _A first-order primal-dual algorithm for convex problems with applications to imaging_,
+    > Journal of Mathematical Imaging and Vision 40(1), 120–145, 2011.
+    > doi: [10.1007/s10851-010-0251-1](https://dx.doi.org/10.1007/s10851-010-0251-1)
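The four steps of the new solver documentation can be sketched in the Euclidean special case $\mathcal M = \mathcal N = \mathbb R^n$, where exp and log reduce to vector addition and subtraction and the parallel transport is the identity. The following self-contained sketch is not the Manopt implementation; `chambolle_pock`, the signal `f`, and all helpers are local names chosen for illustration. It solves a small TV-regularized denoising problem with $F(p)=\tfrac12\lVert p-f\rVert^2$, $G=\alpha\lVert\cdot\rVert_1$, and $\Lambda$ the forward-difference operator, so $\operatorname{prox}_{\tau G^*}$ is a clamp and $\operatorname{prox}_{\sigma F}$ is linear:

```julia
using LinearAlgebra

# Minimal Euclidean sketch of the four steps above (local helper names,
# not the Manopt implementation). F is 1-strongly convex, so γ = 1 and the
# acceleration of step 3 applies; σ₀τ₀‖Λ‖² ≤ 1 since ‖Λ‖² ≤ 4.
function chambolle_pock(f, α; iterations=300)
    Λ(p) = p[2:end] .- p[1:end-1]               # forward differences
    Λadj(ξ) = [-ξ; 0.0] .+ [0.0; ξ]             # its adjoint Λ^T
    σ, τ, γ = 0.4, 0.4, 1.0
    p = copy(f); pbar = copy(p); ξ = zeros(length(f) - 1)
    for k in 1:iterations
        ξ = clamp.(ξ .+ τ .* Λ(pbar), -α, α)    # 1. dual step, prox of τ G^*
        pold = p
        p = (p .- σ .* Λadj(ξ) .+ σ .* f) ./ (1 + σ)  # 2. primal step, prox of σ F
        θ = 1 / sqrt(1 + 2γ * σ)                # 3. acceleration updates
        σ, τ = σ * θ, τ / θ
        pbar = p .+ θ .* (p .- pold)            # 4. relaxation
    end
    return p
end

f = [0.0, 0.0, 0.0, 5.0, 5.0, 5.0]              # signal with one jump
p = chambolle_pock(f, 1.0)
cost(q) = 0.5 * norm(q - f)^2 + norm(diff(q), 1)
@assert cost(p) < cost(f)                       # the TV-regularized cost improved
```

Note how step 4 matches the geodesic extrapolation $\bar p^{(k+1)} = \exp_{p^{(k+1)}}(-\theta_k \log_{p^{(k+1)}} p^{(k)})$, which in the Euclidean case is exactly `p + θ*(p - pold)`.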

docs/src/solvers/index.md

Lines changed: 3 additions & 1 deletion
@@ -14,10 +14,12 @@ The following algorithms are currently available

 | Solver | File | Problem & Option |
 |--------|------|------------------|
-| [steepest Descent](@ref GradientDescentSolver) | `gradient_descent.jl` | [`GradientProblem`](@ref), [`GradientDescentOptions`](@ref) |
 | [Cyclic Proximal Point](@ref CPPSolver) | `cyclic_proximal_point.jl` | [`ProximalProblem`](@ref), [`CyclicProximalPointOptions`](@ref) |
+| [Chambolle-Pock](@ref ChambollePockSolver) | `ChambollePock.jl` | [`PrimalDualProblem`](@ref), [`ChambollePockOptions`](@ref) |
 | [Douglas–Rachford](@ref DRSolver) | `DouglasRachford.jl` | [`ProximalProblem`](@ref), [`DouglasRachfordOptions`](@ref) |
+| [Gradient Descent](@ref GradientDescentSolver) | `gradient_descent.jl` | [`GradientProblem`](@ref), [`GradientDescentOptions`](@ref) |
 | [Nelder-Mead](@ref NelderMeadSolver) | `NelderMead.jl` | [`CostProblem`](@ref), [`NelderMeadOptions`](@ref) |
+| [Particle Swarm](@ref ParticleSwarmSolver) | `particle_swarm.jl` | [`CostProblem`](@ref), [`ParticleSwarmOptions`](@ref) |
 | [Subgradient Method](@ref SubgradientSolver) | `subgradient_method.jl` | [`SubGradientProblem`](@ref), [`SubGradientMethodOptions`](@ref) |
 | [Steihaug-Toint Truncated Conjugate-Gradient Method](@ref tCG) | `truncated_conjugate_gradient_descent.jl` | [`HessianProblem`](@ref), [`TruncatedConjugateGradientOptions`](@ref) |

docs/src/solvers/truncated_conjugate_gradient_descent.md

Lines changed: 10 additions & 13 deletions
@@ -27,7 +27,7 @@ $z_0 = \operatorname{P}(r_0)$, $\delta_0 = z_0$ and $k=0$

 Repeat until a convergence criterion is reached

-1. Set $\kappa = \langle \delta_k, \operatorname{Hess}[F] (\delta_k)_ {x} \rangle_{x}$,
+1. Set $\kappa = \langle \delta_k, \operatorname{Hess}[F] (\delta_k)_{x} \rangle_{x}$,
    $\alpha =\frac{\langle r_k, z_k \rangle_{x}}{\kappa}$ and
    $\langle \eta_k, \eta_k \rangle_{x}^{* } = \langle \eta_k, \operatorname{P}(\eta_k) \rangle_{x} +
    2\alpha \langle \eta_k, \operatorname{P}(\delta_k) \rangle_{x} + {\alpha}^2
@@ -36,8 +36,8 @@ Repeat until a convergence criterion is reached
    return $\eta_{k+1} = \eta_k + \tau \delta_k$ and stop.
 3. Set $\eta_{k}^{* }= \eta_k + \alpha \delta_k$, if
    $\langle \eta_k, \eta_k \rangle_{x} + \frac{1}{2} \langle \eta_k,
-   \operatorname{Hess}[F] (\eta_k)_ {x} \rangle_{x} \leqq \langle \eta_k^{* },
-   \eta_k^{* } \rangle_{x} + \frac{1}{2} \langle \eta_k^{* },
+   \operatorname{Hess}[F] (\eta_k)_{x} \rangle_{x} \leqq \langle \eta_k^{* },
+   \eta_k^{*} \rangle_{x} + \frac{1}{2} \langle \eta_k^{* },
    \operatorname{Hess}[F] (\eta_k)_ {x} \rangle_{x}$
    set $\eta_{k+1} = \eta_k$ else set $\eta_{k+1} = \eta_{k}^{* }$.
 4. Set $r_{k+1} = r_k + \alpha \operatorname{Hess}[F] (\delta_k)_ {x}$,
@@ -51,6 +51,7 @@ Repeat until a convergence criterion is reached
 The result is given by the last computed $η_k$.

 ## Remarks
+
 The $\operatorname{P}(\cdot)$ denotes the symmetric, positive definite
 preconditioner. It is required if a randomized approach is used, i.e. using
 a random tangent vector $\eta$ as initial
@@ -63,14 +64,16 @@ default, the preconditioner is just the identity.
 To step number 2: Obtain $\tau$ from the positive root of
 $\left\lVert \eta_k + \tau \delta_k \right\rVert_{\operatorname{P}, x} = \Delta$,
 which, after conversion of the equation, becomes
+
 ````math
 \tau = \frac{-\langle \eta_k, \operatorname{P}(\delta_k) \rangle_{x} +
 \sqrt{\langle \eta_k, \operatorname{P}(\delta_k) \rangle_{x}^{2} +
 \langle \delta_k, \operatorname{P}(\delta_k) \rangle_{x} ( \Delta^2 -
 \langle \eta_k, \operatorname{P}(\eta_k) \rangle_{x})}}
 {\langle \delta_k, \operatorname{P}(\delta_k) \rangle_{x}}.
 ````
-It can occur that $\langle \delta_k, \operatorname{Hess}[F] (\delta_k)_ {x} \rangle_{x}
+
+It can occur that $\langle \delta_k, \operatorname{Hess}[F] (\delta_k)_{x} \rangle_{x}
 = \kappa \leqq 0$ at iteration $k$. In this case, the model is not strictly
 convex, and the stepsize $\alpha =\frac{\langle r_k, z_k \rangle_{x}}
 {\kappa}$ computed in step 1 does not give a reduction in the model function
@@ -84,7 +87,7 @@ line. Thus when $\kappa \leqq 0$ at iteration k, we replace $\alpha =
 The other possibility is that $\eta_{k+1}$ would lie outside the trust region at
 iteration $k$ (i.e. $\langle \eta_k, \eta_k \rangle_{x}^{* }
 \geqq {\Delta}^2$, which can be identified with the norm of $\eta_{k+1}$). In
-particular, when $\operatorname{Hess}[F] (\cdot)_ {x}$ is positive definite
+particular, when $\operatorname{Hess}[F] (\cdot)_{x}$ is positive definite
 and $\eta_{k+1}$ lies outside the trust region, the solution to the
 trust-region problem must lie on the trust-region boundary. Thus, there
 is no reason to continue with the conjugate gradient iteration, as it
@@ -107,14 +110,8 @@ TruncatedConjugateGradientOptions
 ## Additional Stopping Criteria

 ```@docs
-stopIfResidualIsReducedByPower
-```
-```@docs
-stopIfResidualIsReducedByFactor
-```
-```@docs
+StopIfResidualIsReducedByPower
+StopIfResidualIsReducedByFactor
 StopWhenTrustRegionIsExceeded
-```
-```@docs
 StopWhenCurvatureIsNegative
 ```
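The positive root for $\tau$ in the formula above can be sanity-checked numerically. A small sketch for the identity preconditioner $\operatorname{P} = I$; `boundary_step` is a local name for this sketch, not part of the Manopt API:

```julia
using LinearAlgebra

# Positive root τ of ‖η + τδ‖ = Δ for P = I, following the formula above:
# δδ·τ² + 2⟨η,δ⟩·τ + ⟨η,η⟩ − Δ² = 0, taking the positive branch.
function boundary_step(η, δ, Δ)
    ηδ = dot(η, δ)
    δδ = dot(δ, δ)
    ηη = dot(η, η)
    return (-ηδ + sqrt(ηδ^2 + δδ * (Δ^2 - ηη))) / δδ
end

η = [0.5, -0.25, 0.0]   # current iterate inside the trust region (‖η‖ < Δ)
δ = [1.0, 2.0, -1.0]    # search direction
Δ = 2.0                 # trust-region radius
τ = boundary_step(η, δ, Δ)
@assert τ > 0
# η + τδ lands exactly on the trust-region boundary:
@assert isapprox(norm(η .+ τ .* δ), Δ; atol=1e-8)
```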
