PDMO.jl is a powerful Julia framework for primal-dual multiblock optimization, built for rapid prototyping and high-performance computing.
- Solve Complex Problems: Model and solve problems with multiple variable blocks and linear coupling constraints.
- Highly Customizable: An open-source toolkit that is easy to adapt for your applications and specific algorithms.
- Accelerate Research: Benchmark your methods against classic and state-of-the-art solvers.
PDMO.jl presents a unified framework for formulating and solving a `MultiblockProblem` of the form

$$\begin{aligned} \min_{x_1, \cdots, x_n} \quad & \sum_{j=1}^{n} \left( f_j(x_j) + g_j(x_j) \right) \\ \text{s.t.} \quad & \sum_{j=1}^{n} \mathbf{A}_{i,j} x_j = b_i, \quad i \in \{1, \cdots, m\}, \end{aligned}$$

where the problem variables and data are as follows:
- For each $j \in \{1,\cdots,n\}$, a `BlockVariable` $x_j$ represents a numeric array (i.e., scalar, vector, matrix, etc.) and is associated with two objective functions:
    - each $f_j$ is differentiable, and $f_j(\cdot)$ and $\nabla f_j(\cdot)$ are available;
    - each $g_j$ is proximable, and $g_j(\cdot)$ and $\text{prox}_{\gamma g_j}(\cdot)$ are available (the proximal operator is recalled below this list).
- For each $i \in \{1,\cdots,m\}$, a `BlockConstraint` is defined by linear operators and a right-hand side array:
    - the linear operator $\mathbf{A}_{i,j}$ is non-zero if and only if constraint $i$ involves block $x_j$;
    - the adjoint operator of $\mathbf{A}_{i,j}$ is available;
    - the right-hand side $b_i$ can be a numeric array of any shape.
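Here, $\text{prox}_{\gamma g}$ denotes the standard proximal operator of a function $g$ with parameter $\gamma > 0$:

$$\text{prox}_{\gamma g}(v) = \arg\min_{x} \left\{ g(x) + \frac{1}{2\gamma} \|x - v\|^2 \right\}.$$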
PDMO.jl provides various algorithms to solve problems of the above form.
- Alternating Direction Method of Multipliers (ADMM)
    - Graph-based bipartization methods automatically generate ADMM-ready reformulations of `MultiblockProblem`.
    - Various ADMM variants are available:
        - Original ADMM
        - Doubly linearized ADMM
        - Adaptive linearized ADMM
    - Various algorithmic components can be selected:
        - Penalty adapters, e.g., Residual Balancing, Spectral Radius Approximation
        - Accelerators, e.g., Halpern (with or without restart), Filtered Anderson
- Adaptive Primal-Dual Method (AdaPDM)
    - A suite of efficient and adaptive methods for problems with simpler coupling (a concrete instance is given below this list): $$\begin{aligned} \min_{\mathbf{x}} \quad & \sum_{j=1}^{n-1} \left( f_j(x_j) + g_j(x_j) \right) + g_n(\mathbf{A}_{1,1}x_1 + \cdots + \mathbf{A}_{1,n-1}x_{n-1}) \end{aligned}$$
    - Various methods can be selected:
        - Original Condat-Vũ method (Condat 2013; Vũ 2013)
        - Adaptive Primal-Dual Method & Plus (Latafat et al. 2024)
        - Malitsky-Pock method (Malitsky and Pock 2018)
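As a concrete instance of the AdaPDM template (our illustration, not an example from the package), total-variation denoising $\min_{x_1} \tfrac{1}{2}\|x_1 - c\|_2^2 + \lambda \|D x_1\|_1$ fits the form above with $n = 2$, $f_1(x_1) = \tfrac{1}{2}\|x_1 - c\|_2^2$, $g_1 \equiv 0$, $g_2 = \lambda \|\cdot\|_1$, and $\mathbf{A}_{1,1} = D$ a finite-difference operator.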
- 🧱 Unified Modeling: A versatile interface for structured problems.
- 🔄 Automatic Decomposition: Intelligently analyzes and reformulates problems for supported algorithms.
- 🧩 Extensible by Design: Easily add custom functions, constraints, and algorithms.
- 📊 Modular Solvers: A rich library of classic and modern algorithms.
- ⚡ Non-Convex Ready: Equipped with features to tackle non-convexity.
Before the official release, we recommend the following steps to download and use PDMO.jl.
Download the package and navigate to the project folder:

```bash
cd PDMO
```
For enhanced performance, you can optionally use linear solvers from HSL:
- Obtain the HSL library from https://www.hsl.rl.ac.uk/
- Set up your `HSL_jll` directory structure
- Edit `warmup.jl` and update the HSL path
Run the setup script:

```bash
julia warmup.jl
```
This will set up all required dependencies and configure HSL if available.
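Alternatively, if you prefer to manage the environment yourself, the following is a minimal sketch assuming the repository ships a standard Project.toml (`warmup.jl` remains the recommended entry point):

```julia
# Assumption: the repository root contains a standard Project.toml.
using Pkg
Pkg.activate(".")   # activate the PDMO project environment
Pkg.instantiate()   # download and precompile all declared dependencies
```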
After successful setup:

```julia
using PDMO
```
We use the Dual Square Root LASSO as a beginning example:

$$\begin{aligned} \min_{x,\, z} \quad & \langle b, x \rangle \\ \text{s.t.} \quad & A x - z = 0, \quad \|x\|_2 \le 1, \quad \|z\|_\infty \le \lambda, \end{aligned}$$

where $A$ and $b$ are given problem data and $\lambda > 0$ is a regularization parameter.
To begin with, load PDMO.jl and other necessary packages.

```julia
using PDMO
using LinearAlgebra
using SparseArrays
using Random
```
Next, generate or load your own problem data. We use synthetic data here.

```julia
Random.seed!(1234)   # fix the random seed so the synthetic data is reproducible
numberRows = 10
numberColumns = 20
A = sparse(randn(numberRows, numberColumns))
b = randn(numberColumns)
lambda = 1.0
```
Then we can generate a `MultiblockProblem` for the Dual Square Root LASSO problem.

```julia
mbp = MultiblockProblem()

# add x block
block_x = BlockVariable()
block_x.f = AffineFunction(b, 0.0)        # f_1(x) = <b, x>
block_x.g = IndicatorBallL2(1.0)          # g_1(x) = indicator of the unit L2 ball
block_x.val = zeros(numberColumns)        # initial value
xID = addBlockVariable!(mbp, block_x)     # add x block to mbp; an ID is assigned

# add z block
block_z = BlockVariable()                 # f_2(z) = Zero() by default
block_z.g = IndicatorBox(-lambda * ones(numberRows),
                          lambda * ones(numberRows))  # g_2(z) = indicator of the box [-lambda, lambda]
block_z.val = zeros(numberRows)           # initial value
zID = addBlockVariable!(mbp, block_z)     # add z block to mbp; an ID is assigned

# add constraint: Ax - z = 0
constr = BlockConstraint()
addBlockMappingToConstraint!(constr, xID, LinearMappingMatrix(A))      # mapping of x: A * x
addBlockMappingToConstraint!(constr, zID, LinearMappingIdentity(-1.0)) # mapping of z: -z
constr.rhs = zeros(numberRows)            # specify RHS
addBlockConstraint!(mbp, constr)          # add constraint to mbp
```
Next, we can run different variants of ADMM:

```julia
# run original ADMM with a residual-balancing penalty adapter and Anderson acceleration
param = ADMMParam()
param.solver = OriginalADMMSubproblemSolver()
param.adapter = RBAdapter(testRatio=10.0, adapterRatio=2.0)
param.accelerator = AndersonAccelerator()
result = runBipartiteADMM(mbp, param)

# run Doubly Linearized ADMM
param = ADMMParam()
param.solver = DoublyLinearizedSolver()
result = runBipartiteADMM(mbp, param)

# run Adaptive Linearized ADMM
param = ADMMParam()
param.solver = AdaptiveLinearizedSolver()
result = runBipartiteADMM(mbp, param)
```
or different adaptive primal-dual methods:

```julia
# run AdaPDM
paramAdaPDM = AdaPDMParam(mbp)
result = runAdaPDM(mbp, paramAdaPDM)

# run AdaPDMPlus
paramAdaPDMPlus = AdaPDMPlusParam(mbp)
result = runAdaPDM(mbp, paramAdaPDMPlus)

# run Malitsky-Pock
paramMalitskyPock = MalitskyPockParam(mbp)
result = runAdaPDM(mbp, paramMalitskyPock)

# run Condat-Vu
paramCondatVu = CondatVuParam(mbp)
result = runAdaPDM(mbp, paramCondatVu)
```
Upon termination of the selected algorithm, the primal solution and iteration information can be accessed through `result.solution` and `result.iterationInfo`, respectively.
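The exact layout of these containers is best checked in the full documentation; as a rough illustration (indexing `result.solution` by block IDs below is our assumption, not confirmed API), one might inspect the output like this:

```julia
# Hypothetical sketch: we assume result.solution can be indexed by the block
# IDs returned from addBlockVariable! above -- verify against the API docs.
x_sol = result.solution[xID]
z_sol = result.solution[zID]
println("primal objective <b, x>      = ", dot(b, x_sol))
println("constraint residual ||Ax-z|| = ", norm(A * x_sol - z_sol))
```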
In addition to a set of built-in functions whose gradient or proximal oracles have been implemented, PDMO.jl supports user-defined smooth and proximable functions. Consider the function

$$h(x) = x_1 + |x_2| + x_3^4, \quad x \in \mathbb{R}^3,$$

which can be expressed as the sum of a smooth function $f(x) = x_1 + x_3^4$ and a proximable function $g(x) = |x_2|$. In PDMO.jl, this block can be constructed as follows:
```julia
block = BlockVariable()
block.f = UserDefinedSmoothFunction(
    x -> x[1] + x[3]^4,                           # f
    x -> [1.0, 0.0, 4 * x[3]^3])                  # ∇f
block.g = UserDefinedProximalFunction(
    x -> abs(x[2]),                               # g
    (x, gamma) -> [                               # prox_{gamma g}
        x[1],
        sign(x[2]) * max(abs(x[2]) - gamma, 0.0),
        x[3]])
block.val = zeros(3)                              # initial value
```
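Before plugging user-defined oracles into a solver, it is worth sanity-checking them numerically. The following is a minimal, self-contained sketch in plain Julia (it uses no PDMO.jl API):

```julia
# Finite-difference check of the gradient oracle at a random point.
f(x)  = x[1] + x[3]^4
∇f(x) = [1.0, 0.0, 4 * x[3]^3]

x0 = randn(3)
h  = 1e-6
for i in 1:3
    e = zeros(3); e[i] = h
    fd = (f(x0 + e) - f(x0)) / h                  # one-sided finite difference
    @assert isapprox(fd, ∇f(x0)[i]; atol = 1e-3)
end

# The prox of gamma * |x_2| must soft-threshold the second coordinate only.
prox(x, gamma) = [x[1], sign(x[2]) * max(abs(x[2]) - gamma, 0.0), x[3]]
@assert prox([1.0, 0.5, -2.0], 1.0) == [1.0, 0.0, -2.0]  # |0.5| - 1.0 <= 0 maps to 0
```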
For comprehensive documentation, examples, and API references, visit our full documentation.
Upcoming features include:

- 🔍 Classification and detection of pathological problems
- 🚀 Advanced acceleration techniques for first-order methods
- 🤖 AI coding assistant for user-defined functions
- 🛣️ Parallel, distributed, and GPU support
PDMO.jl is open source and welcomes contributions! Please contact info@mindopt.tech for more details.
PDMO.jl is licensed under the MIT License.