The usual formulation of constrained optimization is
```math
\tag{P}
\begin{aligned}
\text{minimize}\qquad &f(x) \\
\text{subject to}\qquad &g_i(x) \le 0,\ i=1,\dots,I, \\
&h_j(x) = 0,\ j=1,\dots,J.
\end{aligned}
```

Functions ``g_i`` generate inequality constraints, while functions ``h_j`` generate equality constraints. Box constraints such as ``x\in[0,1]`` are the simplest case of the former. This optimization problem is also called the primal formulation. It is closely connected with the Lagrangian
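In its standard form, the Lagrangian reads

```math
L(x;\lambda,\mu) = f(x) + \sum_{i=1}^I \lambda_i g_i(x) + \sum_{j=1}^J \mu_j h_j(x),
```

where ``\lambda`` and ``\mu`` are the multipliers associated with the inequality and equality constraints, respectively.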
!!! info "Linear programming:"
    The linear program

    ```math
    \begin{aligned}
    \text{minimize}\qquad &c^\top x \\
    \text{subject to}\qquad &Ax = b,\ x\ge 0
    \end{aligned}
    ```

    has the dual problem

    ```math
    \begin{aligned}
    \text{maximize}\qquad &b^\top \mu \\
    \text{subject to}\qquad &A^\top \mu \le c.
    \end{aligned}
    ```

We can observe several things:

1. Primal and dual problems switch minimization and maximization.
2. Primal and dual problems switch variables and constraints.

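These two observations can be checked numerically on a tiny linear program. The sketch below assumes the standard form ``\min c^\top x`` subject to ``Ax=b``, ``x\ge 0`` with dual ``\max b^\top \mu`` subject to ``A^\top \mu\le c``; the data and the hand-computed optima are illustrative:

```python
import numpy as np

# Primal: minimize c^T x  subject to  A x = b, x >= 0  (1 constraint, 2 variables).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])

x_opt = np.array([1.0, 0.0])   # primal optimum: all weight on the cheaper variable
mu_opt = np.array([1.0])       # dual optimum: largest mu satisfying A^T mu <= c

# The dual switches min <-> max and has 2 constraints, 1 variable.
assert np.allclose(A @ x_opt, b) and (x_opt >= 0).all()   # primal feasibility
assert (A.T @ mu_opt <= c + 1e-12).all()                  # dual feasibility
print(c @ x_opt, b @ mu_opt)   # equal objective values: strong duality holds
```

Both objective values equal `1.0`, illustrating that the optimal primal and dual values coincide for linear programs.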
For unconstrained optimization, we showed that each local minimum satisfies the optimality condition ``\nabla f(x)=0``. This condition does not have to hold for constrained optimization, where the optimality conditions take a more complex form.

Let ``f``, ``g_i`` and ``h_j`` be differentiable functions and let a constraint qualification hold. If ``x`` is a local minimum of the primal problem (P), then there are ``\lambda\ge 0`` and ``\mu`` such that
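In their standard form, the resulting Karush–Kuhn–Tucker (KKT) conditions read

```math
\begin{aligned}
\nabla_x L(x;\lambda,\mu) &= 0, \\
\lambda_i g_i(x) &= 0,\ i=1,\dots,I,
\end{aligned}
```

that is, the Lagrangian is stationary in ``x`` and each inequality constraint is either active or has a zero multiplier (complementary slackness).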
If ``f`` and ``g`` are convex and ``h`` is linear, then every stationary point is a global minimum of (P).
When there are no constraints, the Lagrangian ``L`` reduces to the objective ``f``, and the optimality conditions are equivalent. Therefore, the optimality conditions for constrained optimization generalize those for unconstrained optimization.
## Numerical method
We present only the simplest method for constrained optimization. Projected gradients

```math
\begin{aligned}
y^{k+1} &= x^k - \alpha^k\nabla f(x^k), \\
x^{k+1} &= P_X(y^{k+1})
\end{aligned}
```

compute the gradient as for standard gradient descent, and then project the point onto the feasible set. Since the projection needs to be simple to calculate, projected gradients are used for simple ``X`` such as boxes or balls.
The implementation of projected gradients is the same as that of gradient descent, but it needs the projection function `P` as an additional input. For plotting purposes, it returns both ``x`` and ``y``.
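The scheme above can be sketched as follows. This is a minimal Python sketch, not the course's implementation; the function names, the step size, and the box projection via clipping are illustrative:

```python
import numpy as np

def projected_gradients(grad, P, x0, alpha=0.1, max_iter=100):
    """Projected gradient descent: gradient step, then projection onto X.

    Returns the histories of x (feasible iterates) and y (pre-projection points).
    """
    xs, ys = [np.asarray(x0, dtype=float)], []
    for _ in range(max_iter):
        y = xs[-1] - alpha * grad(xs[-1])   # standard gradient step
        ys.append(y)
        xs.append(P(y))                      # project back onto the feasible set X
    return xs, ys

# Example: minimize f(x) = (x - 2)^2 over the box X = [0, 1].
grad = lambda x: 2 * (x - 2)
P_box = lambda y: np.clip(y, 0.0, 1.0)      # projection onto the box [0, 1]
xs, ys = projected_gradients(grad, P_box, x0=0.5)
# The iterates converge to the boundary point x = 1, the constrained minimum.
```

Note that the unconstrained minimum ``x=2`` is infeasible, so the method settles at the boundary of the box, exactly where the projection keeps clipping the gradient step.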