In the previous post, I briefly outlined how to solve a constrained nonlinear problem using unconstrained nonlinear programming; in particular, using exterior penalty methods. In this post, I’ll present an alternative approach called interior penalty methods (or barrier function methods).
Let’s revisit the optimisation problem introduced in the previous post. Suppose you want to minimise the function \(f(x) = x^2\) subject to the constraint \(x \geq 1\). Subject to the constraint, the function attains its minimum at \(x = 1\); without it, the minimum (clearly) shifts to \(x = 0\). Recall that an unconstrained optimisation method will find the latter as the solution, since we can’t enforce the feasible region on the unconstrained optimisation algorithm.
However, we can trick the algorithm into converging on the desired solution using the so-called interior penalty function [1]. The function’s aim is to penalise the unconstrained optimisation method whenever it tries to leave (cross the boundary of) the feasible region of the problem. Hence, with this approach, we must start searching for the optimal solution from within the feasible region. This is the direct opposite of the exterior penalty methods, where we start outside the feasible region and slowly converge on the minimum from the outside.
Let’s create an interior penalty function for our example:
\[\begin{equation} F(x,\rho^k) = x^2 - \rho^k\ln(x-1) \end{equation}\]The first derivative of \(F\) with respect to \(x\) looks as follows:
\[\begin{equation} \frac{\partial F}{\partial x} = 2x - \rho^k\cdot\frac{1}{x - 1} \end{equation}\]Setting the derivative to zero gives the quadratic \(2x^2 - 2x - \rho^k = 0\); taking the root satisfying \(x > 1\) (since we start, and must remain, in the feasible region of the problem) yields the solution to the modified optimisation problem:
\[\begin{equation} x = \frac{1}{2} + \frac{\sqrt{1 + 2\rho^k}}{2} \end{equation}\]Note that as \(\rho^k \rightarrow 0\) with \(k\rightarrow\infty\), the solution converges to \(x = 1\). That is, the solution to the original optimisation problem!
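This convergence is easy to check numerically. A minimal sketch in plain Python, evaluating the closed-form minimiser above for a shrinking sequence of penalties (the sample values of \(\rho^k\) are illustrative):

```python
import math

def x_star(rho_k):
    # closed-form minimiser of the barrier subproblem:
    # x = 1/2 + sqrt(1 + 2*rho^k) / 2
    return 0.5 + math.sqrt(1.0 + 2.0 * rho_k) / 2.0

# as rho^k shrinks, the subproblem minimiser approaches
# the constrained optimum x = 1 from inside the feasible region
for rho_k in (1.0, 0.1, 0.01, 0.001):
    print(rho_k, x_star(rho_k))
```

Note that every iterate stays strictly greater than 1, i.e. inside the feasible region, which is exactly the behaviour the barrier term is designed to enforce.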
This method can readily be converted into a numerical algorithm that uses an unconstrained optimisation method. Pick a number \(\rho\) such that \(0 < \rho < 1\). Starting from \(k = 1\), minimise \(F(x, \rho^k)\) using any unconstrained optimisation method. Then feed the result of the minimisation back in as the new starting point, increment \(k\), and repeat until successive solutions agree to the desired tolerance.
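The loop just described can be sketched in plain Python. This is only an illustration of the scheme, not the post’s Cython/GSL implementation: a simple golden-section search stands in for the unconstrained minimiser (it works from a bracket rather than a starting point), and the bracket \([1, 10]\) and the choice \(\rho = 0.5\) are assumptions made for the example.

```python
import math

def barrier(x, rho_k):
    # modified objective F(x, rho^k) = x^2 - rho^k * ln(x - 1),
    # defined only inside the feasible region x > 1
    return x * x - rho_k * math.log(x - 1.0)

def golden_section(f, a, b, tol=1e-10):
    # standard golden-section search for a unimodal f on [a, b]
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2.0

def interior_penalty(rho=0.5, iterations=20):
    # shrink the penalty rho^k and re-minimise the barrier function;
    # every subproblem is solved strictly inside the feasible region
    x = 2.0  # feasible starting guess
    for k in range(1, iterations + 1):
        x = golden_section(lambda t: barrier(t, rho ** k),
                           1.0 + 1e-12, 10.0)
    return x
```

With these assumptions, `interior_penalty()` approaches the constrained optimum \(x = 1\) from above as \(k\) grows, mirroring the closed-form analysis.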
To finish off this blog post, an exemplary implementation of the interior penalty method in Cython and Python. Like in the previous post, numerical minimisation is performed using algorithms provided by the GNU Scientific Library.
The objective function to be minimised, written in Cython:
And a C extension for Python that uses Brent’s minimisation method provided by the GNU Scientific Library:
Assuming the Cython module containing the `min_f` and `solve` functions is called `interior`, the following Python script demonstrates it in action:
This generates the following output:
Clearly, as the penalty, \(\rho^k\), decreases to zero, the solution to the modified optimisation problem approaches the solution to the original problem.
[1] Avriel, M., “Nonlinear Programming: Analysis and Methods”, chapter 12, Dover Publications, 2003.