Interior Point Methods are a class of optimization algorithms for solving linear or nonlinear programming problems.
They find the optimal solution by moving through the interior of the feasible polytope rather than walking along its boundary.
History:
In 1984, Karmarkar "invented" the interior-point method.
In 1985, the affine-scaling method was "invented" as an intuitive version of Karmarkar's algorithm.
In 1989, it was realized that Dikin (USSR) had already invented affine scaling in 1967.
The interior point method was the first practical polynomial-time algorithm for solving linear programming problems. The ellipsoid method also runs in polynomial time, but in practice the interior point method and variants of the simplex method are much faster.
Technical summary:
1. Primal objective with log barrier function: G_μ(x) = cx - μ Σ_j ln(x_j)
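A minimal numpy sketch of this barrier objective (the function name barrier_objective and the dense-vector representation are illustrative assumptions, not from the notes):

```python
import numpy as np

def barrier_objective(c, x, mu):
    # G_mu(x) = c.x - mu * sum_j ln(x_j); only defined on the strict interior x > 0
    assert np.all(x > 0), "log barrier requires a strictly positive (interior) point"
    return c @ x - mu * np.sum(np.log(x))
```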
2. Central path algorithm: decrease μ from infinity to 0, following the minimizer of G_μ(x) along the way.
3. Minimize G_μ(x) subject to Ax = b:
at the minimum, ∇G_μ(x) is perpendicular to the feasible set {x : Ax = b};
its components are c_j - μ/x_j, so the vector (c_j - μ/x_j) is a linear combination of A's rows;
(c_j - μ/x_j) = yA for some y;
let s_j = μ/x_j (so s > 0), then
yA + s = c ==> yA ≤ c, which are exactly the dual constraints.
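As a sanity check, here is a small numpy sketch (the helper name dual_from_central_point is illustrative) that recovers s and y from a candidate central-path point and measures how far yA + s is from c:

```python
import numpy as np

def dual_from_central_point(A, c, x, mu):
    # On the central path the gradient c - mu/x lies in the row space of A,
    # so c - s = A^T y for s_j = mu / x_j and some y.  Recover y by least squares;
    # the residual is zero exactly when the gradient condition above holds.
    s = mu / x
    y, *_ = np.linalg.lstsq(A.T, c - s, rcond=None)
    residual = np.linalg.norm(A.T @ y + s - c)
    return y, s, residual
```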
4. Duality gap: cx - yb = (yA + s)x - y(Ax)
= sx
= nμ   (since s_j x_j = μ for each of the n coordinates on the central path)
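A one-line check of the gap identity (illustrative sketch; assumes numpy arrays, x primal feasible and (y, s) dual feasible as above):

```python
def duality_gap(c, b, x, y):
    # cx - yb = (yA + s)x - y(Ax) = sx, which equals n*mu when every s_j x_j = mu
    return c @ x - y @ b
```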
5. Conversely, if s_j x_j = μ for all j, then (x, y, s) is on the central path.
To follow the central path, use a "predictor-corrector" scheme.
6. Improvement direction? "Affine scaling."
From the current x, s, μ, step to x+dx, s+ds, μ+dμ.
Keeping s_j x_j = μ to first order gives s_j dx_j + x_j ds_j = dμ (1)
Also A(x+dx) = b ==> A dx = 0 (2)
yA + s = c ==> (dy)A + ds = 0 (3)
To solve (1)-(3), rescale ("affine scaling") so that every x_j = 1, and hence every s_j = μ.
The equations then say
μ dx + ds = dμ·1
A dx = 0 ==> dx lies in the null space of A
(dy)A + ds = 0 ==> ds lies in the row space of A
==> so split dμ·1 into its projection onto the null space of A (giving μ dx) and its projection onto the row space of A (giving ds), as in the sketch below.
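Here is a minimal numpy sketch of this projection step, assuming A has full row rank and that the problem has already been rescaled so every x_j = 1 (hence s_j = μ); the name affine_scaling_step is illustrative:

```python
import numpy as np

def affine_scaling_step(A, mu, dmu):
    # After rescaling so x_j = 1 and s_j = mu, equations (1)-(3) become
    #   mu*dx + ds = dmu*1,  A dx = 0 (dx in null(A)),  ds = -(dy)A (ds in row(A)).
    # So split the vector dmu*1 into its row-space and null-space components.
    n = A.shape[1]
    e = np.ones(n)
    # Orthogonal projection onto the row space of A (A assumed full row rank).
    row_part = A.T @ np.linalg.solve(A @ A.T, A @ (dmu * e))
    null_part = dmu * e - row_part
    ds = row_part          # component in the row space of A
    dx = null_part / mu    # component in the null space of A, rescaled by mu
    return dx, ds
```

In a full path-following solver one would take such a step, undo the rescaling, and then decrease μ (the predictor-corrector loop from step 5); this sketch only shows the projection itself.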
Note: some of this material comes from the following offerings of the MIT course "Advanced Algorithms":
MIT 6.854/18.415J: Advanced Algorithms (Fall 2014, David Karger)
MIT 6.854/18.415 Advanced Algorithms (Spring 2016, Ankur Moitra)