For some, the Kuhn-Tucker method (also known as the Karush-Kuhn-Tucker method, or KKT for short) seems like a random optimization method that comes out of nowhere. The truth is, the KKT multiplier is just a glorified Lagrange multiplier. I do not want to downplay the importance and relevance of the KKT multiplier: it has been successfully used in a wide variety of optimization problems across many areas. However, its mathematics is far from hard to understand (given that one understands the Lagrange multiplier), and I will make my case by looking at an example:

*maximize* $f(x, y)$

*subject to* $g_1(x, y) \le 0, \quad g_2(x, y) \le 0, \quad g_3(x, y) \le 0$

The region defined by the constraints is a triangle with vertices $A$, $B$ and $C$ (shown in pink in the accompanying graph):

Let us suppose that we do not know anything about KKT, but we are familiar with unconstrained optimization and the Lagrange multiplier. How are we going to solve this optimization problem?

One may approach this optimization problem with the following steps (I know a sensible person would not do these, but bear with me for a second; you can skim Step 2 onwards and revisit them later):

- Differentiate $f$ with respect to $x$ and $y$ and solve for the points where both derivatives equal $0$. If a solution (say $(x_0, y_0)$) lies in the interior of the triangle, then compute the value of the function at this point and save it (i.e. save $f(x_0, y_0)$) so that we can compare it with the other candidate maxima later to determine the global maximum. Repeat this for all solutions. This will give us all possible local maxima in the interior. For this particular example, there is no interior point where both derivatives vanish, so there is nothing to save.
- Use the Lagrange multiplier to get all possible maxima on the line segment where constraint 1 binds (but not necessarily at its endpoints, due to how the Lagrange multiplier works). That is, to solve:
  *maximize* $f(x, y)$, *subject to* $g_1(x, y) = 0$, and ignore all results with $g_2(x, y) > 0$ or $g_3(x, y) > 0$. A typical way is to define the Lagrange function $L_1(x, y, \lambda_1) = f(x, y) - \lambda_1 g_1(x, y)$ and solve $\nabla L_1 = 0$. Then, we'll save all solutions as well as the value of the function at these points. For this particular example:
  - The solution, call it $(x_1, y_1)$, is saved together with the value $f(x_1, y_1)$.

- Similarly, use the Lagrange multiplier to get all possible maxima on the line where constraint 2 binds (but not necessarily at its endpoints):
  - There is no solution.

- Again, do this for the line where constraint 3 binds (but not its endpoints):
  - There is no solution.

- On point $A$:
  - The solution is the point $A$ itself, and we save the value of the function there.

- On point $B$: …
- On poi.. OK, I think you get my point: in general, we need to do a lot of Lagrange-multiplier calculations, and all of them are very similar. Sure, using the Lagrange multiplier seemed like a waste of time in the example above, since the function and constraints here are fairly easy, but usually it will not be this simple and you may have no choice.
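The brute-force procedure above can be sketched in code. This is only an illustration: the post's original objective and constraints are not reproduced here, so I substitute a stand-in problem (maximize $f = xy$ over the triangle $x \ge 0$, $y \ge 0$, $x + y \le 2$), using SymPy to do one separate Lagrange computation per piece of the boundary.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)

# Stand-in problem (NOT the post's original numbers, assumed here purely
# for illustration): maximize f = x*y over the triangle
# x >= 0, y >= 0, x + y <= 2.
f = x * y
edges = [x, y, x + y - 2]            # each edge is one g(x, y) = 0
vertices = [(0, 0), (2, 0), (0, 2)]  # the three corner points

candidates = []

# Step 1: interior critical points (both partial derivatives zero).
for s in sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True):
    candidates.append((s[x], s[y]))

# Steps 2-4: one separate Lagrange function per edge.
for g in edges:
    L = f - lam * g
    eqs = [sp.diff(L, x), sp.diff(L, y), g]
    for s in sp.solve(eqs, [x, y, lam], dict=True):
        if x in s and y in s:
            candidates.append((s[x], s[y]))

# Steps 5-7: the vertices themselves.
candidates.extend(vertices)

# Keep numeric, feasible candidates and compare function values.
feasible = []
for px, py in candidates:
    px, py = sp.sympify(px), sp.sympify(py)
    if px.is_number and py.is_number and px >= 0 and py >= 0 and px + py <= 2:
        feasible.append((px, py))

best = max(feasible, key=lambda p: f.subs({x: p[0], y: p[1]}))
print(best, f.subs({x: best[0], y: best[1]}))  # (1, 1) with value 1
```

Note how the loop over `edges` rebuilds and re-differentiates a fresh Lagrange function for every piece of the boundary; that repetition is exactly what KKT removes.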

This is where KKT makes your life (just a little bit) easier. Notice how all the Lagrange functions and their derivatives are so similar that we basically repeat the same calculations over and over? The idea of KKT is that you only need to define one Lagrange function and differentiate it once. The Lagrange function you need to define is

$$L(x, y, \lambda_1, \lambda_2, \lambda_3) = f(x, y) - \lambda_1 g_1(x, y) - \lambda_2 g_2(x, y) - \lambda_3 g_3(x, y),$$

that is, the Lagrange function when you consider all constraints as equalities. And you will get:

i) $\frac{\partial L}{\partial x} = \frac{\partial f}{\partial x} - \lambda_1 \frac{\partial g_1}{\partial x} - \lambda_2 \frac{\partial g_2}{\partial x} - \lambda_3 \frac{\partial g_3}{\partial x} = 0$

ii) $\frac{\partial L}{\partial y} = \frac{\partial f}{\partial y} - \lambda_1 \frac{\partial g_1}{\partial y} - \lambda_2 \frac{\partial g_2}{\partial y} - \lambda_3 \frac{\partial g_3}{\partial y} = 0$

iii) $g_1(x, y) = 0$

iv) $g_2(x, y) = 0$

v) $g_3(x, y) = 0$

Why define and differentiate this function? Because it is really easy to get to all the derivatives in Steps 1-7 from here. The idea is: since we have included every constraint in the one Lagrange function, we can turn any equality on and off. For instance, if we want to find maxima in the interior, just set $\lambda_1 = \lambda_2 = \lambda_3 = 0$ and drop equations iii-v (that is, allow $g_1 \neq 0$, $g_2 \neq 0$ and $g_3 \neq 0$, which means we only consider the pink region inside the triangle), and you will get the exact same equations we had in Step 1.

On the line segment where constraint 1 binds, we know that the equality sign in constraint 1 has to be satisfied (binding) while the equality signs in constraints 2 and 3 do not have to be satisfied (non-binding). Therefore, to get all possible maxima on this segment, just set the following values in equations i-v: $\lambda_2 = \lambda_3 = 0$ (which just means that equations iv and v are dropped) and take $g_1(x, y) = 0$, and we will end up with the same equations as in Step 2. And at point $A$ (say, the vertex where constraints 1 and 2 meet), just set $\lambda_3 = 0$ and take $g_1(x, y) = g_2(x, y) = 0$, and we shall recover the equations of Step 5.
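In symbols (writing the objective as $f$, the constraints as $g_i(x, y) \le 0$, and the multipliers as $\lambda_i$), the two specializations just described are:

```latex
% Interior (Step 1): all multipliers switched off
\lambda_1 = \lambda_2 = \lambda_3 = 0
\quad\Longrightarrow\quad
\frac{\partial f}{\partial x} = 0,
\qquad
\frac{\partial f}{\partial y} = 0.

% Edge where constraint 1 binds (Step 2): only \lambda_1 kept
\lambda_2 = \lambda_3 = 0,\; g_1(x, y) = 0
\quad\Longrightarrow\quad
\frac{\partial f}{\partial x} = \lambda_1 \frac{\partial g_1}{\partial x},
\qquad
\frac{\partial f}{\partial y} = \lambda_1 \frac{\partial g_1}{\partial y},
\qquad
g_1(x, y) = 0.
```

The second system is exactly the single-constraint Lagrange condition $\nabla f = \lambda_1 \nabla g_1$ with $g_1 = 0$, recovered from the one big Lagrange function without any re-differentiation.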

Note that in all our examples above, whenever some $\lambda_i$ equals $0$, the corresponding $g_i$ is non-zero, and vice versa. This condition you get when you turn constraints on and off is known as complementary slackness. Do this for all possible combinations of $\lambda_1, \lambda_2, \lambda_3$ where each can be zero or non-zero, and then compare the values of the function at the points we get by solving all these systems of equations. In this example there are 8 possibilities, and in general there are $2^n$ cases, where $n$ is the number of constraints. In practice, you can quickly reduce this number by arguing that some cases are impossible. Take the case where $\lambda_1$, $\lambda_2$ and $\lambda_3$ are all non-zero: their complements $g_1$, $g_2$, $g_3$ must all be zero, but this means we are considering points in the intersection of all three lines, and we know there is no such point.
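The full case enumeration can also be sketched in code. As before, this is only an illustration on a stand-in problem (maximize $f = xy$ over $x \ge 0$, $y \ge 0$, $x + y \le 2$; not the post's original numbers), using SymPy: one Lagrange function is defined and differentiated once, and complementary slackness then generates all $2^3 = 8$ candidate systems.

```python
import itertools
import sympy as sp

x, y = sp.symbols('x y', real=True)
lams = sp.symbols('l1 l2 l3', real=True)

# Stand-in problem (NOT the post's original numbers): maximize
# f = x*y subject to g_i(x, y) <= 0 with
# g_1 = -x, g_2 = -y, g_3 = x + y - 2.
f = x * y
gs = [-x, -y, x + y - 2]

# One Lagrange function containing every constraint, differentiated once.
L = f - sum(l * g for l, g in zip(lams, gs))
stationarity = [sp.diff(L, x), sp.diff(L, y)]

best = None
for binding in itertools.product([False, True], repeat=3):
    # Complementary slackness: each constraint is either binding
    # (g_i = 0) or switched off (lambda_i = 0).
    eqs = stationarity + [g if b else l
                          for l, g, b in zip(lams, gs, binding)]
    for s in sp.solve(eqs, [x, y, *lams], dict=True):
        if not all(v.is_number for v in s.values()):
            continue  # skip degenerate parametric families
        if any(g.subs(s) > 0 for g in gs):
            continue  # point violates some constraint
        val = f.subs(s)
        if best is None or val > best[0]:
            best = (val, s[x], s[y])

print(best)  # (1, 1, 1): maximum value 1 at the point (1, 1)
```

The impossible all-binding case simply produces an empty solution set here, which is the code's version of "there is no such point".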

You may have realized at this point that the KKT multiplier is the optimization method that lets you differentiate once and then branch off into multiple systems of equations (by using the complementary slackness condition), instead of branching off into multiple Lagrange derivations just to arrive at the same systems of equations. You may even suspect that you can formulate the KKT conditions for a more general optimization problem; I really encourage you to do this and then compare your version with the ones available in many lecture notes and textbooks.
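If you just want the answer rather than the derivation, a numerical solver does this branching internally. Here is a quick sanity check on the same stand-in problem (maximize $xy$ over $x \ge 0$, $y \ge 0$, $x + y \le 2$; again, not the post's original numbers) with SciPy's SLSQP method, which itself solves a KKT system iteratively:

```python
from scipy.optimize import minimize

# Stand-in problem: maximize x*y over x >= 0, y >= 0, x + y <= 2,
# phrased as minimizing -x*y. "ineq" constraints mean fun(v) >= 0.
res = minimize(
    lambda v: -(v[0] * v[1]),
    x0=[0.5, 0.5],
    method="SLSQP",
    constraints=[
        {"type": "ineq", "fun": lambda v: v[0]},             # x >= 0
        {"type": "ineq", "fun": lambda v: v[1]},             # y >= 0
        {"type": "ineq", "fun": lambda v: 2 - v[0] - v[1]},  # x + y <= 2
    ],
)
print(res.x, -res.fun)  # approximately [1, 1] and 1
```

The solver lands on the same boundary maximum the hand enumeration finds, with only the binding constraint active at the solution.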

Link: A worked example of KKT multiplier