Multivariable Optimization With Constraints



Introduction

When working through examples, you might wonder why we bother writing out the Lagrangian at all. Wouldn't it be easier to just start with these two equations rather than re-establishing them from \nabla \mathcal{L}=0 every time? The short answer is yes, it would be easier. If you find yourself solving a constrained optimization problem by hand, and you remember the idea of gradient alignment, feel free to go for it without worrying about the Lagrangian.
In practice, it's often a computer solving these problems, not a human. Given that there are many highly optimized programs for finding when the gradient of a given function is 0, it's both clean and useful to encapsulate our problem into the equation \nabla \mathcal{L} = 0.
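As a minimal sketch of what "hand the whole problem to a gradient-zero solver" looks like, here is a symbolic version using SymPy. The objective f(x, y) = xy and the constraint x + y = 2 are illustrative choices, not examples from the text:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x * y                  # illustrative objective
g = x + y                  # illustrative constraint function, with c = 2
L = f - lam * (g - 2)      # the Lagrangian

# "Find where the gradient is 0": one equation per variable, including lambda.
grad = [sp.diff(L, v) for v in (x, y, lam)]
solutions = sp.solve(grad, (x, y, lam), dict=True)
print(solutions)           # a single critical point at x = y = 1, lambda = 1
```

The solver never needs to know about gradient alignment or tangency; it only sees a single function \mathcal{L} and the request to zero its gradient, which is exactly the encapsulation described above.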
Furthermore, the Lagrangian itself, as well as several functions deriving from it, arise frequently in the theoretical study of optimization. In this light, reasoning about the single object \mathcal{L} rather than multiple conditions makes it easier to see the connection between high-level​ ideas. Not to mention, it's quicker to write down on a blackboard.
In either case, whatever your future relationship with constrained optimization might be, it is good to be able to think about the Lagrangian itself and what it does. The examples above illustrate how it works, and hopefully help to drive home the point that \nabla \mathcal{L} = 0 encapsulates both \nabla f = \lambda \nabla g and g(x, y) = c in a single equation.
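This unpacking can be checked symbolically. The sketch below (again assuming SymPy, with f and g left as abstract functions) shows that the partials of \mathcal{L} with respect to x and y are the components of \nabla f - \lambda \nabla g, while the partial with respect to \lambda recovers the constraint:

```python
import sympy as sp

x, y, lam, c = sp.symbols('x y lambda c', real=True)
f = sp.Function('f')(x, y)          # abstract objective
g = sp.Function('g')(x, y)          # abstract constraint function
L = f - lam * (g - c)               # the Lagrangian

# Partials in x and y give the components of grad f = lambda * grad g ...
print(sp.diff(L, x))                # df/dx - lambda * dg/dx
print(sp.diff(L, y))                # df/dy - lambda * dg/dy
# ... and the partial in lambda recovers the constraint g(x, y) = c.
print(sp.diff(L, lam))              # c - g(x, y)
```

Setting all three printed expressions to zero reproduces exactly the two conditions discussed above, which is the sense in which \nabla \mathcal{L} = 0 is a single-equation package for the whole problem.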