Optimization

Optimization is the branch of mathematics concerned with finding the best possible choice. Given a function describing the quantity we want to improve, optimization means finding the inputs that correspond to its extrema. In practice, not all input combinations are feasible, so only extrema over the feasible inputs may be attainable. This is referred to as constrained optimization.


    Intro

    In financial mathematics, modern portfolio theory is a mathematical method for picking financial assets (stocks, bonds, etc.) in the best possible way.

    What it really comes down to is solving an optimization problem where you want to pick assets for your portfolio such that

    1. The expected returns are maximized

    2. The risk is minimized

    The theory was developed in the 1950s by the economist Harry Markowitz. It has since seen widespread use, and Markowitz later received the Nobel Memorial Prize in Economic Sciences for his work.
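    To make this concrete, here is a minimal sketch in Python of a mean-variance portfolio problem. The expected returns, covariance matrix, and risk-aversion parameter below are made up for illustration; this shows the flavor of the optimization, not Markowitz's full formulation.

```python
# Mean-variance portfolio sketch with made-up data (for illustration only).
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10])        # assumed expected returns
Sigma = np.array([[0.10, 0.02, 0.01],    # assumed covariance matrix (risk)
                  [0.02, 0.08, 0.03],
                  [0.01, 0.03, 0.09]])
risk_aversion = 2.0                      # assumed return/risk trade-off

def objective(w):
    # Maximize expected return minus a risk penalty <=> minimize the negative.
    return -(mu @ w - risk_aversion * (w @ Sigma @ w))

constraints = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]  # fully invested
bounds = [(0.0, 1.0)] * 3                # no short selling

result = minimize(objective, np.ones(3) / 3, bounds=bounds,
                  constraints=constraints)
print("portfolio weights:", result.x)
```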

    Concept

    Imagine a stressed consultant driving to work.

    The consultant wants to optimize everything, and on his drive to work he wants to minimize the money he spends on commuting. There are three factors the consultant must think about.

    1. The time the consultant spends driving is time he cannot spend working

    2. The faster he drives, the more fuel he will use

    3. Accelerating and decelerating also wastes fuel

    Note that we have a problem dependent on time, position, speed, and acceleration. To find the optimal way of driving in the example above, we could use the Euler-Lagrange equation.

    The Euler-Lagrange equation is a method for finding optima that is widely used in mathematics, finance, physics, and chemistry.
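    For reference (this standard statement is not derived in these notes): a function $q(t)$ that makes the integral $\int L(t, q, \dot{q})\,dt$ stationary must satisfy the Euler-Lagrange equation

```latex
\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}}\right) - \frac{\partial L}{\partial q} = 0
```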

    In the following lecture notes, we will study the Lagrange multiplier, which can be seen as a first step toward understanding the more general Euler-Lagrange method.

    Summary

    A typical optimization problem can look like this:

    maximize $f(x, y)$ such that $g(x, y) = c$.

    The function $f$ is the function that we want to optimize and is called the objective function. The function $g$ represents the constraint that we have to consider.

    We only consider two types of constraints in this course:

    1. Equality constraints: $g(x, y) = c$.

    2. Inequality constraints: $g(x, y) \leq c$.

    Problems with equality constraints are solved with the Lagrange multiplier method.

    Problems with inequality constraints are solved by treating the region the constraint allows as a restricted domain and finding the extrema of the function there.
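    As a made-up illustration of the two types, consider the same objective under each kind of constraint:

```latex
\text{maximize } f(x, y) = x + y \quad \text{such that } x^2 + y^2 = 1 \quad \text{(equality)}
```
```latex
\text{maximize } f(x, y) = x + y \quad \text{such that } x^2 + y^2 \leq 1 \quad \text{(inequality)}
```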

    Optimization, restricted domains

    Optimization is all about finding the points where a function takes on its minimum or maximum value. As with optimization in one variable, such points can occur in three places:

    1. At a critical point

    2. At a singular point

    3. At the boundary of the domain

    Note that not all functions have a domain with boundary points. For extrema to be guaranteed to exist, the domain must be a closed and bounded set.

    (Figure: a closed and bounded set, compared with a bounded set that is not closed.)

    In many optimization problems, though, a function's full domain is not of interest, since it may contain values of the variables that are unrealistic in practice.

    Therefore, optimization problems often have restricted domains, where we can be sure to find such extrema by recalling the sufficient conditions for their existence:

    1. $f$ is a continuous function.

    2. The domain of $f$ is a closed and bounded set.

    An optimization problem of this type is relatively simple, and there's a three-step recipe to solve it:

    1. Determine any critical or singular points of $f$ inside the domain.

    2. Determine potential extrema of $f$ on the boundary of the domain.

    3. Evaluate $f$ at all points found in the previous steps, and choose the maximum or minimum you are after.

    The second step here may not seem straightforward at first, so let us examine how it is done.

    The best method is often to parametrize the boundary curve, express $f$ as a function of the parameter, and find any extrema along this curve.

    If the boundary consists of several segments, the curve can be parametrized piece-wise. In such a case, do not forget to examine the end points of each separate piece!
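    Before the worked example, here is a hedged sketch of the recipe in Python with sympy, on a made-up problem: maximize $f(x, y) = x + y$ on the closed unit disk $x^2 + y^2 \leq 1$. The boundary is a single circle, so one parametrization suffices.

```python
# Three-step recipe on a made-up problem: f(x, y) = x + y on the unit disk.
import sympy as sp

x, y, t = sp.symbols("x y t", real=True)
f = x + y

# Step 1: critical points inside the domain (solve grad f = 0).
critical = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
print("critical points:", critical)          # empty: the gradient is never zero

# Step 2: parametrize the boundary circle and optimize along it.
boundary = f.subs({x: sp.cos(t), y: sp.sin(t)})
candidates = sp.solveset(sp.diff(boundary, t), t, sp.Interval(0, 2*sp.pi))
print("boundary candidates:", candidates)    # t = pi/4 and t = 5*pi/4

# Step 3: evaluate f at every candidate and pick the extreme values.
values = [boundary.subs(t, c) for c in candidates]
print("max:", max(values), "min:", min(values))
```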

    Example

    Find the maximum of the function on the triangle with corners at the origin, and .

    Let's start by finding the critical points of the function.

    From this we conclude that there are no critical points inside the domain. Let's now check the boundary.

    The boundary consists of three straight lines, two of them lying along the $x$- and $y$-axes, where the function is zero.

    Now let's look at $f$ along the third line.

    Here,

    and so

    which means that $f$ has an extremum there.

    Since there were no critical points inside the triangle or elsewhere on the boundary, we just have to check the function's value here and at the end points of the piece-wise defined boundary.

    These end points all lie on the $x$- or $y$-axis, where the function is zero, and so we know that our maximum occurs at the extremum found on the third line.

    Optimization, unrestricted domains

    You're the manager of a car factory, and you've got yourself a revenue function, $f(x, y)$. Revenues depend on the amount of steel available and the level of alertness of your employees - our $x$ and $y$. Let's also suppose you know a bit of witchcraft, so you can create an infinite supply of steel. You've also cast a spell to make your employees infinitely focused. Your spell is more powerful than coffee!

    This is a case of unconstrained optimization. You want to maximize your revenue function $f$, and you can tweak $x$ and $y$ as you please. But bear in mind that if you make your employees too focused on their work, the police might suspect you're drugging them or something. And if you get thrown in prison, your revenues will plummet. You might not want to ramp up $x$ and $y$ to infinity. So you must be street smart.

    In these kinds of problems, it's not certain that an optimum exists. You'll often want to transform the problem into a constrained optimization problem, and then analyze the function's behavior.
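    As a sketch, here is an unconstrained maximization in Python with an assumed, made-up revenue function (the notes' own function has not been preserved here). The assumed function is concave, so a maximum does exist.

```python
# Unconstrained optimization of a made-up, concave "revenue" function.
import numpy as np
from scipy.optimize import minimize

def revenue(v):
    x, y = v                     # steel available, employee alertness
    return 20*x + 30*y - x**2 - y**2 - x*y

# Maximizing revenue == minimizing its negative; no bounds or constraints.
result = minimize(lambda v: -revenue(v), x0=np.array([1.0, 1.0]))
print("optimal (x, y):", result.x, "revenue:", revenue(result.x))
```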

    Example

    Find the maximum and minimum value of the function

    Solution:
    If we look at the function, one of its factors is always positive. Its sign therefore depends only on the remaining factor, which is

    1. if

    2. if

    We begin with the "easy" case: looking for a minimum. If we perform the change of variables

    we can see that the function approaches as . Thus, it has no minimum value.

    As for the maximum value, we will search for critical points inside the circle .

    with the two solutions

    and thus the function has a maximum at

    Example

    An infinite river that is described by

    has been polluted. The concentration of the pollutant is described by

    The river is assumed to be still. Where in the river is the concentration of the pollutant the highest?

    Solution:
    First, given an arbitrary $x$, we want to find at what $y$ the concentration is the highest. Thus, define the function

    Differentiating

    Setting the derivative to zero

    If we take a look at the sign table,

    we can see that the function has a maximum at the mid-river value of $y$. This means that the concentration of the pollutant is highest in the middle of the river.

    Now, we want to find at what $x$ the concentration is the highest, with $y$ fixed at the mid-river value. That is, we want to maximize

    Differentiating

    Taking a look at the sign table,

    we can see that the concentration is highest when . Thus, the concentration of the pollutant is highest at the point .
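    Here is a hedged sketch of the same two-step approach in Python with sympy, using a made-up concentration function on a river strip $|y| \leq 1$; the actual formula from the notes has been lost.

```python
# Two-step maximization with an assumed concentration c(x, y) on |y| <= 1.
import sympy as sp

x, y = sp.symbols("x y", real=True)
c = sp.exp(-x**2) * (1 - y**2)   # assumed pollutant concentration

# Step 1: for fixed x, maximize in y (differentiate, then sign analysis).
dy = sp.diff(c, y)
print("dc/dy = 0 at y =", sp.solve(dy, y))             # y = 0: mid-river

# Step 2: substitute y = 0 and maximize the remaining function of x.
g = c.subs(y, 0)
print("dg/dx = 0 at x =", sp.solve(sp.diff(g, x), x))  # x = 0
```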

    Lagrange multipliers

    Up until now, we've been striving to optimize functions on some domain, without any other conditions bothering us in our work.

    Often, in real life, we don't have that freedom; there tends to be someone saying: do this as well as possible, but you can only walk on the edges of the tiles to get there, or you have to sing opera while you do it, or you also need to do it as cheaply as possible.

    That is what we call constrained optimization, and problems of this type exist on both restricted and unrestricted domains.

    This is the generic form of a constrained optimization problem:
    find the maximum/minimum of $f(x, y)$, such that $g(x, y) = c$.

    Assume that $f$ is differentiable and that the max/min is not on the boundary of the domain of $f$. Then, it turns out that the solution to this problem lies where the gradients of $f$ and $g$ are parallel.

    At the maximum of $f$ constrained by $g(x, y) = c$, the gradients of $f$ and $g$ are parallel.

    How are $f$ and $g$ even related, you might ask yourself. Well, the condition $g(x, y) = c$ is actually a restriction of the domain of $f$. In the picture below, the only values from the domain of $f$ which we are allowed to use as we search for the max/min are those on the curve that the condition draws in the domain.

    We have already encountered constrained optimization, as a matter of fact: when we tried finding max and min points of a function along the boundary of a restricted domain.

    Thus, we have already seen some methods which can be employed when solving such problems. The easiest way is to express the curve in parametric form, and then solve the problem as a single-variable problem.

    However, it is often impossible to isolate one variable from the others. Then, we need to bring out our secret weapon: the Lagrange multiplier.

    We'll stay in two variables with one condition for now; the method extends to higher dimensions. We'll also assume, for the rest of this note, that the function is differentiable.

    The Lagrange multiplier

    The Lagrange multiplier doesn't look like much to the world: it's denoted by the little Greek letter $\lambda$, and we rarely care about its value. Moreover, we often try to eliminate it as fast as possible from the equations where it appears.

    However, this little fellow which we treat so ungratefully is the key to the whole method of optimization with constraints.

    Cutting to the point now, we will show that at the max or min of $f$ under the condition $g(x, y) = c$, the gradients of $f$ and $g$ are parallel. In other words, at the max/min points, we have:

    $\nabla f = \lambda \nabla g$

    Here, $\lambda$ is a number, and the equation is the mathematical way of saying the gradients are parallel.
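    A standard, equivalent packaging of this condition (not spelled out in these notes) collects $f$ and the constraint into a Lagrangian, whose stationary points satisfy both the parallel-gradient equation and the constraint:

```latex
\mathcal{L}(x, y, \lambda) = f(x, y) - \lambda\,\bigl(g(x, y) - c\bigr),
\qquad
\frac{\partial \mathcal{L}}{\partial x}
= \frac{\partial \mathcal{L}}{\partial y}
= \frac{\partial \mathcal{L}}{\partial \lambda} = 0
```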

    A walk-through example

    To justify the above statement, let's look at a mountain. With the sun shining on it more on one side than the other, the glacier on the top has melted more on that side. The edge of the glacier forms an ellipse.

    We take a theoretical stroll around the mountain, following the edge of the glacier. We'd like to find out what is the highest point we'll reach.

    This is equivalent to maximizing $f$, only without leaving the curve $g(x, y) = c$.

    Assume that the highest point on the path is a point $P$, and that $\nabla g(P) \neq \mathbf{0}$.

    Then, it is possible to find a parametrization $\mathbf{r}(t)$ of the part of the level curve close to $P$. You will get a better understanding of why we need that condition on the gradient as we study implicit functions shortly. For now, you may just accept this as a fact.

    The function of one variable $h(t) = f(\mathbf{r}(t))$ has a local extremum at the $t$-value for which $\mathbf{r}(t) = P$; we've just changed the names of things here! Specifically, $h'(t) = 0$ there. Differentiating $h$ using the chain rule gives:

    $h'(t) = \nabla f(\mathbf{r}(t)) \cdot \mathbf{r}'(t) = 0$

    But this must mean the vector $\nabla f$ is perpendicular to the tangent of the curve at $P$. Since the gradient of $g$ is perpendicular to the tangent as well (the path is a level curve of $g$), we can conclude that the gradients of $f$ and $g$ are parallel at $P$.

    Let's put this into action and find the highest point we can reach on the mountain as we walk on the path.

    In the example, the condition $g(x, y) = c$ describes a circle:

    This circle is the elliptical glacier path projected onto the $xy$-plane in the domain, and a level curve of $g$.

    The mountain is modeled as a paraboloid with the equation:

    We said that the gradients of $f$ and $g$ are parallel at the max and min points of $f$ on the curve. Calculating $\nabla f$ and $\nabla g$, we get:

    This gives rise to a system of equations:

    As both equations are equal to , take the left-hand sides and set them equal to each other. Then, $\lambda$ disappears, and we get .

    The points where this holds on the curve are and . The first one is the maximum, and the second one the minimum, of $f$ on the curve!

    Another example follows below.

    Example

    To illuminate the process of using the Lagrange multiplier, take a look at the following constrained optimization problem

    The constraint is our $g(x, y) = c$. Next, remember that we want to look for points where the gradients of $f$ and $g$ are parallel; that is, we want to solve the equation $\nabla f = \lambda \nabla g$.

    Calculating the gradients, we get

    The above vector equation consists of two scalar equations.

    When solving problems with Lagrange multipliers, we want to eliminate $\lambda$ as soon as possible.

    Get rid of $\lambda$ as soon as you can.

    Here, this can be done by setting the two scalar equations equal to each other:

    This point can't be an optimum of our problem, since it does not satisfy the constraint.

    Furthermore, neither nor can be optima, since these points would render , and it is clear that if . Thus, we can divide by $x$ and $y$ to get

    Now, use this together with the constraint to solve for the optimal points.

    Solving for $x$ and $y$, we get that

    are the optimal points.
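    Since the original numbers in this example have been lost, here is a hedged end-to-end sketch in Python with sympy on a made-up problem of the same shape: optimize $f(x, y) = xy$ subject to $x^2 + y^2 = 1$.

```python
# Lagrange multiplier sketch on a made-up problem: f = x*y on x^2 + y^2 = 1.
import sympy as sp

x, y, lam = sp.symbols("x y lambda", real=True)
f = x * y
g = x**2 + y**2 - 1

# grad f = lambda * grad g, solved together with the constraint g = 0.
equations = [sp.diff(f, x) - lam * sp.diff(g, x),
             sp.diff(f, y) - lam * sp.diff(g, y),
             g]
solutions = sp.solve(equations, [x, y, lam], dict=True)
for s in solutions:
    print(s, "  f =", f.subs(s))
```

    Note how $\lambda$ appears in the intermediate equations but not in the answer: as the notes say, it is eliminated along the way.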
