
Lagrange method for solving extrema. Conditional optimization

The Lagrange multiplier method is a classical technique for solving mathematical programming problems (in particular, convex programming). Unfortunately, its practical application can run into serious computational difficulties, which narrows its scope. We consider the Lagrange method here mainly because it is the apparatus used to justify various modern numerical methods that are widely applied in practice. As for the Lagrange function and the Lagrange multipliers themselves, they play an independent and exceptionally important role in the theory and applications of mathematical programming and beyond.

Consider the classical optimization problem

max (min) z = f(x1, x2, …, xn) (7.20)

φi(x1, x2, …, xn) = bi, i = 1, 2, …, m. (7.21)

This problem differs from problem (7.18), (7.19) in that the constraints (7.21) contain no inequalities and no conditions of non-negativity or discreteness of the variables, and the functions f(x) and φi(x) are continuous and have partial derivatives of at least second order.

The classical approach to problem (7.20), (7.21) yields a system of equations (the necessary conditions) that must be satisfied by the point x* delivering a local extremum of f(x) on the set of points satisfying constraints (7.21) (for a convex programming problem, by Theorem 7.6 the point x* found in this way is simultaneously a global extremum point).

Let us assume that at the point x* function (7.20) has a local conditional extremum and the rank of the Jacobian matrix (∂φi/∂xj) equals m. Then the necessary conditions can be written in the form:

∂L/∂xj = ∂f/∂xj − Σi λi·∂φi/∂xj = 0, j = 1, …, n;
∂L/∂λi = bi − φi(x) = 0, i = 1, …, m, (7.22)

where

L(x, λ) = f(x) + Σi λi·(bi − φi(x)) (7.23)

is the Lagrange function, and λ1, …, λm are the Lagrange multipliers.

There are also sufficient conditions under which a solution of system (7.22) determines an extremum point of the function f(x). This question is settled by studying the sign of the second differential of the Lagrange function. However, the sufficient conditions are mainly of theoretical interest.

The following procedure can be used to solve problem (7.20), (7.21) by the Lagrange multiplier method:

1) compose the Lagrange function (7.23);

2) find the partial derivatives of the Lagrange function with respect to all variables xj and λi and set them equal to zero. This yields system (7.22), consisting of n + m equations. Solve the resulting system (if that turns out to be possible!) and thus find all stationary points of the Lagrange function;

3) from the stationary points, taking their x-coordinates (discarding the λ-coordinates), select the points at which the function f(x) has conditional local extrema under constraints (7.21). This selection is made, for example, using the sufficient conditions for a local extremum; often the study is simplified by the specific conditions of the problem.
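The three steps above can be traced on a small problem. The instance below (maximize f(x, y) = x·y subject to x + y = 8) is an illustrative assumption, not taken from the text; it is chosen so that the stationary system is solvable by hand.

```python
# Illustrative run of the three-step procedure on an assumed toy problem:
# maximize f(x, y) = x*y subject to x + y = 8.
#
# Step 1: Lagrange function  L(x, y, lam) = x*y + lam*(8 - x - y).
# Step 2: stationarity gives  dL/dx = y - lam = 0,
#                             dL/dy = x - lam = 0,
#                             dL/dlam = 8 - x - y = 0,
# so x = y = lam and 2*lam = 8.
lam = 8 / 2          # solve 2*lam = 8
x, y = lam, lam      # x = y = lam from the first two equations

# Step 3: check the candidate against the constraint and nearby
# feasible points (x + t, y - t stays on the constraint line).
assert x + y == 8
f_star = x * y
for t in (-0.5, 0.5):
    assert (x + t) * (y - t) < f_star  # candidate is a local maximum

print(x, y, f_star)  # 4.0 4.0 16.0
```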



Example 7.3. Find the optimal distribution of a limited resource of a units among n consumers, if the profit received from allocating xj units of the resource to the j-th consumer is calculated by a given formula.

Solution. The mathematical model of the problem has the following form:


We compose the Lagrange function:

.

We find partial derivatives of the Lagrange function and equate them to zero:

Solving this system of equations, we get:

Thus, if the j-th consumer is allocated the computed xj units of the resource, the total profit attains its maximum value (in monetary units).
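The profit formula of Example 7.3 is not reproduced here, so the sketch below assumes, purely for illustration, concave profits fj(xj) = cj·√xj. With this assumption the stationarity conditions cj/(2·√xj) = λ and Σ xj = a have the closed-form solution xj = a·cj² / Σk ck²; the function name `allocate` and the numbers are our own.

```python
import math

# Hypothetical instance of Example 7.3: the profit formulas are ASSUMED
# to be f_j(x_j) = c_j * sqrt(x_j) (they are not given in the text).
#
# Stationarity of L = sum_j c_j*sqrt(x_j) + lam*(a - sum_j x_j) gives
#   c_j / (2*sqrt(x_j)) = lam  for every j,   sum_j x_j = a,
# whose solution is x_j = a * c_j**2 / sum_k c_k**2.
def allocate(a, c):
    s = sum(cj ** 2 for cj in c)
    return [a * cj ** 2 / s for cj in c]

a = 100.0
c = [3.0, 4.0]
x = allocate(a, c)
print(x)  # [36.0, 64.0]

# The allocation uses the whole resource ...
assert math.isclose(sum(x), a)
# ... and the marginal profits c_j/(2*sqrt(x_j)) are equalized (= lam).
marginals = [cj / (2 * math.sqrt(xj)) for cj, xj in zip(c, x)]
assert math.isclose(marginals[0], marginals[1])
```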

We examined the Lagrange method as applied to a classical optimization problem. This method can be generalized to the case where the variables are non-negative and some constraints are given in the form of inequalities. However, this generalization is primarily theoretical and does not lead to specific computational algorithms.

In conclusion, let us give the Lagrange multipliers an economic interpretation. To do this, let us turn to the simplest classical optimization problem

max (min) z = f(x1, x2); (7.24)

φ(x1, x2) = b. (7.25)

Assume the conditional extremum is reached at the point (x1*, x2*), with corresponding extreme value f* = f(x1*, x2*).

Now suppose that in constraint (7.25) the quantity b can change. Then the coordinates of the extremum point, and hence the extreme value f*, become functions of b: x1* = x1*(b), x2* = x2*(b), f* = f*(b). The derivative of the optimal value of (7.24) with respect to b equals the Lagrange multiplier: df*/db = λ. In economic terms, λ shows approximately by how much the optimal value f* changes when the resource b is increased by one unit (the shadow price of the resource).
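The relation df*/db = λ can be checked numerically. The problem below (maximize x·y subject to x + y = b) is an assumed illustration, not from the text; for it, x = y = b/2, λ = b/2, and f*(b) = b²/4.

```python
# Numerical check of df*/db = lam on an assumed toy problem:
# maximize x*y subject to x + y = b.
# The Lagrange system gives x = y = b/2, lam = b/2, f*(b) = b**2/4.
def f_star(b):
    return (b / 2) * (b / 2)

b = 10.0
lam = b / 2
h = 1e-6
# Central finite difference approximates df*/db:
dfdb = (f_star(b + h) - f_star(b - h)) / (2 * h)
print(lam, round(dfdb, 6))  # 5.0 5.0
assert abs(dfdb - lam) < 1e-6
```

So increasing the resource b by one unit raises the optimal profit by approximately λ, which is exactly the shadow-price interpretation given above.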

Consider a linear inhomogeneous differential equation of the first order:

y′ + p(x)·y = q(x). (1)
There are three common ways to solve this equation:

  • the integrating factor method;
  • the Bernoulli method (substitution y = u·v);
  • the method of variation of the constant (Lagrange).

Let us consider the solution of a first-order linear differential equation by the Lagrange method.

Method of variation of constant (Lagrange)

In the method of variation of the constant, we solve the equation in two steps. In the first step, we simplify the original equation by solving the corresponding homogeneous equation. In the second step, we replace the constant of integration obtained in the first step by a function of x and then look for the general solution of the original equation.

Consider the equation:
y′ + p(x)·y = q(x). (1)

Step 1. Solving the homogeneous equation

We look for a solution of the homogeneous equation:
y′ + p(x)·y = 0.

This is a separable equation.

We separate the variables, multiplying by dx and dividing by y:
dy/y = −p(x) dx.

Integrating:
∫ dy/y = −∫ p(x) dx + C.

The integral over y is a table integral:
ln |y| = −∫ p(x) dx + C.

Exponentiating:
|y| = e^C · e^(−∫ p(x) dx).

We replace the constant e^C by C and remove the modulus sign, which amounts to multiplying by a constant ±1 that we absorb into C:
y = C · e^(−∫ p(x) dx).

Step 2. Replacing the constant C by a function

Now we replace the constant C by a function of x:
C → u(x).

That is, we look for a solution of the original equation (1) in the form:
y = u(x) · e^(−∫ p(x) dx). (2)

We find the derivative. By the chain rule and the product rule:
y′ = u′ · e^(−∫ p dx) − u · p(x) · e^(−∫ p dx).

Substituting into the original equation (1):
u′ · e^(−∫ p dx) − u · p(x) · e^(−∫ p dx) + p(x) · u · e^(−∫ p dx) = q(x).

Two terms cancel:
u′ · e^(−∫ p dx) = q(x);
u′ = q(x) · e^(∫ p dx).

Integrating:
u = ∫ q(x) · e^(∫ p dx) dx + C.

Substituting into (2), we obtain the general solution of a first-order linear differential equation:
y = e^(−∫ p(x) dx) · ( ∫ q(x) · e^(∫ p(x) dx) dx + C ).
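A quick numerical sanity check of the general-solution formula, taking p(x) = 1 and q(x) = x as an assumed example. Then the formula gives y = e^(−x)·(∫ x·e^x dx + C) = e^(−x)·((x − 1)·e^x + C) = x − 1 + C·e^(−x), and y′ + y = x should hold for every C.

```python
import math

# Sanity check of the general-solution formula for an ASSUMED example
# (p(x) = 1, q(x) = x, so the equation is y' + y = x).  The formula
# simplifies to y = x - 1 + C*exp(-x).
def y(x, C):
    return x - 1 + C * math.exp(-x)

def dy(x, C, h=1e-6):
    # numerical derivative y'(x) by central difference
    return (y(x + h, C) - y(x - h, C)) / (2 * h)

for C in (-2.0, 0.0, 3.0):
    for x in (0.0, 0.5, 1.7):
        # check that y' + y = x holds for the claimed general solution
        assert abs(dy(x, C) + y(x, C) - x) < 1e-5
print("ok")
```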

An example of solving a first-order linear differential equation by the Lagrange method

Solve the equation

Solution

We solve the homogeneous equation:

We separate the variables, multiplying by dx and dividing by y:

Integrating:

Both integrals are table integrals:

Exponentiating:

We replace the constant e^C by C and remove the modulus signs:

From here:

We replace the constant C by a function of x:
C → u(x)

Find the derivative:

Substitute into the original equation; two terms cancel:

Integrating:

Solution of the equation:
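The steps above can be traced on a concrete equation. The equation y′ − y/x = x below is an assumed example (chosen for simplicity, not the one originally used here); any linear first-order equation is handled the same way.

```latex
% Worked instance of the two-step (Lagrange) scheme for the assumed
% example equation y' - y/x = x.
\[
\text{Step 1 (homogeneous): } y' - \frac{y}{x} = 0
\;\Rightarrow\; \frac{dy}{y} = \frac{dx}{x}
\;\Rightarrow\; \ln|y| = \ln|x| + C
\;\Rightarrow\; y = Cx.
\]
\[
\text{Step 2 (vary the constant): } y = u(x)\,x,\qquad
y' = u'x + u.
\]
\[
u'x + u - \frac{u\,x}{x} = x
\;\Rightarrow\; u' = 1
\;\Rightarrow\; u = x + C
\;\Rightarrow\; y = x^2 + Cx.
\]
\[
\text{Check: } y' - \frac{y}{x} = (2x + C) - (x + C) = x.
\]
```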


Finding the interpolation polynomial Pn(x) = a0 + a1·x + … + an·x^n means determining the values of its coefficients a0, a1, …, an. To do this, the interpolation conditions Pn(xi) = yi, i = 0, 1, …, n, can be used to form a system of linear algebraic equations (SLAE).

The determinant of this SLAE is known as the Vandermonde determinant. The Vandermonde determinant is nonzero whenever xi ≠ xj for i ≠ j, that is, when there are no coinciding nodes in the table. Hence the SLAE has a solution, and this solution is unique. Having solved the SLAE and determined the unknown coefficients, one can construct the interpolation polynomial.

When interpolating by the Lagrange method, a polynomial satisfying the interpolation conditions is constructed as a linear combination of polynomials of degree n:

Ln(x) = Σ (i = 0..n) yi · li(x).

The polynomials li(x) are called the basis polynomials. For the Lagrange polynomial to satisfy the interpolation conditions, its basis polynomials must satisfy the conditions

li(xj) = 1 for j = i, li(xj) = 0 for j ≠ i.

If these conditions are met, then for any node xj we have

Ln(xj) = Σ (i = 0..n) yi · li(xj) = yj,

so the fulfillment of these conditions for the basis polynomials means that the interpolation conditions are satisfied as well.

Let us determine the form of the basis polynomials from the restrictions imposed on them.

The condition li(xj) = 0 for j ≠ i means that x0, …, x(i−1), x(i+1), …, xn are roots of li(x), so

li(x) = ci · (x − x0)·…·(x − x(i−1))·(x − x(i+1))·…·(x − xn).

The condition li(xi) = 1 fixes the constant:

ci = 1 / [(xi − x0)·…·(xi − x(i−1))·(xi − x(i+1))·…·(xi − xn)].

Finally, for the basis polynomial we can write:

li(x) = Π (j ≠ i) (x − xj) / (xi − xj).

Then, substituting this expression for the basis polynomials into the original polynomial, we obtain the final form of the Lagrange polynomial:

Ln(x) = Σ (i = 0..n) yi · Π (j ≠ i) (x − xj) / (xi − xj).

The particular case of the Lagrange polynomial for n = 1 is usually called the linear interpolation formula:

L1(x) = y0 · (x − x1)/(x0 − x1) + y1 · (x − x0)/(x1 − x0).

The Lagrange polynomial for n = 2 is usually called the quadratic interpolation formula:

L2(x) = y0 · (x − x1)(x − x2)/[(x0 − x1)(x0 − x2)] + y1 · (x − x0)(x − x2)/[(x1 − x0)(x1 − x2)] + y2 · (x − x0)(x − x1)/[(x2 − x0)(x2 − x1)].
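The Lagrange polynomial above translates directly into code. This is a minimal sketch (function and variable names are our own, not from the text) that evaluates Ln(x) at a point by forming each basis polynomial li(x) on the fly:

```python
# A minimal sketch of Lagrange interpolation as described above.
def lagrange(xs, ys, x):
    """Evaluate the Lagrange polynomial through (xs[i], ys[i]) at x."""
    n = len(xs)
    total = 0.0
    for i in range(n):
        # basis polynomial l_i(x) = prod_{j != i} (x - x_j)/(x_i - x_j)
        li = 1.0
        for j in range(n):
            if j != i:
                li *= (x - xs[j]) / (xs[i] - xs[j])
        total += ys[i] * li
    return total

# Nodes sampled from y = x**2: the degree-2 interpolant reproduces it.
xs, ys = [0.0, 1.0, 3.0], [0.0, 1.0, 9.0]
print(round(lagrange(xs, ys, 2.0), 12))  # 4.0
# The interpolation conditions L_n(x_i) = y_i hold at every node.
assert all(abs(lagrange(xs, ys, xi) - yi) < 1e-12 for xi, yi in zip(xs, ys))
```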


LAGRANGE METHOD (reduction of a quadratic form to a sum of squares)

The Lagrange method reduces a quadratic form

f(x) = Σ (i, j = 1..n) a_ij · x_i · x_j, a_ij = a_ji, (1)

to a sum of squares using a non-degenerate linear transformation of variables. It consists of the following. We may assume that not all coefficients of form (1) are zero; then two cases are possible.

1) If some diagonal coefficient a_gg ≠ 0, then

f(x) = (1/a_gg) · (Σ (j = 1..n) a_gj · x_j)² + f1(x),

where the form f1(x) does not contain the variable x_g.

2) If all a_gg = 0 but some a_gh ≠ 0, then

f(x) = (1/(2·a_gh)) · [ (Σ_j (a_gj + a_hj)·x_j)² − (Σ_j (a_gj − a_hj)·x_j)² ] + f2(x),

where the form f2(x) contains neither x_g nor x_h.

The linear forms under the square signs are linearly independent. By applying transformations of these two types, form (1) is reduced after a finite number of steps to a sum of squares of linearly independent linear forms.

Lit.: Gantmakher F. R., Theory of Matrices, 2nd ed., M., 1966; Kurosh A. G., Course of Higher Algebra, 11th ed., M., 1975; Aleksandrov P. S., Lectures on Analytic Geometry, M., 1968.
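Case 1 of the reduction can be sketched in a few lines. The sketch below (names and the 1e-12 tolerance are our own) represents the form by a symmetric matrix A with f(x) = xᵀAx, and assumes that at every step some diagonal coefficient is nonzero; case 2 would first require the substitution x_g = y_g + y_h, x_h = y_g − y_h and is not implemented here.

```python
# A minimal sketch of Lagrange's reduction, CASE 1 ONLY: it assumes
# that at every step the remaining form has a nonzero diagonal
# coefficient.  The form is given by a symmetric matrix A, f(x) = x^T A x.
def lagrange_reduce(A):
    """Return (coef, row) pairs with f(x) = sum coef * (row . x)**2."""
    n = len(A)
    A = [row[:] for row in A]  # work on a copy
    squares = []
    for _ in range(n):
        g = next((i for i in range(n) if abs(A[i][i]) > 1e-12), None)
        if g is None:
            break  # zero (or case-2) remainder: stop
        row = A[g][:]              # the linear form sum_j a_gj * x_j
        coef = 1.0 / A[g][g]
        squares.append((coef, row))
        # subtract coef * (row . x)**2 from the remaining form
        for i in range(n):
            for j in range(n):
                A[i][j] -= coef * row[i] * row[j]
    return squares

def evaluate(squares, x):
    return sum(c * sum(r[i] * x[i] for i in range(len(x))) ** 2
               for c, r in squares)

# f(x) = x1^2 + 2*x1*x2 + 3*x2^2 = (x1 + x2)^2 + 2*x2^2
A = [[1.0, 1.0], [1.0, 3.0]]
sq = lagrange_reduce(A)
x = [2.0, -1.0]
f = x[0]**2 + 2*x[0]*x[1] + 3*x[1]**2
assert abs(evaluate(sq, x) - f) < 1e-9   # decomposition reproduces f
print(len(sq))  # 2
```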


I. V. Proskuryakov. Mathematical Encyclopedia, ed. I. M. Vinogradov. M.: Soviet Encyclopedia, 1977-1985.
