Local extrema. Extremum of a function of several variables. The concept of the extremum of a function of several variables. Necessary and sufficient conditions for an extremum. Conditional extremum. The largest and smallest values of continuous functions.

Extrema of functions of several variables. A necessary condition for an extremum. Sufficient condition for an extremum. Conditional extremum. Lagrange multiplier method. Finding the largest and smallest values.

Lecture 5.

Definition 5.1. A point M0(x0, y0) is called a maximum point of the function z = f(x, y) if f(x0, y0) > f(x, y) for all points (x, y) from some neighborhood of the point M0.

Definition 5.2. A point M0(x0, y0) is called a minimum point of the function z = f(x, y) if f(x0, y0) < f(x, y) for all points (x, y) from some neighborhood of the point M0.

Remark 1. The maximum and minimum points are called the extremum points of a function of several variables.

Remark 2. The extremum point for a function of any number of variables is determined in a similar way.

Theorem 5.1 (necessary conditions for an extremum). If M0(x0, y0) is an extremum point of the function z = f(x, y), then at this point the first-order partial derivatives of this function are equal to zero or do not exist.

Proof.

Let us fix the value of the variable y, setting y = y0. Then the function f(x, y0) is a function of the one variable x for which x = x0 is an extremum point. Therefore, by Fermat's theorem, f′x(x0, y0) = 0 or this derivative does not exist. The same statement is proved similarly for f′y(x0, y0).

Definition 5.3. Points belonging to the domain of a function of several variables at which the first-order partial derivatives of the function are equal to zero or do not exist are called stationary points of this function.

Comment. Thus, the extremum can only be reached at stationary points, but it is not necessarily observed at each of them.

Theorem 5.2 (sufficient conditions for an extremum). Let the function z = f(x, y) have, in some neighborhood of the stationary point M0(x0, y0), continuous partial derivatives up to the 3rd order inclusive. Denote A = f″xx(x0, y0), B = f″xy(x0, y0), C = f″yy(x0, y0). Then:

1) f(x, y) has a maximum at the point M0 if AC – B² > 0 and A < 0;

2) f(x, y) has a minimum at the point M0 if AC – B² > 0 and A > 0;

3) there is no extremum at the point M0 if AC – B² < 0;

4) if AC – B² = 0, further investigation is needed.

Proof.

Let us write the second-order Taylor formula for the function f(x, y), remembering that at a stationary point the first-order partial derivatives are equal to zero:

Δf = f(x0 + Δx, y0 + Δy) – f(x0, y0) = ½ (A Δx² + 2B Δx Δy + C Δy²) + α0 ρ²,

where ρ = √(Δx² + Δy²) and α0 → 0 as ρ → 0. If we denote by φ the angle between the segment M0M, where M(x0 + Δx, y0 + Δy), and the axis Ox, then Δx = ρ cos φ, Δy = ρ sin φ. In this case Taylor's formula takes the form

Δf = ½ ρ² (A cos²φ + 2B cos φ sin φ + C sin²φ + 2α0). (5.1)

Let A ≠ 0. Then we can divide and multiply the expression in brackets by A and obtain

Δf = ρ²/(2A) · [ (A cos φ + B sin φ)² + (AC – B²) sin²φ ] + α0 ρ².
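The regrouping in the last step can be checked mechanically. The following short Python sketch (using the sympy library; the check itself is an optional addition for the reader, not part of the lecture) confirms that for A ≠ 0 the two forms of the principal part of Δf coincide:

```python
# Verify that (1/2)ρ²(A cos²φ + 2B cosφ sinφ + C sin²φ)
# equals ρ²/(2A)·[(A cosφ + B sinφ)² + (AC − B²) sin²φ] for A != 0.
import sympy as sp

A, B, C, rho, phi = sp.symbols('A B C rho phi', real=True)

original = sp.Rational(1, 2) * rho**2 * (
    A*sp.cos(phi)**2 + 2*B*sp.cos(phi)*sp.sin(phi) + C*sp.sin(phi)**2
)
regrouped = rho**2 / (2*A) * (
    (A*sp.cos(phi) + B*sp.sin(phi))**2 + (A*C - B**2)*sp.sin(phi)**2
)

print(sp.simplify(original - regrouped))  # prints 0: the two forms agree
```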

Let us now consider four possible cases:

1) AC – B² > 0, A < 0. Then the expression in square brackets is positive for every φ, so ρ²/(2A)·[…] < 0 and Δf < 0 at sufficiently small ρ. Therefore, in some neighborhood of M0 we have f(x0 + Δx, y0 + Δy) < f(x0, y0), that is, M0 is a maximum point.

2) Let AC – B² > 0, A > 0. Then, in exactly the same way, Δf > 0 at sufficiently small ρ, and M0 is a minimum point.

3) Let AC – B² < 0, A > 0. Consider the increment of the arguments along the ray φ = 0. Then from (5.1) it follows that Δf = ½ρ²(A + 2α0) > 0 for small ρ, that is, the function increases when we move along this ray. If instead we move along the ray for which tan φ0 = –A/B, then A cos φ0 + B sin φ0 = 0 and Δf = ρ²/(2A)·(AC – B²) sin²φ0 + α0ρ² < 0 for small ρ, so the function decreases along this ray. Thus the point M0 is not an extremum point.

3′) For AC – B² < 0, A < 0 the proof of the absence of an extremum is carried out similarly to the previous case.

3″) If AC – B² < 0 and A = 0, then B ≠ 0 and Δf = ½ρ²·[sin φ·(2B cos φ + C sin φ) + 2α0]. For sufficiently small φ the expression 2B cos φ + C sin φ is close to 2B, that is, it retains a constant sign, while sin φ changes sign in the vicinity of φ = 0. This means that the increment of the function changes sign in any neighborhood of the stationary point, which is therefore not an extremum point.

4) If AC – B² = 0, then Δf = ρ²/(2A)·(A cos φ + B sin φ)² + α0ρ², and along the directions for which A cos φ + B sin φ = 0 the sign of the increment is determined by the sign of the remainder term 2α0. In this case further investigation is necessary to clarify the question of the existence of an extremum.

Example. Let us find the extremum points of the function z = x² – 2xy + 2y² + 2x. To find the stationary points we solve the system

z′x = 2x – 2y + 2 = 0, z′y = –2x + 4y = 0.

So the stationary point is (–2, –1). Here A = 2, B = –2, C = 4. Then AC – B² = 4 > 0; therefore, an extremum is attained at the stationary point, namely a minimum (since A > 0).
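For readers who want to double-check the example, here is a small Python sketch using the sympy library (an optional verification, assuming sympy is available); it reproduces the stationary point and the quantities A, B, C:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x**2 - 2*x*y + 2*y**2 + 2*x

# Stationary points: both first-order partial derivatives vanish.
stationary = sp.solve([sp.diff(z, x), sp.diff(z, y)], [x, y], dict=True)
print(stationary)               # [{x: -2, y: -1}]

pt = stationary[0]
A = sp.diff(z, x, 2).subs(pt)   # 2
B = sp.diff(z, x, y).subs(pt)   # -2
C = sp.diff(z, y, 2).subs(pt)   # 4
print(A, B, C, A*C - B**2)      # 2 -2 4 4  ->  AC - B² > 0 and A > 0: a minimum
```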

Definition 5.4. If the arguments of the function f(x1, x2, …, xn) are bound by additional conditions in the form of m equations (m < n):

φ1(x1, x2, …, xn) = 0, φ2(x1, x2, …, xn) = 0, …, φm(x1, x2, …, xn) = 0, (5.2)

where the functions φi have continuous partial derivatives, then equations (5.2) are called connection equations.

Definition 5.5. An extremum of the function f(x1, x2, …, xn) attained when conditions (5.2) are satisfied is called a conditional extremum.

Comment. The following geometric interpretation of the conditional extremum of a function of two variables can be offered: let the arguments of the function f(x, y) be related by the equation φ(x, y) = 0, which defines some curve in the plane Oxy. Erecting perpendiculars to the plane Oxy from each point of this curve until they intersect the surface z = f(x, y), we obtain a spatial curve lying on the surface above the curve φ(x, y) = 0. The task is to find the extremum points of the resulting curve, which, of course, in the general case do not coincide with the unconditional extremum points of the function f(x, y).

Let us determine the necessary conditions for a conditional extremum for a function of two variables by first introducing the following definition:

Definition 5.6. The function

L(x1, x2, …, xn) = f(x1, x2, …, xn) + λ1φ1(x1, x2, …, xn) + λ2φ2(x1, x2, …, xn) + … + λmφm(x1, x2, …, xn), (5.3)

where λi are some constants, is called the Lagrange function, and the numbers λi are called the undetermined Lagrange multipliers.

Theorem 5.3 (necessary conditions for a conditional extremum). A conditional extremum of the function z = f(x, y) in the presence of the connection equation φ(x, y) = 0 can be attained only at stationary points of the Lagrange function L(x, y) = f(x, y) + λφ(x, y).

Proof. The connection equation defines an implicit dependence of y on x, so we will assume that y is a function of x: y = y(x). Then z is a composite function of x, and its critical points are determined by the condition

dz/dx = f′x + f′y·y′ = 0. (5.4)

From the connection equation it follows that

φ′x + φ′y·y′ = 0. (5.5)

Let us multiply equality (5.5) by some number λ and add it to (5.4). We get

(f′x + λφ′x) + (f′y + λφ′y)·y′ = 0.

The last equality must be satisfied at the stationary points; choosing λ so that the coefficient of y′ vanishes, we obtain the system

f′x + λφ′x = 0,
f′y + λφ′y = 0,
φ(x, y) = 0. (5.6)

We obtain a system of three equations in the three unknowns x, y and λ, the first two of which are the conditions for a stationary point of the Lagrange function. Eliminating the auxiliary unknown λ from system (5.6), we find the coordinates of the points at which the original function can have a conditional extremum.

Remark 1. The presence of a conditional extremum at the found point can be checked by studying the second-order partial derivatives of the Lagrange function by analogy with Theorem 5.2.

Remark 2. The points at which the function f(x1, x2, …, xn) can attain a conditional extremum under conditions (5.2) can be found as the solutions of the system

∂L/∂xi = 0, i = 1, 2, …, n; φj(x1, x2, …, xn) = 0, j = 1, 2, …, m, (5.7)

where L is the Lagrange function (5.3).
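As an illustration of system (5.7) with several connection equations, consider the following Python sketch (sympy; the concrete function and constraints are an assumed example, not taken from the lecture): extremize f = x1² + x2² + x3² subject to x1 + x2 + x3 = 1 and x1 − x2 = 0.

```python
import sympy as sp

x1, x2, x3, l1, l2 = sp.symbols('x1 x2 x3 lambda1 lambda2', real=True)

f    = x1**2 + x2**2 + x3**2        # assumed illustrative function
phi1 = x1 + x2 + x3 - 1             # first connection equation
phi2 = x1 - x2                      # second connection equation

L = f + l1*phi1 + l2*phi2           # Lagrange function (5.3)

# System (5.7): dL/dx_i = 0 together with the connection equations.
eqs = [sp.diff(L, v) for v in (x1, x2, x3)] + [phi1, phi2]
print(sp.solve(eqs, [x1, x2, x3, l1, l2], dict=True))
# x1 = x2 = x3 = 1/3, lambda1 = -2/3, lambda2 = 0 -- the conditional minimum point
```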

Example. Let us find the conditional extremum of the function z = xy given that x + y = 1. We form the Lagrange function L(x, y) = xy + λ(x + y – 1). System (5.6) then reads:

y + λ = 0, x + λ = 0, x + y = 1,

whence x = y = –λ, so –2λ = 1, λ = –0.5, x = y = 0.5. On the constraint line, z = xy = x(1 – x) = 0.25 – (x – 0.5)² ≤ 0.25, so at the found stationary point the function z = xy has a conditional maximum.
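The same computation can be carried out with sympy (an optional check of the example above):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
L = x*y + lam*(x + y - 1)                      # Lagrange function of the example

# System (5.6): L'_x = 0, L'_y = 0 together with the connection equation.
print(sp.solve([sp.diff(L, x), sp.diff(L, y), x + y - 1], [x, y, lam], dict=True))
# x = y = 1/2, lambda = -1/2

# On the constraint line z = xy = x(1 - x); it peaks at x = 1/2 with value 1/4.
z_line = x*(1 - x)
xc = sp.solve(sp.diff(z_line, x), x)[0]
print(xc, z_line.subs(x, xc))                  # 1/2 1/4
```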

Definition 1: A function z = f(x, y) is said to have a local maximum at a point M0(x0, y0) if there is a neighborhood of the point M0 such that for any point M with coordinates (x, y) of this neighborhood, other than M0, the inequality f(x, y) < f(x0, y0) holds. In this case Δf = f(x, y) – f(x0, y0) < 0, i.e. the increment of the function is negative.

Definition 2: A function z = f(x, y) is said to have a local minimum at a point M0(x0, y0) if there is a neighborhood of the point M0 such that for any point M with coordinates (x, y) of this neighborhood, other than M0, the inequality f(x, y) > f(x0, y0) holds. In this case Δf = f(x, y) – f(x0, y0) > 0, i.e. the increment of the function is positive.

Definition 3: The points of local minimum and maximum are called extremum points.

Conditional Extrema

When finding extrema of a function of many variables, problems often arise related to the so-called conditional extremum. This concept can be explained using the example of a function of two variables.

Let a function z = f(x, y) and a line L in the plane Oxy be given. The task is to find a point P(x, y) on the line L at which the value of the function is the largest or smallest compared with the values of this function at the points of the line L located near the point P. Such points P are called conditional extremum points of the function on the line L. In contrast to the usual extremum point, the value of the function at a conditional extremum point is compared with the values of the function not at all points of some neighborhood of it, but only at those that lie on the line L.

It is absolutely clear that a point of ordinary extremum (one also says unconditional extremum) is also a conditional extremum point for any line passing through this point. The converse, of course, is not true: a conditional extremum point need not be an ordinary extremum point. Let me explain this with a simple example. The graph of the function z = √(1 – x² – y²) is the upper unit hemisphere (Appendix 3, Fig. 3).

This function has a maximum at the origin; it corresponds to the vertex M of the hemisphere. If the line L is the straight line passing through the points A and B (its equation is x + y – 1 = 0), then it is geometrically clear that for the points of this line the greatest value of the function is attained at the point lying midway between A and B. This is the conditional extremum (maximum) point of the function on this line; it corresponds to the point M1 on the hemisphere, and the figure shows that there can be no question of an ordinary extremum here.

Note that in the final part of the problem of finding the largest and smallest values of a function in a closed region we have to find the extreme values of the function on the boundary of this region, i.e. on some line, and thereby solve a conditional extremum problem.

Let us now proceed to the practical search for the conditional extremum points of the function z = f(x, y) provided that the variables x and y are related by the equation φ(x, y) = 0. We will call this relation the connection equation. If y can be expressed explicitly in terms of x from the connection equation, y = ψ(x), we obtain a function of one variable z = f(x, ψ(x)) = Φ(x).

Having found the values of x at which this function attains an extremum, and then determining from the connection equation the corresponding values of y, we obtain the desired conditional extremum points.

So, in the above example, from the connection equation x + y – 1 = 0 we have y = 1 – x. Hence

z = √(1 – x² – (1 – x)²) = √(2x – 2x²).

It is easy to check that z reaches its maximum at x = 0.5; but then, from the connection equation, y = 0.5, and we get exactly the point P found from geometric considerations.

The conditional extremum problem can be solved very simply when the connection equation can be represented by parametric equations x = x(t), y = y(t). Substituting these expressions for x and y into the function, we again come to the problem of finding the extremum of a function of one variable.
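As a hedged illustration of this parametric approach (the concrete example is an assumption, not taken from the text): suppose we look for the conditional extrema of f(x, y) = x + y on the circle x² + y² = 1, parametrized by x = cos t, y = sin t.

```python
import sympy as sp

t = sp.symbols('t', real=True)
x, y = sp.cos(t), sp.sin(t)            # parametric form of the connection equation
f = x + y                              # assumed illustrative function

crit = sp.solve(sp.diff(f, t), t)      # representative critical parameter values
print(crit)
vals = [sp.simplify(f.subs(t, tc)) for tc in crit]
print(vals)                            # the largest value, sqrt(2), is the conditional maximum
```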

If the connection equation has a more complicated form and we are unable either to express one variable explicitly in terms of the other or to replace the equation by parametric ones, then the task of finding a conditional extremum becomes more difficult. We will continue to assume that in the expression of the function z = f(x, y) the variable y is a function of x implicitly defined by the connection equation φ(x, y) = 0. The total derivative of the function z = f(x, y) with respect to x is equal to

dz/dx = f′x(x, y) + f′y(x, y)·y′,

where the derivative y′ is found by the rule for differentiating an implicit function: y′ = –φ′x(x, y)/φ′y(x, y). At the points of the conditional extremum this total derivative must be equal to zero; this gives one equation relating x and y. Since these points must also satisfy the connection equation, we get a system of two equations with two unknowns:

f′x(x, y) – f′y(x, y)·φ′x(x, y)/φ′y(x, y) = 0, φ(x, y) = 0.

Let us transform this system into a much more convenient one by writing the first equation in the form of a proportion and introducing a new auxiliary unknown λ:

f′x(x, y)/φ′x(x, y) = f′y(x, y)/φ′y(x, y) = –λ

(the minus sign in front is taken for convenience). From these equalities it is easy to pass to the following system:

f′x(x, y) + λφ′x(x, y) = 0, f′y(x, y) + λφ′y(x, y) = 0, (*)

which, together with the connection equation φ(x, y) = 0, forms a system of three equations with the unknowns x, y and λ.

These equations (*) are easiest to remember using the following rule: in order to find the points that can be conditional extremum points of the function

z = f(x, y) with the connection equation φ(x, y) = 0, one forms the auxiliary function

F(x, y) = f(x, y) + λφ(x, y),

where λ is some constant, and writes out the equations for finding the extremum points of this function.

The indicated system of equations provides, as a rule, only necessary conditions, i.e. not every pair of values x and y satisfying this system is necessarily a conditional extremum point. I will not give sufficient conditions for a conditional extremum; very often the specific content of the problem itself suggests what the found point is. The described technique for solving conditional extremum problems is called the Lagrange multiplier method.
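A short sketch of this rule on an assumed illustrative problem (not from the text): f(x, y) = x² + y² with the connection equation x + y − 2 = 0.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f   = x**2 + y**2
phi = x + y - 2
F   = f + lam*phi                          # auxiliary function F = f + λφ

# Stationary-point equations of F together with the connection equation.
system = [sp.diff(F, x), sp.diff(F, y), phi]
print(sp.solve(system, [x, y, lam], dict=True))
# x = 1, y = 1, lambda = -2 -- the candidate point (1, 1); here it is a conditional minimum
```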

CONDITIONAL EXTREMUM

The minimum or maximum value attained by a given function (or functional) under the condition that certain other functions (functionals) take values from a given admissible set. If there are no conditions restricting the variation of the independent variables (functions) in the indicated sense, one speaks of an unconditional extremum.

The classical conditional extremum problem is the problem of determining the minimum of a function of several variables

f(x1, …, xn) (1)

provided that certain other functions take given values:

gi(x1, …, xn) = ci, i = 1, …, m, m < n. (2)

In this problem the set G, to which the values of the vector function g = (g1, …, gm) appearing in the additional conditions (2) must belong, is a fixed point c = (c1, …, cm) in m-dimensional Euclidean space.

If in (2), along with the equality sign, inequality signs are admitted,

gi(x1, …, xn) = ci, i = 1, …, m1; gi(x1, …, xn) ≤ ci, i = m1 + 1, …, m, (3)

then this leads to the nonlinear programming problem (1), (3). In problem (1), (3) the set G of admissible values of the vector function g is a certain curvilinear polyhedron belonging to the (n – m1)-dimensional hypersurface defined by the m1 equality-type conditions in (3). The boundaries of this curvilinear polyhedron are constructed taking into account the m – m1 inequalities appearing in (3).

A special case of the conditional extremum problem (1), (3) is the linear programming problem, in which all the functions f and gi are linear in x1, …, xn. In a linear programming problem the set G of admissible values of the vector function g, appearing in the conditions restricting the range of the variables x1, …, xn, is a convex polyhedron belonging to the (n – m1)-dimensional hyperplane defined by the m1 equality-type conditions in (3).

Similarly, most problems of optimizing functionals that are of practical interest reduce to conditional extremum problems (see Isoperimetric problem, Bolza problem, Lagrange problem, Mayer problem). Just as in mathematical programming, the main problems of the calculus of variations and of the theory of optimal control are conditional extremum problems.

When solving conditional extremum problems, especially when theoretical questions connected with them are considered, undetermined Lagrange multipliers are widely used; they make it possible to reduce a conditional extremum problem to an unconditional one and to simplify the necessary optimality conditions. The use of Lagrange multipliers underlies most classical methods for solving conditional extremum problems.

I. B. Vapnyarsky.

Mathematical encyclopedia. - M.: Soviet Encyclopedia. I. M. Vinogradov. 1977-1985.

See what "CONDITIONAL EXTREME" is in other dictionaries:

    Relative extremum, extremum of a function f (x1,..., xn + m) from n + m variables under the assumption that these variables are also subject to m connection equations (conditions): φk (x1,..., xn + m) = 0, 1≤ k ≤ m (*) (see Extremum).… …

    Let the set be open and the functions given. Let it be. These equations are called constraint equations (the terminology is borrowed from mechanics). Let a function be defined on G... Wikipedia

    - (from Latin extremum extreme) the value of a continuous function f (x), which is either a maximum or a minimum. More precisely: a function f (x) that is continuous at a point x0 has a maximum (minimum) at x0 if there is a neighborhood (x0 + δ, x0 δ) of this point,... ... Great Soviet Encyclopedia

    This term has other meanings, see Extremum (meanings). Extremum (lat. extremum extreme) in mathematics is the maximum or minimum value of a function on a given set. The point at which the extremum is reached... ... Wikipedia

    A function used in solving problems on the conditional extremum of functions of many variables and functionals. With the help of L. f. the necessary conditions for optimality in problems on a conditional extremum are written down. In this case, it is not necessary to express only variables... Mathematical Encyclopedia

    A mathematical discipline devoted to finding extreme (largest and smallest) values ​​of functionals of variables that depend on the choice of one or more functions. V. and. is a natural development of that chapter... ... Great Soviet Encyclopedia

    Variables, with the help of which the Lagrange function is constructed when studying problems on a conditional extremum. The use of linear methods and the Lagrange function allows us to obtain the necessary optimality conditions in problems involving a conditional extremum in a uniform way... Mathematical Encyclopedia

    Calculus of variations is a branch of functional analysis that studies variations of functionals. The most typical problem in the calculus of variations is to find a function on which a given functional achieves... ... Wikipedia

    A branch of mathematics devoted to the study of methods for finding extrema of functionals that depend on the choice of one or several functions under various kinds of restrictions (phase, differential, integral, etc.) imposed on these... ... Mathematical Encyclopedia

    Calculus of variations is a branch of mathematics that studies variations of functionals. The most typical problem in the calculus of variations is to find the function at which the functional reaches an extreme value. Methods... ...Wikipedia

Books

  • Lectures on control theory. Volume 2. Optimal control, V. Boss. The classical problems of optimal control theory are considered. The presentation begins with the basic concepts of optimization in finite-dimensional spaces: conditional and unconditional extremum,...

Sufficient condition for the extremum of a function of two variables

1. Let the function z = f(x, y) be continuously differentiable in some neighborhood of a point (x0, y0) and have continuous partial derivatives of the second order (pure and mixed) there.

2. Let us denote A = f″xx(x0, y0), B = f″xy(x0, y0), C = f″yy(x0, y0) and consider the second-order determinant Δ = AC – B² (the determinant of the matrix of second derivatives with rows (A, B) and (B, C)).


Theorem

If the point with coordinates (x0, y0) is a stationary point of the function f(x, y), then:

A) if Δ > 0, it is a local extremum point: a local maximum for A < 0 and a local minimum for A > 0;

B) if Δ < 0, the point is not a local extremum point;

C) if Δ = 0, both cases are possible, and further investigation is required.

Proof

Let us write the Taylor formula for the function, limiting ourselves to two terms and taking the remainder in Lagrange form:

f(x0 + Δx, y0 + Δy) = f(x0, y0) + f′x(x0, y0)Δx + f′y(x0, y0)Δy + ½ [f″xx(ξ, η)Δx² + 2f″xy(ξ, η)ΔxΔy + f″yy(ξ, η)Δy²],

where the point (ξ, η) lies on the segment joining (x0, y0) and (x0 + Δx, y0 + Δy). Since, according to the conditions of the theorem, the point is stationary, the first-order partial derivatives are equal to zero, i.e. f′x(x0, y0) = 0 and f′y(x0, y0) = 0.

Let us denote Δf = f(x0 + Δx, y0 + Δy) – f(x0, y0).

Then the increment of the function takes the form

Δf = ½ [f″xx(ξ, η)Δx² + 2f″xy(ξ, η)ΔxΔy + f″yy(ξ, η)Δy²].

Owing to the continuity of the second-order partial derivatives (pure and mixed) required by the conditions of the theorem at the point (x0, y0), we can write

f″xx(ξ, η) = A + α1, f″xy(ξ, η) = B + α2, f″yy(ξ, η) = C + α3,

where α1, α2, α3 → 0 as Δx → 0 and Δy → 0, so that

Δf = ½ [(A + α1)Δx² + 2(B + α2)ΔxΔy + (C + α3)Δy²]. (I)

1. Let Δ = AC – B² > 0. Then A ≠ 0 (otherwise Δ = –B² ≤ 0), i.e. A > 0 or A < 0, and for sufficiently small Δx, Δy the quantity A + α1 has the same sign as A.

2. Multiply and divide the increment of the function (I) by A + α1; we get

Δf = 1/(2(A + α1)) · {(A + α1)²Δx² + 2(A + α1)(B + α2)ΔxΔy + (A + α1)(C + α3)Δy²}.

3. Complete the square of the sum in the curly brackets:

Δf = 1/(2(A + α1)) · {[(A + α1)Δx + (B + α2)Δy]² + [(A + α1)(C + α3) – (B + α2)²]Δy²}.

4. For sufficiently small Δx, Δy the expression in the curly braces is non-negative (and positive for (Δx, Δy) ≠ (0, 0)), since (A + α1)(C + α3) – (B + α2)² → AC – B² > 0.

5. Therefore, if A > 0, then A + α1 > 0 and Δf > 0, and hence, according to the definition, the point is a point of local minimum.

6. If A < 0, then A + α1 < 0 and Δf < 0, and hence, according to the definition, the point with coordinates (x0, y0) is a point of local maximum.

1. Now let Δ = AC – B² < 0.

2. Consider the quadratic trinomial At² + 2Bt + C; its discriminant is 4(B² – AC) > 0.

3. Hence there exist values t1 and t2 at which the trinomial takes values of opposite signs: At1² + 2Bt1 + C > 0 and At2² + 2Bt2 + C < 0.

4. We write the total increment of the function at the point, in accordance with the expression (I) obtained above, in the form

Δf = ½ Δy² · [(A + α1)t² + 2(B + α2)t + (C + α3)], where t = Δx/Δy (Δy ≠ 0).

5. Owing to the continuity of the second-order partial derivatives required by the conditions of the theorem at the point, α1, α2, α3 → 0 as Δx, Δy → 0. Therefore there is a neighborhood of the point in which, for t = t1, the perturbed quadratic trinomial is greater than zero:

(A + α1)t1² + 2(B + α2)t1 + (C + α3) > 0.

6. Consider a δ-neighborhood of the point. Choose any sufficiently small Δy ≠ 0 and set Δx = t1Δy in the formula for the increment of the function.

7. Since the trinomial is positive at t = t1, we obtain Δf > 0.

8. Arguing similarly for the value t2, we find that in any δ-neighborhood of the point there is also a point at which Δf < 0; therefore the increment Δf does not preserve its sign in any neighborhood of the point, and hence there is no extremum at the point.

Conditional extremum of a function of two variables

When finding extrema of a function of two variables, problems often arise related to the so-called conditional extremum. This concept can be explained using the example of a function of two variables.

Let a function z = f(x, y) and a line L be given in the plane Oxy. The task is to find a point P(x, y) on the line L at which the value of the function is the largest or smallest compared with the values of this function at points of the line L located near the point P. Such points P are called conditional extremum points of the function on the line L. Unlike an ordinary extremum point, the value of the function at a conditional extremum point is compared with the values of the function not at all points of some neighborhood of it, but only at those lying on the line L.

It is absolutely clear that the point of ordinary extremum (they also say unconditional extremum) is also the point of conditional extremum for any line passing through this point. The converse, of course, is not true: the conditional extremum point may not be the ordinary extremum point. Let us illustrate this with an example.

Example No. 1. The graph of the function z = √(1 – x² – y²) is the upper unit hemisphere (Fig. 2).

Fig. 2.

This function has a maximum at the origin; it corresponds to the vertex M of the hemisphere. If the line L is the straight line passing through the points A and B (its equation is x + y – 1 = 0), then it is geometrically clear that for the points of this line the greatest value of the function is attained at the point lying midway between the points A and B. This is the conditional extremum (maximum) point of the function on this line; it corresponds to the point M1 on the hemisphere, and from the figure it is clear that there can be no question of an ordinary extremum here.

Note that in the final part of the problem of finding the largest and smallest values of a function in a closed region we have to find the extreme values of the function on the boundary of this region, i.e. on some line, and thereby solve a conditional extremum problem.

Definition 1. The function z = f(x, y) is said to have a conditional (relative) maximum (minimum) at a point M0(x0, y0) satisfying the equation φ(x, y) = 0 if for any point M(x, y) of some neighborhood of M0 that satisfies the equation φ(x, y) = 0 the inequality f(x, y) ≤ f(x0, y0) (respectively f(x, y) ≥ f(x0, y0)) holds.

Definition 2. An equation of the form φ(x, y) = 0 is called a constraint equation.

Theorem

If the functions f(x, y) and φ(x, y) are continuously differentiable in a neighborhood of a point M0(x0, y0), the partial derivative φ′y(x0, y0) ≠ 0, and the point M0 is a conditional extremum point of the function f with respect to the constraint equation φ(x, y) = 0, then at this point the second-order determinant is equal to zero:

f′x(x0, y0)·φ′y(x0, y0) – f′y(x0, y0)·φ′x(x0, y0) = 0.

Proof

1. Since, according to the conditions of the theorem, the partial derivative φ′y(x0, y0) ≠ 0 and φ(x0, y0) = 0, in a certain rectangle containing M0 the constraint equation defines an implicit function

y = y(x), with y0 = y(x0).

The composite function of one variable z = f(x, y(x)) then has a local extremum at x0, and therefore dz = 0 at this point.

2. Indeed, by the invariance property of the form of the first-order differential,

dz = f′x dx + f′y dy = 0. (2)

3. The connection equation can be represented in the same form:

φ′x dx + φ′y dy = 0. (3)

4. Multiply equation (2) by φ′y and equation (3) by –f′y and add them:

(f′x·φ′y – f′y·φ′x) dx = 0.

Therefore, since dx is arbitrary, f′x·φ′y – f′y·φ′x = 0 at the point M0, i.e. the determinant vanishes. Q.e.d.

Corollary

In practice, the search for the conditional extremum points of a function of two variables is carried out by solving the system of equations

f′x(x, y)·φ′y(x, y) – f′y(x, y)·φ′x(x, y) = 0, φ(x, y) = 0.

So, in the above Example No. 1, from the connection equation we have y = 1 – x, and hence z = √(1 – x² – (1 – x)²) = √(2x – 2x²). From here it is easy to check that z reaches a maximum at x = 0.5. But then, from the connection equation, y = 0.5. We obtain the point P found geometrically.
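The corollary can also be checked symbolically for Example No. 1 (assuming, as above, the hemisphere z = √(1 − x² − y²) and the line x + y − 1 = 0):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f   = sp.sqrt(1 - x**2 - y**2)
phi = x + y - 1

# The determinant f'_x·φ'_y − f'_y·φ'_x together with the constraint equation.
det = sp.diff(f, x)*sp.diff(phi, y) - sp.diff(f, y)*sp.diff(phi, x)
print(sp.solve([det, phi], [x, y], dict=True))   # x = 1/2, y = 1/2 -- the point P
```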

Example No. 2. Find the conditional extremum points of the function relative to the coupling equation.

Let's find the partial derivatives of the given function and the coupling equation:

Let's create a second-order determinant:

Let's write a system of equations to find conditional extremum points:

Solving it, we find that the function has four conditional extremum points.
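Since the concrete function and coupling equation of Example No. 2 were not preserved in the source, here is a stand-in with the same outcome of four conditional extremum points (an assumed example, not the original one): f(x, y) = xy with the coupling equation x² + y² − 1 = 0.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f   = x*y                                   # assumed function
phi = x**2 + y**2 - 1                       # assumed coupling equation

det = sp.diff(f, x)*sp.diff(phi, y) - sp.diff(f, y)*sp.diff(phi, x)   # 2y² − 2x²
print(sp.solve([det, phi], [x, y]))
# four points (±sqrt(2)/2, ±sqrt(2)/2) -- the candidate conditional extremum points
```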

Example No. 3. Find the extremum points of the function.

Equating the partial derivatives to zero, we find one stationary point, the origin. Here AC – B² < 0, and consequently the point (0, 0) is not an extremum point. The equation of the surface is that of a hyperbolic paraboloid (Fig. 3), and from the figure it can also be seen that the point (0, 0) is not an extremum point.

Fig. 3.

The largest and smallest value of a function in a closed region

1. Let the function be defined and continuous in a bounded closed domain D.

2. Let the function have finite partial derivatives in this region, except for individual points of the region.

3. In accordance with Weierstrass's theorem, in this region there are points at which the function takes its largest and its smallest values.

4. If these points are interior points of the region D, then obviously the function has a maximum or a minimum at them.

5. In this case, the points of interest to us are among the points suspicious of an extremum (the critical points).

6. However, the function can also take on the largest or smallest value at the boundary of region D.

7. In order to find the largest (smallest) value of a function in the region D, one needs to find all interior points suspicious of an extremum, calculate the values of the function at them, and then compare them with the values of the function at the boundary points of the region; the largest of all the values found will be the largest value in the closed region D.

8. The method of finding a local maximum or minimum was discussed earlier, in Sections 1.2 and 1.3.

9. It remains to consider the method of finding the largest and smallest values of the function on the boundary of the region.

10. In the case of a function of two variables, the area is usually limited by a curve or several curves.

11. Along such a curve (or several curves) the variables x and y either depend on one another, or both depend on one parameter.

12. Thus, at the boundary the function turns out to depend on one variable.

13. The method of finding the largest value of a function of one variable was discussed earlier.

14. Let the boundary of the region D be given by the parametric equations

x = x(t), y = y(t), t1 ≤ t ≤ t2.

Then on this curve the function of two variables becomes a composite function of the parameter t: z = f(x(t), y(t)) = Φ(t). For such a function, the largest and smallest values are determined by the method for determining the largest and smallest values of a function of one variable, described above. A worked sketch of the whole procedure is given below.
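The procedure can be illustrated on an assumed example (not from the text): the largest and smallest values of f(x, y) = x² + y² − xy on the closed unit disk x² + y² ≤ 1.

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
f = x**2 + y**2 - x*y                       # assumed illustrative function

# Interior points suspicious of an extremum: both partial derivatives vanish.
interior = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
interior_vals = [f.subs(p) for p in interior]            # value 0 at (0, 0)

# Boundary: parametrize x = cos t, y = sin t and reduce to one variable.
g = sp.simplify(f.subs({x: sp.cos(t), y: sp.sin(t)}))    # simplifies to 1 - sin(2t)/2
tc = sp.solve(sp.diff(g, t), t)                          # representative critical values of t
boundary_vals = [sp.simplify(g.subs(t, s)) for s in tc]  # 1/2 and 3/2

candidates = interior_vals + boundary_vals
print(sp.Min(*candidates), sp.Max(*candidates))          # smallest 0, largest 3/2
```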

Necessary and sufficient conditions for an extremum of a function of two variables. A point M0(x0, y0) is called a minimum (maximum) point of a function z = f(x, y) if in a certain neighborhood of the point M0 the function is defined and satisfies the inequality f(x, y) ≥ f(x0, y0) (respectively f(x, y) ≤ f(x0, y0)). The maximum and minimum points are called the extremum points of the function.

A necessary condition for an extremum. If a function has first partial derivatives at an extremum point, then they vanish at this point. It follows that to find the extremum points of such a function one must solve the system of equations f′x(x, y) = 0, f′y(x, y) = 0. Points whose coordinates satisfy this system are called critical points of the function. Among them there may be maximum points, minimum points, and also points that are not extremum points.

Sufficient extremum conditions are used to identify extremum points from a set of critical points and are listed below.

Let the function have continuous second partial derivatives at a critical point M0(x0, y0), and set A = f″xx(M0), B = f″xy(M0), C = f″yy(M0). If at this point the condition AC – B² > 0 holds, then M0 is a minimum point for A > 0 and a maximum point for A < 0. If at a critical point AC – B² < 0, then it is not an extremum point. If AC – B² = 0, a more subtle study of the nature of the critical point is required, which in this case may or may not be an extremum point.

Extrema of functions of three variables. In the case of a function of three variables, the definitions of extremum points repeat verbatim the corresponding definitions for a function of two variables. We limit ourselves to presenting the procedure for studying a function for an extremum. Solving the system of equations f′x = 0, f′y = 0, f′z = 0, one finds the critical points of the function, and then at each critical point one calculates the values

Δ1 = A11, Δ2 = A11A22 – A12A21, Δ3 = det(Aij),

where Aij (i, j = 1, 2, 3) are the second-order partial derivatives of the function at the critical point. If all three quantities are positive, then the critical point in question is a minimum point; if the signs alternate, Δ1 < 0, Δ2 > 0, Δ3 < 0, then this critical point is a maximum point.
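A minimal sympy sketch of this test for an assumed three-variable example, f(x, y, z) = x² + y² + z² + xy:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
f = x**2 + y**2 + z**2 + x*y                # assumed illustrative function

crit = sp.solve([sp.diff(f, v) for v in (x, y, z)], [x, y, z], dict=True)
print(crit)                                 # [{x: 0, y: 0, z: 0}]

# Leading principal minors of the Hessian at the critical point.
H = sp.hessian(f, (x, y, z)).subs(crit[0])
minors = [H[:k, :k].det() for k in (1, 2, 3)]
print(minors)                               # [2, 3, 6] -- all positive, so a minimum point
```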

Conditional extremum of a function of two variables. A point M0(x0, y0) is called a conditional minimum (maximum) point of a function f(x, y) under the condition φ(x, y) = 0 if there is a neighborhood of the point M0 in which the function is defined and in which f(x, y) ≥ f(x0, y0) (respectively f(x, y) ≤ f(x0, y0)) for all points M(x, y) whose coordinates satisfy the equation φ(x, y) = 0.

To find conditional extremum points one uses the Lagrange function

L(x, y) = f(x, y) + λφ(x, y),

where the number λ is called the Lagrange multiplier. Solving the system of three equations

∂L/∂x = 0, ∂L/∂y = 0, φ(x, y) = 0,

one finds the critical points of the Lagrange function (as well as the value of the auxiliary multiplier λ). At these critical points the function may have a conditional extremum. The above system provides only necessary conditions for an extremum, not sufficient ones: it can be satisfied by the coordinates of points that are not conditional extremum points. However, the nature of a critical point can often be established from the substance of the problem itself.

Conditional extremum of a function of several variables. Let us consider a function of n variables, f(x1, x2, …, xn), provided that the variables are related by the equations

φk(x1, x2, …, xn) = 0, k = 1, 2, …, m, m < n.
