
Formulation of the problem. Find the maximum of the objective function using a graphical method

The objective function is a real- or integer-valued function of several variables that is subject to optimization (minimization or maximization) in order to solve an optimization problem. The term is used in mathematical programming, operations research, linear programming, statistical decision theory, and other areas of mathematics, primarily applied ones, although the goal of optimization may also be the solution of a purely mathematical problem. Besides the objective function, an optimization problem may specify constraints on the variables in the form of a system of equalities or inequalities. In the general case the arguments of the objective function can be defined on arbitrary sets.

Examples

Smooth functions and systems of equations

The problem of solving any system of equations

F_1(x_1, x_2, …, x_M) = 0,
F_2(x_1, x_2, …, x_M) = 0,
…,
F_N(x_1, x_2, …, x_M) = 0

can be formulated as a problem of minimizing the objective function

S = Σ_{j=1}^{N} F_j²(x_1, x_2, …, x_M)   (1)

If the functions are smooth, then the minimization problem can be solved using gradient methods.

For any smooth objective function, the partial derivatives with respect to all variables can be set equal to 0. The optimum of the objective function will be one of the solutions of this system of equations. In the case of function (1), this is the system of equations of the least squares method (LSM). Every solution of the original system is a solution of the least squares system. If the original system is inconsistent, the least squares system, which always has a solution, yields an approximate solution of the original system. The number of equations in the least squares system coincides with the number of unknowns, which sometimes simplifies the solution of consistent original systems.
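The minimization of (1) by a gradient method can be sketched in a few lines of Python. The small consistent system below (F_1 = x + y − 3, F_2 = x − y − 1, F_3 = 2x + y − 5) and the fixed step size 0.05 are illustrative assumptions, not data from the text; the gradient is approximated by central finite differences.

```python
# Minimizing S = sum_j F_j(x)^2 by plain gradient descent.

def make_S(funcs):
    def S(x):
        return sum(f(x) ** 2 for f in funcs)
    return S

def grad(f, x, h=1e-6):
    # central finite-difference gradient of f at point x
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

# F_1 = x + y - 3, F_2 = x - y - 1, F_3 = 2x + y - 5 (consistent: x = 2, y = 1)
funcs = [
    lambda v: v[0] + v[1] - 3,
    lambda v: v[0] - v[1] - 1,
    lambda v: 2 * v[0] + v[1] - 5,
]
S = make_S(funcs)

x = [0.0, 0.0]
for _ in range(500):
    g = grad(S, x)
    x = [xi - 0.05 * gi for xi, gi in zip(x, g)]
```

Since every F_j vanishes at (2, 1), the descent drives S to zero and recovers the solution of the original system.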

Linear programming

Another well-known example of an objective function is the linear function that arises in linear programming problems. In contrast to a quadratic objective function, optimization of a linear function is meaningful only in the presence of constraints in the form of a system of linear equalities or inequalities.

Combinatorial optimization

A typical example of a combinatorial objective function is the objective function of the traveling salesman problem. This function equals the length of a Hamiltonian cycle on the graph. It is defined on the set of permutations of the n − 1 vertices of the graph and is determined by the matrix of graph edge lengths. The exact solution of such problems often reduces to enumeration of variants.
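The traveling salesman objective can be written down directly: the tour length as a function of a permutation of the n − 1 remaining vertices, with the optimum found by enumerating variants. The 4-vertex edge-length matrix below is illustrative.

```python
# Combinatorial objective function of the traveling salesman problem:
# tour length over permutations of the n-1 non-starting vertices.
from itertools import permutations

D = [  # symmetric edge-length matrix (illustrative data)
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

def tour_length(perm):
    # Hamiltonian cycle: 0 -> perm[0] -> ... -> perm[-1] -> 0
    cycle = (0,) + perm + (0,)
    return sum(D[a][b] for a, b in zip(cycle, cycle[1:]))

# exact solution by enumeration of all (n-1)! variants
best = min(permutations(range(1, 4)), key=tour_length)
```

For this matrix the shortest cycle 0 → 1 → 3 → 2 → 0 has length 18; the factorial growth of the enumeration is exactly why larger instances need specialized methods.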

Chapter 1. Statement of the main linear programming problem

  1. Linear programming

Linear programming is a branch of mathematical programming that studies methods for solving extremal problems characterized by a linear dependence between the variables and a linear objective function. Such problems find extensive applications in various fields of human activity. Systematic study of problems of this type began in 1939–1940 in the works of L.V. Kantorovich.

Mathematical problems of linear programming include studies of specific production and economic situations, which in one form or another are interpreted as problems about the optimal use of limited resources.

The range of problems solved using linear programming methods is quite wide. These are, for example:

    the problem of optimal use of resources in production planning;

    mixture problem (product composition planning);

the problem of finding the optimal combination of various types of products for storage in warehouses (inventory management);

transportation problems (analysis of enterprise location, movement of goods).

Linear programming is the most developed and widely used branch of mathematical programming (which also includes integer, dynamic, nonlinear, and parametric programming). This is explained as follows:

    mathematical models of a large number of economic problems are linear with respect to the required variables;

problems of this type are currently the most thoroughly studied: special methods have been developed for them, with the help of which these problems are solved, together with the corresponding computer programs;

many linear programming problems, once solved, have found wide application;

some problems that are not linear in their original formulation can, after a number of additional restrictions and assumptions, become linear or be reduced to a form in which they can be solved by linear programming methods.

The economic-mathematical model of any linear programming problem includes: an objective function whose optimal (maximum or minimum) value must be found; constraints in the form of a system of linear equations or inequalities; and the requirement that the variables be non-negative.

In general form the model is written as follows:

objective function

F = c_1 x_1 + c_2 x_2 + … + c_n x_n → max (min);  (1.1)

with restrictions

a_11 x_1 + a_12 x_2 + … + a_1n x_n {≤, =, ≥} b_1,
…,
a_m1 x_1 + a_m2 x_2 + … + a_mn x_n {≤, =, ≥} b_m;  (1.2)

non-negativity requirements

x_j ≥ 0, j = 1, 2, …, n,  (1.3)

where x_j are the variables (unknowns) and c_j, a_ij, b_i are the coefficients of the linear programming problem.

The problem is to find the optimal value of function (1.1) subject to constraints (1.2) and (1.3).

The system of constraints (1.2) is called the functional constraints of the problem, and constraints (1.3) the direct ones.

A vector that satisfies constraints (1.2) and (1.3) is called an admissible solution (plan) of a linear programming problem. The plan under which function (1.1) reaches its maximum (minimum) value is called optimal.

1.2. Simplex method for solving linear programming problems

The simplex method was developed and first applied to problem solving in 1947 by the American mathematician G. Dantzig.

Two-dimensional linear programming problems are solved graphically. For the case n = 3 we can consider three-dimensional space, where the objective function reaches its optimal value at one of the vertices of a polyhedron.

An admissible solution (admissible plan) of an LP problem given in standard form is an ordered set of numbers (x1, x2, ..., xn) satisfying the restrictions; it is a point in n-dimensional space.

The set of admissible solutions forms the region of admissible solutions of the LP problem, which is a convex polyhedron (in the plane, a polygon).

In general, when the problem involves n unknowns, the region of feasible solutions defined by the system of limiting conditions is a convex polyhedron in n-dimensional space, and the optimal value of the objective function is achieved at one or more of its vertices.

A basic solution is a solution in which all free variables are equal to zero.

A support solution is a basic non-negative solution. A support solution can be non-degenerate or degenerate: it is called non-degenerate if the number of its non-zero coordinates equals the rank of the system, and degenerate otherwise.

An admissible solution at which the objective function reaches its extreme value is called optimal.

It is very difficult to solve these problems graphically when the number of variables is more than 3. There is a universal way to solve linear programming problems, called the simplex method.

The simplex method is a universal method for solving LP problems, which is an iterative process that starts with one solution and, in search of the best option, moves along the corner points of the region of feasible solutions until it reaches the optimal value.

It can be used to solve any linear programming problem.

The simplex method is based on the idea of ​​sequential improvement of the resulting solution.

The geometric meaning of the simplex method is a sequential transition from one vertex of the constraint polyhedron to a neighboring one at which the objective function takes a better (or at least no worse) value, until the optimal solution is found: the vertex at which the optimal value of the objective function is achieved (if the problem has a finite optimum).

Thus, having a system of constraints reduced to canonical form (all functional constraints have the form of equalities), one finds any basic solution of this system, caring only that it be found as simply as possible. If the first basic solution found turns out to be feasible, it is checked for optimality. If it is not optimal, a transition is made to another, necessarily feasible, basic solution. The simplex method guarantees that with this new solution the objective function, if it does not reach the optimum, approaches it (or at least does not move away from it). The new feasible basic solution is treated in the same way, until a solution is found that is optimal.

The process of applying the simplex method involves the implementation of its three main elements:

    a method for determining any initial feasible basic solution to a problem;

    the rule of transition to the best (more precisely, not worse) solution;

    criterion for checking the optimality of the solution found.

The simplex method includes a number of stages and can be formulated in the form of a clear algorithm (a clear instruction for performing sequential operations). This allows you to successfully program and implement it on a computer. Problems with a small number of variables and constraints can be solved manually using the simplex method.
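The stages above can in fact be written as a compact dense-tableau routine. This is a teaching sketch under simplifying assumptions, not the exact tableau layout used later in the text: the problem is taken already in the form max c·x subject to Ax ≤ b, x ≥ 0 with b ≥ 0, so the slack variables give an immediately feasible starting basis, and no anti-cycling rule is included. The example data are illustrative.

```python
def simplex_max(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0 (all b_i >= 0)."""
    m, n = len(A), len(c)
    # tableau rows: [A | I | b]; objective row: [-c | 0 | 0]
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    T.append([-ci for ci in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))          # slack variables start in the basis
    while True:
        # entering variable: most negative objective-row coefficient
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] >= -1e-9:
            break                           # no improving direction: optimal
        # ratio test chooses the leaving row
        ratios = [(T[i][-1] / T[i][col], i)
                  for i in range(m) if T[i][col] > 1e-9]
        if not ratios:
            raise ValueError("objective is unbounded")
        _, row = min(ratios)
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row and abs(T[i][col]) > 1e-12:
                factor = T[i][col]
                T[i] = [v - factor * w for v, w in zip(T[i], T[row])]
        basis[row] = col
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i][-1]
    return x, T[-1][-1]

# illustrative problem: max 3x + 5y, x <= 4, 2y <= 12, 3x + 2y <= 18
x, z = simplex_max([3.0, 5.0],
                   [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]],
                   [4.0, 12.0, 18.0])
```

Each pass of the loop is one transition between neighboring vertices; on this example the method stops at x = (2, 6) with objective value 36.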

Optimization. Part 1

6.1. Introduction

Optimization methods make it possible to choose the best design option from all possible options. In recent years much attention has been paid to these methods, and as a result a number of highly efficient algorithms have been developed that make it possible to find the optimal design option using a computer. This chapter outlines the basics of optimization theory, examines the principles underlying the construction of algorithms for optimal solutions, describes the best-known algorithms, and analyzes their advantages and disadvantages.

6.2. Fundamentals of optimization theory

The term “optimization” in the literature refers to a process or sequence of operations that allows one to obtain a refined solution. Although the ultimate goal of optimization is to find the best, or “optimal,” solution, one usually has to settle for improving known solutions rather than perfecting them. Therefore, optimization is understood rather as a desire for perfection, which may not be achieved.

Considering an arbitrary system described by m equations in n unknowns, we can distinguish three main types of problems. If m = n, the problem is called algebraic; such a problem usually has one solution. If m > n, the problem is overdetermined and, as a rule, has no solution. Finally, if m < n, the problem is underdetermined and has infinitely many solutions; it is to this type that optimization problems belong.

Before we begin discussing optimization issues, we introduce a number of definitions.

Design parameters

This term denotes independent variable parameters that completely and unambiguously define the design problem being solved. Design parameters are unknown quantities whose values are calculated during the optimization process. Any basic or derived quantities that serve to describe the system quantitatively can serve as design parameters; these may be, for example, unknown values of length, mass, time, or temperature. The number of design parameters characterizes the degree of complexity of a given design problem. The number of design parameters is usually denoted by n, and the design parameters themselves by x with corresponding indices. Thus, the n design parameters of a problem will be denoted by

x_1, x_2, x_3, …, x_n.

Objective function

It is an expression whose value the engineer strives to make as large or as small as possible. The objective function makes it possible to compare two alternative solutions quantitatively. From the mathematical point of view, the objective function describes some (n + 1)-dimensional surface. Its value is determined by the design parameters:

M = M(x_1, x_2, …, x_n).

Examples of objective functions often encountered in engineering practice are cost, weight, strength, dimensions, and efficiency. If there is only one design parameter, the objective function can be represented by a curve in the plane (Fig. 6.1). If there are two design parameters, the objective function is depicted as a surface in three-dimensional space (Fig. 6.2). With three or more design parameters the surfaces specified by the objective function are called hypersurfaces and cannot be depicted by ordinary means. The topological properties of the surface of the objective function play a major role in the optimization process, since the choice of the most efficient algorithm depends on them.

The objective function can in some cases take the most unexpected forms. For example, it is not always possible to express it in closed mathematical form; in other cases it may be a piecewise smooth function. Specifying an objective function may sometimes require a table of technical data (for example, a steam table) or may require an experiment. In some cases design parameters take only integer values: an example would be the number of teeth in a gear or the number of bolts in a flange. Sometimes design parameters have only two values, yes or no. Qualitative parameters, such as the satisfaction experienced by the buyer who purchased the product, reliability, or aesthetics, are difficult to take into account in the optimization process, since they are almost impossible to characterize quantitatively. However, no matter how the objective function is represented, it must be a single-valued function of the design parameters.

Fig. 6.1. One-dimensional objective function.

Fig. 6.2. Two-dimensional objective function.

A number of optimization problems require the introduction of more than one objective function. Sometimes one of them may turn out to be incompatible with the other. An example is aircraft design, where maximum strength, minimum weight and minimum cost are simultaneously required. In such cases, the designer must introduce a system of priorities and assign a certain dimensionless multiplier to each objective function. As a result, a “compromise function” appears, allowing the use of one composite objective function during the optimization process.

Finding minimum and maximum

Some optimization algorithms are designed to find a maximum, others a minimum. However, regardless of the type of extremum problem being solved, the same algorithm can be used, since a minimization problem is easily turned into a maximization problem by reversing the sign of the objective function. This technique is illustrated in Fig. 6.3.
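The sign-reversal technique of Fig. 6.3 takes a single line in code; the concave test function and search grid below are illustrative.

```python
# Reversing the sign of the objective function turns a maximization
# problem into a minimization problem: max f(x) == -min(-f(x)).
f = lambda x: -(x - 2) ** 2 + 5          # concave, peak f(2) = 5
grid = [i / 100 for i in range(0, 401)]  # x in [0, 4], step 0.01

max_direct = max(f(x) for x in grid)
max_via_min = -min(-f(x) for x in grid)
```

Both computations return the same value, so any minimizer can serve as a maximizer (and vice versa) at the cost of one sign flip.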

Design space

This is the name of the region defined by all n design parameters. The design space is not as large as it may seem, since it is usually limited by a number of conditions related to the physical essence of the problem. The constraints may be so strong that the problem has no satisfactory solution at all. Constraints are divided into two groups: equality constraints and inequality constraints.

Fig. 6.3. By reversing the sign of the objective function, a maximization problem turns into a minimization problem.

Equality constraints

Equality constraints are dependencies between design parameters that must be taken into account when finding a solution. They reflect laws of nature, economics, law, prevailing tastes, and the availability of necessary materials. The number of equality constraints can be arbitrary. They have the form

C_1(x_1, x_2, …, x_n) = 0,

C_2(x_1, x_2, …, x_n) = 0,

…,

C_j(x_1, x_2, …, x_n) = 0.

If any of these relations can be solved for one of the design parameters, this allows that parameter to be excluded from the optimization process. This reduces the dimension of the design space and simplifies the solution of the problem.

Inequality constraints

This is a special type of constraint expressed by inequalities. In general there can be any number of them, and they all have the form

z_1 ≤ r_1(x_1, x_2, …, x_n) ≤ Z_1,

z_2 ≤ r_2(x_1, x_2, …, x_n) ≤ Z_2,

…,

z_k ≤ r_k(x_1, x_2, …, x_n) ≤ Z_k.

It should be noted that, because of the restrictions, the optimal value of the objective function is very often achieved not where its surface has a zero gradient; often the best solution corresponds to one of the boundaries of the design space.

Local optimum

This is the name of the point in the design space at which the objective function has the highest value compared with its values at all other points in its immediate vicinity.

Fig. 6.4. An arbitrary objective function can have several local optima.

Figure 6.4 shows a one-dimensional objective function that has two local optima. The design space often contains many local optima, and care must be taken not to mistake the first one found for the optimal solution of the problem.

Global optimum

The global optimum is the optimal solution for the entire design space. It is better than all other solutions corresponding to local optima, and it is what the designer seeks. It is possible for several equal global optima to exist in different parts of the design space. How an optimization problem is posed is best illustrated by an example.

Example 6.1

Suppose we need to design a rectangular container with a volume of 1 m³ intended for transporting unpackaged fiber. It is desirable that as little material as possible be spent on manufacturing such a container (with constant wall thickness this means that the surface area should be minimal), since it will then be cheaper. For the container to be convenient to pick up with a forklift, its width must be at least 1.5 m.

Let us formulate this problem in a form convenient for applying the optimization algorithm.

Design parameters: x_1, x_2, x_3 (the edge lengths of the container).

The objective function (which is to be minimized) is the surface area of the container:

A = 2(x_1 x_2 + x_2 x_3 + x_1 x_3), m².

Equality constraint:

Volume = x_1 x_2 x_3 = 1 m³.

Inequality constraint: the width of the container (say, x_2) must satisfy x_2 ≥ 1.5 m.
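A quick numerical check of Example 6.1 (a sketch, assuming x_2 is the width bounded below by 1.5 m): the equality constraint is used to eliminate x_3 = 1/(x_1 x_2), which reduces the dimension of the design space, and a coarse grid scan over the remaining two design parameters locates the minimum.

```python
# Example 6.1: minimize A = 2(x1*x2 + x2*x3 + x1*x3) subject to
# x1*x2*x3 = 1 (volume) and x2 >= 1.5 (width bound, an assumption
# about which parameter is the width).

def area(x1, x2):
    x3 = 1.0 / (x1 * x2)   # volume equality constraint resolved for x3
    return 2 * (x1 * x2 + x2 * x3 + x1 * x3)

best = min(
    ((area(i / 100, j / 100), i / 100, j / 100)
     for i in range(10, 301)        # x1 from 0.10 to 3.00, step 0.01
     for j in range(150, 301)),     # x2 from 1.50 (width bound) to 3.00
    key=lambda t: t[0],
)
best_area, best_x1, best_x2 = best
```

The scan confirms that the inequality constraint is binding: the minimum sits on the boundary x_2 = 1.5 m with x_1 = x_3 ≈ 0.82 m and A ≈ 6.23 m², not at the unconstrained optimum (the 1 m cube, A = 6 m², which violates the width bound).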

Linear programming problems

Linear programming (LP) is one of the branches of mathematical programming - a discipline that studies extremal (optimization) problems and develops methods for solving them.

An optimization problem is a mathematical problem that consists in finding the optimal (i.e., maximum or minimum) value of the objective function, where the values of the variables must belong to a certain region of admissible values.

In the general case, the formulation of an extremal problem of mathematical programming consists in determining the largest or smallest value of a function, called the objective function, under given conditions (constraints) defined by given functions and constant values. Restrictions in the form of equalities and inequalities determine the set (region) of admissible solutions, and the variables of the problem are called design parameters.

Depending on the type of functions, mathematical programming problems are divided into a number of classes (linear, nonlinear, convex, integer, stochastic, dynamic programming, etc.).

In general form the LP problem is as follows:

F = c_1 x_1 + c_2 x_2 + … + c_n x_n → max (min),  (5.1)

a_i1 x_1 + a_i2 x_2 + … + a_in x_n ≤ b_i, i = 1, 2, …, k,  (5.2)

a_i1 x_1 + a_i2 x_2 + … + a_in x_n = b_i, i = k + 1, …, m,  (5.3)

x_j ≥ 0, j = 1, 2, …, n,  (5.4)

where a_ij, b_i, c_j are given constant values.

Function (5.1) is called the objective function; (5.2) and (5.3) form the system of restrictions; condition (5.4) is the condition of non-negativity of the design parameters.

A set of design parameters satisfying constraints (5.2), (5.3) and (5.4) is called an admissible solution, or plan.

The optimal solution, or optimal plan, of an LP problem is an admissible solution at which the objective function (5.1) takes its optimal (maximum or minimum) value.

The standard LP problem is the problem of finding the maximum (minimum) value of the objective function (5.1) under conditions (5.2) and (5.4), i.e. with restrictions only in the form of inequalities (5.2), with all design parameters satisfying the non-negativity condition, and with no conditions in the form of equalities:

F = c_1 x_1 + … + c_n x_n → max (min),

a_i1 x_1 + … + a_in x_n ≤ b_i, i = 1, 2, …, m,  (5.5)

x_j ≥ 0, j = 1, 2, …, n.

The canonical (main) LP problem is the problem of finding the maximum (minimum) value of the objective function (5.1) under conditions (5.3) and (5.4), i.e. with restrictions only in the form of equalities (5.3), with all design parameters satisfying the non-negativity condition, and with no conditions in the form of inequalities:

F = c_1 x_1 + … + c_n x_n → max (min),

a_i1 x_1 + … + a_in x_n = b_i, i = 1, 2, …, m,

x_j ≥ 0, j = 1, 2, …, n.

The canonical LP problem can also be written in matrix and vector form.

The matrix form of the canonical LP problem is as follows:

F = CX → max (min), AX = B, X ≥ 0,

where C is the row vector of objective-function coefficients, X the column vector of variables, A the matrix of constraint coefficients, and B the column vector of right-hand sides. The vector form of the canonical LP problem is written similarly in terms of the column vectors A_j of the matrix A: x_1 A_1 + x_2 A_2 + … + x_n A_n = B.

Let us construct on the plane a set of feasible solutions to the system of linear inequalities and geometrically find the minimum value of the objective function.

We draw the straight lines in the x_1Ox_2 coordinate system.

We find the half-planes defined by the system. Since each inequality of the system is satisfied at every point of the corresponding half-plane, it is enough to check it at any one point. We use the point (0;0): substituting its coordinates into the first inequality of the system shows that it is not satisfied, so this inequality defines a half-plane that does not contain the point (0;0). We determine the remaining half-planes similarly and find the set of feasible solutions as the common part of the resulting half-planes: the shaded region.

We construct the vector of objective-function coefficients and a zero level line perpendicular to it.


Moving line (5) in the direction of the vector, we see that the maximum point of the region will be point A, the intersection of lines (3) and (2). We find the solution of the system of equations:

Thus we obtain the point (13;11), at which the objective function attains its maximum.

Moving line (5) in the opposite direction to the vector, we see that the minimum point of the region will be point B, the intersection of lines (1) and (4). We find the solution of the system of equations:

Thus we obtain the point (6;6), at which the objective function attains its minimum.
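The graphical procedure (half-plane test at (0;0), then moving the level line) can be mechanized for any two-variable LP by enumerating the vertices of the feasible polygon. The constraint system and objective below are illustrative, since the original system of this example is not reproduced in the text.

```python
# Vertex enumeration for a two-variable LP: every corner of the feasible
# polygon is an intersection of two constraint boundary lines.
from itertools import combinations

# constraints a*x + b*y <= c, including x >= 0 and y >= 0 (illustrative)
cons = [(1, 1, 8), (1, -1, 4), (-1, 2, 6), (-1, 0, 0), (0, -1, 0)]

def feasible(p, eps=1e-9):
    x, y = p
    return all(a * x + b * y <= c + eps for a, b, c in cons)

def intersect(c1, c2):
    (a1, b1, d1), (a2, b2, d2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None                     # parallel boundary lines
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

vertices = [p for c1, c2 in combinations(cons, 2)
            if (p := intersect(c1, c2)) and feasible(p)]
f = lambda p: 2 * p[0] + 3 * p[1]       # illustrative objective
vmax = max(vertices, key=f)
vmin = min(vertices, key=f)
```

For these data the level line last touches the polygon at vmax = (10/3, 14/3) with f = 62/3, and the minimum is at the origin, mirroring the by-hand procedure above.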

2. A furniture company produces combination cabinets and computer tables. Their production is limited by the availability of raw materials (high-quality boards and fittings) and by the machine time available for processing. Each cabinet requires 5 m² of boards, each table 2 m². Fittings cost $10 per cabinet and $8 per table. The company can receive from its suppliers up to 600 m² of boards per month and fittings worth $2,000. Each cabinet requires 7 hours of machine time, each table 3 hours; a total of 840 machine hours per month are available.

How many combination cabinets and computer tables should the company produce per month to maximize profit, if one cabinet brings in $100 of profit and each table $50?

  • 1. Compose a mathematical model of the problem and solve it using the simplex method.
  • 2. Compose a mathematical model of the dual problem and write down its solution based on the solution of the original one.
  • 3. Establish the degree of scarcity of the resources used and justify the profitability of the optimal plan.
  • 4. Explore the possibilities of further increasing production output depending on the use of each type of resource.
  • 5. Assess the feasibility of introducing a new type of product, bookshelves, if the manufacture of one shelf requires 1 m² of boards and fittings worth $5, takes 0.25 hours of machine time, and the profit from the sale of one shelf is $20.

1. Let us build a mathematical model for this problem:

Let x_1 denote the number of cabinets produced and x_2 the number of tables. From the problem data the system of restrictions and the objective function are:

5x_1 + 2x_2 ≤ 600 (boards),
10x_1 + 8x_2 ≤ 2000 (fittings),
7x_1 + 3x_2 ≤ 840 (machine time),
x_1 ≥ 0, x_2 ≥ 0;
f = 100x_1 + 50x_2 → max.

We solve the problem using the simplex method. Let's write it in canonical form:

Let's write the task data in the form of a table:

Table 1

Since all the deltas are now greater than zero, a further increase in the value of the objective function f is impossible, and we have obtained the optimal plan.
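The furniture model can be cross-checked by enumerating the corner points of the feasible polygon (the simplex method inspects the same points, only in a guided order). The data below are taken from the problem statement: maximize 100x_1 + 50x_2 subject to 5x_1 + 2x_2 ≤ 600, 10x_1 + 8x_2 ≤ 2000, 7x_1 + 3x_2 ≤ 840, x_1, x_2 ≥ 0.

```python
# Corner-point check of the furniture LP.
from itertools import combinations

# constraints a*x1 + b*x2 <= c, including the non-negativity conditions
cons = [(5, 2, 600), (10, 8, 2000), (7, 3, 840), (-1, 0, 0), (0, -1, 0)]
profit = lambda p: 100 * p[0] + 50 * p[1]

def corner(c1, c2):
    (a1, b1, d1), (a2, b2, d2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None                     # parallel boundary lines
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

points = [p for c1, c2 in combinations(cons, 2)
          if (p := corner(c1, c2))
          and all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in cons)]
best = max(points, key=profit)
```

On these data the best corner is x_1 = 360/13 ≈ 27.7 cabinets and x_2 = 2800/13 ≈ 215.4 tables, with profit 176000/13 ≈ $13,538.46 per month; fittings and machine time are binding there, while the board supply has slack.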


Introduction

The current stage of human development is distinguished by the fact that the age of energy is giving way to the age of information. New technologies are being intensively introduced into all spheres of human activity. The real problem of transition to an information society has arisen, and for it the development of education should become a priority. The structure of knowledge in society is also changing: fundamental knowledge that contributes to the creative development of the individual is becoming increasingly important for practical life. The constructiveness of the acquired knowledge and the ability to structure it in accordance with the goal are also important. New information resources of society are formed on the basis of knowledge. The formation and acquisition of new knowledge should rest on the strict methodology of the systems approach, within which the model approach occupies a special place. The possibilities of the model approach are extremely diverse, both in the formal models used and in the ways modeling methods are implemented. Physical modeling allows reliable results to be obtained for fairly simple systems.

Currently it is impossible to name an area of human activity in which modeling methods are not used to one degree or another. This is especially true of the management of various systems, where the main processes are decision-making based on the information received.

1. Statement of the problem


Solve the problem of finding the minimum of the objective function for the system of constraints specified by the solution polygon in accordance with option No. 16 of the task. The solution polygon is shown in Figure 1:

Figure 1 - Polygon of solutions to the problem

The system of constraints and the objective function of the problem are presented below:

It is necessary to solve the problem using the following methods:

Graphical method for solving LP problems;

Algebraic method for solving LP problems;

Simplex method for solving LP problems;

Method for finding an admissible solution to LP problems;

Solution of the dual LP problem;

Branch and bound method for solving integer LP problems;

Gomory method for solving integer LP problems;

Balas method for solving Boolean LP problems.

Compare the results obtained by the different methods and draw the appropriate conclusions.

2. Graphical solution to the linear programming problem

The graphical method for solving linear programming problems is used when the number of unknowns does not exceed three. It is convenient for qualitative study of the properties of solutions and is used in conjunction with other methods (algebraic, branch and bound, etc.). The idea of the method is based on the graphical solution of a system of linear inequalities.

Fig. 2. Graphical solution of the LP problem

Minimum point

Equation of a line passing through two points A_1(x_1; y_1) and A_2(x_2; y_2): (x − x_1)/(x_2 − x_1) = (y − y_1)/(y_2 − y_1).

AB: (0;1); (3;3)

BC: (3;3); (4;1)

CD: (4;1); (3;0)

EA: (1;0); (0;1)

CF: (0;1); (5;2)

with restrictions:

Solving a linear programming problem using the algebraic simplex method

The use of the algebraic method for solving the problem requires a generalization of the representation of the LP problem. The original system of restrictions, specified in the form of inequalities, is converted to the standard form of notation, in which the restrictions are specified in the form of equalities. Transformation of the constraint system to the standard form includes the following steps:

Transform the inequalities so that the variables and free terms are on the left and 0 is on the right, with the left side greater than or equal to zero;

Introduce additional variables, the number of which is equal to the number of inequalities in the system of constraints;

By introducing additional restrictions on the non-negativity of the added variables, replace the inequality signs with equality signs.
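The conversion steps above can be expressed mechanically: every inequality Σ a_ij x_j ≤ b_i receives its own non-negative additional (slack) variable and becomes an equality. The 2×2 system below is illustrative.

```python
# Converting A x <= b into the standard (equality) form [A | I](x, s) = b,
# where s >= 0 are the additional slack variables.

def to_standard_form(A, b):
    m = len(A)
    A_std = [row + [1.0 if j == i else 0.0 for j in range(m)]
             for i, row in enumerate(A)]
    return A_std, b[:]

A = [[1.0, 2.0], [3.0, 1.0]]   # illustrative system
b = [4.0, 5.0]
A_std, b_std = to_standard_form(A, b)
# row 0 now reads: x1 + 2*x2 + s1 = 4, with s1 >= 0
```

The number of added variables equals the number of inequalities, exactly as in the second step of the list above.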

When solving an LP problem using the algebraic method, a condition is added: the objective function must tend to a minimum. If this condition is not satisfied, it is necessary to transform the objective function accordingly (multiply by -1) and solve the minimization problem. After the solution is found, substitute the values ​​of the variables into the original function and calculate its value.

The solution to a problem using the algebraic method is considered optimal when the values ​​of all basic variables are non-negative, and the coefficients of the free variables in the objective function equation are also non-negative. If these conditions are not met, it is necessary to transform the system of inequalities, expressing some variables in terms of others (changing free and basic variables) to achieve the fulfillment of the above restrictions. The value of all free variables is considered equal to zero.

The algebraic method is one of the most effective for solving small problems manually, because it does not require a large number of arithmetic calculations. Its machine implementation is more complicated than, for example, that of the simplex method, because the solution algorithm of the algebraic method is to some extent heuristic, and the effectiveness of the solution largely depends on personal experience.


The non-negativity conditions are met, therefore, the optimal solution has been found.

3. Solving a linear programming problem using a simplex table

Solution: Let us reduce the problem to a standard form for solution using a simplex table.

Let us reduce all equations of the system to the form:

We build a simplex table:

In the upper corner of each cell of the table we enter the coefficients from the system of equations;

We select the maximum positive element in row F (not counting the free-term column); its column will be the general column;

To find the general element, we form the ratios of the free terms to the positive elements of the general column: 3/3; 9/1. The minimum ratio is in row x3; therefore this is the general row and 3 is the general element.

We find the reciprocal of the general element, 1/3, and enter it in the lower corner of the cell containing the general element;

In all empty lower corners of the general row we enter the product of the value in the upper corner of the cell and the number in the lower corner of the general cell;

Select the upper corners of the general line;

In all lower corners of the general column we enter the product of the value in the upper corner and the negated number in the lower corner of the general cell, and select the resulting values;

The remaining cells of the table are filled in as products of the corresponding selected elements;

Then we build a new table in which the designations of the cells of the elements of the general column and row are swapped (x2 and x3);

The values ​​that were previously in the lower corner are written into the upper corner of the former general row and column;

The sum of the values of the upper and lower corners of these cells in the previous table is written in the upper corner of the remaining cells.

4. Solving a linear programming problem by finding an admissible solution

Let a system of linear algebraic equations be given:

We can assume that all right-hand sides are non-negative; otherwise we multiply the corresponding equation by −1.

We introduce auxiliary variables:

We also introduce an auxiliary function

We will minimize function (3) under restrictions (2) and the non-negativity conditions.

RULE FOR FINDING AN ADMISSIBLE SOLUTION: to find an admissible solution of system (1), we minimize form (3) under restrictions (2), taking the x_j as the free unknowns and the auxiliary variables as the basic ones.

When solving a problem using the simplex method, two cases may arise:

min f = 0: then all the auxiliary variables must be equal to zero, and the resulting values of x_j constitute an admissible solution of system (1).

If min f > 0, the original system has no feasible solution.
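The feasibility test can be sketched directly: the auxiliary function is the sum of the artificial variables, and it vanishes exactly at feasible points. A minimal Python check on a hypothetical 2×2 system (the coefficients below are illustrative, not the ones from the task):

```python
def auxiliary_value(A, b, x):
    """f = sum of artificial variables x_{n+i} = b_i - sum_j a_ij * x_j.

    For a system of equalities with nonnegative free terms, f(x) == 0 with
    every artificial variable zero means x is a feasible solution."""
    artificials = [bi - sum(aij * xj for aij, xj in zip(row, x))
                   for row, bi in zip(A, b)]
    return sum(artificials), artificials

# hypothetical system: x1 + x2 = 6, x1 - x2 = 0  (all free terms >= 0)
A = [[1.0, 1.0], [1.0, -1.0]]
b = [6.0, 0.0]
f, arts = auxiliary_value(A, b, [3.0, 3.0])
# f == 0 and all artificial variables are zero, so (3, 3) is feasible
```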

Source system:

The condition of the problem from the previous topic is used.

Let's introduce additional variables:

An admissible solution to the original problem has been found: x1 = 3, x2 = 3, F = -12. Based on the obtained feasible solution, we will find the optimal solution to the original problem using the simplex method. To do this, we build a new simplex table from the table obtained above, removing the auxiliary row and the row with the objective function of the auxiliary problem:

Analyzing the constructed simplex table, we see that the optimal solution for the original problem has already been found (the elements in the row corresponding to the objective function are negative). Thus, the feasible solution found when solving the auxiliary problem coincides with the optimal solution to the original problem:

6. Dual linear programming problem

The original system of constraints and the objective function of the problem are shown in the figure below.

with restrictions:

Solution: Let’s bring the system of restrictions to a standard form:

The problem dual to this one will have the form:

The dual problem will be solved using the ordinary simplex method.

Let us transform the objective function so that the minimization problem is solved, and write the system of constraints in standard form for solving using the simplex method.

y6 = 1 - (-2 y1 + 2y2 +y3 + y4+ y5)

y7 = 5 - (-3y1 - y2 + y3 + y4)

Ф = 0 - (3y1 + 9y2 + 3y3 + y4) → min
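The passage from the primal to the dual problem is mechanical: for max {c·x : Ax ≤ b, x ≥ 0} the dual is min {b·y : Aᵀy ≥ c, y ≥ 0}. A small sketch of that transformation (the numbers are illustrative, not the coefficients of this task):

```python
def dual_of_max(c, A, b):
    """Dual of the primal max {c.x : A x <= b, x >= 0}.

    Returns the dual data (objective coefficients, constraint matrix,
    right-hand sides) for min {b.y : A^T y >= c, y >= 0}."""
    At = [list(col) for col in zip(*A)]  # transpose of A
    return b, At, c

# illustrative primal with 2 variables and 3 constraints
c = [3, 9]
A = [[1, 2], [2, 1], [1, 1]]
b = [6, 8, 5]
obj, G, h = dual_of_max(c, A, b)
# dual: min 6y1 + 8y2 + 5y3  s.t.  y1 + 2y2 + y3 >= 3,  2y1 + y2 + y3 >= 9
```

By strong duality the optimal values of the two problems coincide, which is exactly the check performed below when Фmax is compared with Fmin.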

Let us construct an initial simplex table for solving the dual LP problem.

Second step of the simplex method

So, at the third step of the simplex method, an optimal solution to the minimization problem was found with the following results: y2 = -7/8, y1 = -11/8, Ф = 12. To find the value of the objective function of the dual problem, we substitute the found values of the basic and free variables into the maximization function:

Фmax = - Фmin = 3*(-11/8) + 9(-7/8) + 3*0 + 0 = -12

Since the values of the objective functions of the direct and dual problems coincide, the solution to the direct problem has been found and equals -12.

Fmin = Фmax = -12

7. Solving an integer linear programming problem using the branch-and-bound method

Let us transform the original problem so that the integrality condition is violated when it is solved by conventional (continuous) methods.

Initial polygon of solutions to an integer programming problem.

For the transformed solution polygon we construct a new system of restrictions.

Let us write down the system of restrictions in the form of equalities to be solved using the algebraic method.

As a result, the optimal plan for the problem was found: x1 = 9/4, x2 = 5/2, F = -41/4. This solution does not satisfy the integrality condition set in the problem. Let us divide the original solution polygon into two areas, excluding area 3 from it.
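The split into two regions is the standard branching step: a fractional coordinate of the relaxation optimum, here x1 = 9/4, produces the subproblems x1 ≤ 2 and x1 ≥ 3. A minimal sketch of that rule:

```python
import math

def branch_on(x, eps=1e-9):
    """Pick the first fractional coordinate of an LP relaxation optimum and
    return its index plus the two branching constraints (floor / ceil)."""
    for k, v in enumerate(x):
        if abs(v - round(v)) > eps:
            return k, ('<=', math.floor(v)), ('>=', math.ceil(v))
    return None  # every coordinate is integer: no branching needed

branches = branch_on([9 / 4, 5 / 2])   # the relaxation optimum from the text
# branches == (0, ('<=', 2), ('>=', 3)): the regions x1 <= 2 and x1 >= 3
```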

Modified problem solution polygon

Let's create new systems of restrictions for the resulting areas of the solution polygon. The left area is a quadrilateral (trapezoid). The system of restrictions for the left region of the solution polygon is presented below.

Restriction system for the left area

The right area represents point C.

The system of restrictions for the right decision region is presented below.

The new constraint systems represent two auxiliary problems that need to be solved independently of each other. Let's solve an integer programming problem for the left region of the solution polygon.

As a result of the solution, the optimal plan for the problem was found: x1 = 3, x2 = 3, F = -12. This plan satisfies the condition that the variables in the problem are integer and can be accepted as the optimal reference plan for the original integer linear programming problem. There is no point in solving for the right solution region. The figure below shows the progress of solving an integer linear programming problem in the form of a tree.

Progress of solving an integer linear programming problem using the Gomory method.

In many practical applications, the integer programming problem, in which a system of linear inequalities and a linear form are given, is of great interest.

It is required to find an integer solution of system (1) that minimizes the objective function F; all coefficients are integers.

One of the methods for solving the integer programming problem was proposed by Gomory. The idea of the method is to use continuous linear programming methods, in particular the simplex method.

1) Using the simplex method, the solution to problem (1), (2) is determined with the integrality requirement removed; if the solution turns out to be integer, then the desired solution to the integer problem has also been found;

2) Otherwise, if some coordinate is not an integer, the resulting solution is checked for the possibility of an integer solution existing (the presence of integer points in the admissible polyhedron):

if in any row with a fractional free term all the other coefficients turn out to be integers, then there are no integer points in the admissible polyhedron and the integer programming problem has no solution;

Otherwise, an additional linear constraint is introduced, which cuts off a part of the admissible polyhedron that is unpromising for finding a solution to the integer programming problem;

3) To construct an additional linear constraint, select the l-th row with a fractional free term and write the additional constraint

where the symbols denote, respectively, the fractional parts of the coefficients and of the free term. Let us introduce an auxiliary variable into constraint (3):

Let us determine the coefficients included in constraint (4):

where the bracketed values are the nearest integers from below (floors) of the corresponding coefficients.

Gomory proved that a finite number of such steps leads to a linear programming problem whose solution is integer and is, therefore, the desired one.
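The cut in step 3 is built from fractional parts only. A minimal sketch, with an illustrative simplex row (not the one from this task):

```python
import math

def gomory_cut(row, rhs):
    """Gomory fractional cut from the l-th simplex row with fractional rhs:
    sum_j frac(a_lj) * x_j >= frac(b_l), where frac(v) = v - floor(v),
    so the fractional part is always in [0, 1), even for negative v."""
    frac = lambda v: v - math.floor(v)
    return [frac(a) for a in row], frac(rhs)

# illustrative row with nonbasic coefficients -3/4 and 1/2 and free term 9/4
coeffs, f0 = gomory_cut([-0.75, 0.5], 2.25)
# cut: 0.25*x3 + 0.5*x4 >= 0.25   (note frac(-0.75) = 0.25)
```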

Solution: Let us bring the system of linear constraints and the objective function to canonical form:

Let us determine the optimal solution to the system of linear constraints, temporarily discarding the integrality condition, using the simplex method. Below, the tables successively present the original solution of the problem and the transformations of the original table leading to the optimal solution:

Solving Boolean LP problems using the Balas method.

Create your own version of an integer linear programming problem with Boolean variables, taking into account the following rules: the problem uses at least 5 variables and at least 4 constraints; the coefficients of the constraints and the objective function are chosen arbitrarily, but so that the system of constraints is compatible. The task is to solve the ILP problem with Boolean variables using the Balas algorithm and to determine the reduction in computational effort relative to solving the problem by exhaustive search.

Fulfillment of the constraints

F value

Filtering constraint:

Determination of computational effort reduction

Solving the problem by exhaustive search requires 6·2⁵ = 192 evaluated expressions. Solving it by the Balas method requires 3·6 + (2⁵ - 3) = 47 evaluated expressions. The overall reduction in computational effort relative to exhaustive search is:
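Such effort counts can be reproduced by instrumenting both searches with an evaluation counter. The sketch below uses an illustrative compatible system of 5 Boolean variables and 5 constraints (6 expressions per point, hence 6·2⁵ = 192 for exhaustive search) and a simplified filtering rule, objective-value filtering only, not Balas' full additive tests:

```python
from itertools import product

def exhaustive(c, A, b):
    """Full enumeration of Boolean vectors for min c.x s.t. A x >= b,
    counting every evaluated expression (objective plus each constraint)."""
    evals, best = 0, None
    for x in product((0, 1), repeat=len(c)):
        evals += 1                                    # objective evaluation
        val = sum(ci * xi for ci, xi in zip(c, x))
        feasible = True
        for row, bi in zip(A, b):
            evals += 1                                # constraint evaluation
            if sum(aij * xi for aij, xi in zip(row, x)) < bi:
                feasible = False
        if feasible and (best is None or val < best):
            best = val
    return best, evals

def filtered(c, A, b):
    """Same search with filtering: candidates no better than the incumbent
    skip the constraint checks, and checking stops at the first violation."""
    evals, best = 0, None
    for x in product((0, 1), repeat=len(c)):
        evals += 1
        val = sum(ci * xi for ci, xi in zip(c, x))
        if best is not None and val >= best:
            continue                                  # filtered out
        ok = True
        for row, bi in zip(A, b):
            evals += 1
            if sum(aij * xi for aij, xi in zip(row, x)) < bi:
                ok = False
                break
        if ok:
            best = val
    return best, evals

# illustrative compatible system: 5 Boolean variables, 5 constraints A x >= b
c = [3, 2, 4, 1, 5]
A = [[1, 1, 1, 1, 1], [1, 1, 0, 0, 0], [0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1], [1, 0, 1, 0, 1]]
b = [2, 1, 1, 1, 1]
best_e, evals_e = exhaustive(c, A, b)   # evals_e == 6 * 2**5 == 192
best_f, evals_f = filtered(c, A, b)     # same optimum, fewer evaluations
```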

Conclusion

The process of designing information systems that implement new information technology is constantly being improved. The focus of systems engineers is increasingly on complex systems, making it difficult to use physical models and increasing the importance of mathematical models and machine simulation of systems. Machine simulation has become an effective tool for studying and designing complex systems. The relevance of mathematical models is continuously increasing due to their flexibility, adequacy to real processes, and low cost of implementation on the basis of modern PCs. More and more opportunities are provided to the user, i.e., a specialist in modeling systems using computer technology. The use of modeling is especially effective in the early stages of designing automated systems, when the cost of erroneous decisions is most significant.

Modern computing tools have made it possible to significantly increase the complexity of the models used when studying systems; it has become possible to build combined, analytical and simulation models that take into account the whole variety of factors that occur in real systems, i.e., the use of models that are more adequate to the phenomena under study.



Dividing the third row by the key element, equal to 5, we obtain the third row of the new table.

Basic columns correspond to unit columns.

Calculation of other table values:

“BP – Basic Plan”:

; ;

"x1": ; ;

"x5": ; .

The values of the index row are non-negative; therefore, the solution obtained is optimal.

Answer: the maximum profit from the sale of manufactured products, equal to 160/3 units, is ensured by the production of only products of the second type in the amount of 80/9 units.


Task No. 2

A nonlinear programming problem is given. Find the maximum and minimum of the objective function using the graphic-analytical method. Compose the Lagrange function and show that the sufficient conditions for a minimum (maximum) are satisfied at the extremum points.

Since the last digit of the code number is 8, A = 2 and B = 5.

Since the penultimate digit of the code number is 1, task No. 1 should be chosen.

Solution:

1) Let us draw the area defined by the system of inequalities.


This area is triangle ABC with vertex coordinates: A(0; 2); B(4; 6) and C(16/3; 14/3).

The level lines of the objective function are circles centered at the point (2; 5); the squared radii are the values of the objective function. The figure then shows that the minimum value of the objective function is achieved at point H, and the maximum either at point A or at point C.

The value of the objective function at point A: ;

The value of the objective function at point C: ;

This means that the highest value of the function is achieved at point A(0; 2) and is equal to 13.

Let's find the coordinates of point H.

To do this, consider the system:

⇔

⇔

The line is tangent to the circle when the resulting quadratic equation has a unique solution, i.e. when its discriminant equals 0.


Then we obtain the coordinates of H and the minimum value of the function.
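Since the level lines are circles centered at (2; 5), the tangency point H is also the orthogonal projection of that center onto the line through A(0; 2) and B(4; 6). A quick numeric check of the result above:

```python
def foot_of_perpendicular(p, a, b):
    """Orthogonal projection of point p onto the line through a and b:
    the tangency point of the smallest level circle touching that line."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    return (ax + t * dx, ay + t * dy)

center = (2.0, 5.0)                                  # center of the level circles
H = foot_of_perpendicular(center, (0.0, 2.0), (4.0, 6.0))   # edge AB
f_min = (H[0] - center[0]) ** 2 + (H[1] - center[1]) ** 2
# H == (2.5, 4.5), f_min == 0.5
```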

2) Let’s compose the Lagrange function to find the minimum solution:

At x1 = 2.5, x2 = 4.5 we get:

⇔

The system has a solution at the indicated point, i.e. the sufficient conditions for the extremum are satisfied.
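The stationarity conditions can be checked numerically. Assuming the objective (x1 - 2)² + (x2 - 5)² and the constraint line x2 = x1 + 2 through A and B (both reconstructed from the stated circle center and triangle vertices, so treat them as an illustration), the gradient of the Lagrange function vanishes at (2.5; 4.5) with multiplier λ = 1:

```python
def grad_L(x1, x2, lam):
    """Gradient of L = (x1-2)**2 + (x2-5)**2 + lam*(x2 - x1 - 2),
    assuming the reconstructed objective and constraint line x2 = x1 + 2."""
    return (2 * (x1 - 2) - lam,        # dL/dx1
            2 * (x2 - 5) + lam,        # dL/dx2
            x2 - x1 - 2)               # dL/dlam: the constraint itself

g = grad_L(2.5, 4.5, 1.0)   # candidate stationary point with multiplier 1
# all three components vanish, so (2.5, 4.5) satisfies the first-order conditions
```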

Let's compose the Lagrange function to find the maximum solution:

Sufficient conditions for an extremum:

At x1 = 0, x2 = 2 we get:

⇔

The system also has a solution, i.e. sufficient conditions for the extremum are satisfied.

Answer: the minimum of the objective function is achieved when ; ; the maximum of the objective function is achieved at ; .


Task No. 3

Two enterprises are allocated funds in the amount of d units. When x units of funds are allocated to the first enterprise for a year, it provides an income of k1·x units; when y units are allocated to the second enterprise, it provides an income of k2·y units. The balance of funds at the end of the year is nx for the first enterprise and my for the second. How should all the funds be distributed over 4 years so that the total income is greatest? Solve the problem by the dynamic programming method.

i=8, k=1.

A = 2200; k1 = 6; k2 = 1; n = 0.2; m = 0.5.

Solution:

The entire period of 4 years is divided into 4 stages, each equal to one year. We number the stages starting from the first year. Let Xk and Yk be the funds allocated at the k-th stage to enterprises A and B respectively. Then the sum Xk + Yk = ak is the total amount of funds used at the k-th stage, equal to the remainder carried over from stage k - 1. At the first stage all allocated funds are used, so a1 = 2200 units. The income received at the k-th stage when Xk and Yk units are allocated is 6Xk + 1·Yk. Let fk(ak) denote the maximum income obtained over the last stages, starting from the k-th one. Let us write the Bellman functional equation, which expresses the principle of optimality: whatever the initial state and the initial decision, the subsequent decisions must be optimal with respect to the state resulting from the initial decision:

At each stage we need to choose the value Xk; then Yk = ak - Xk. Taking this into account, we find the income at the k-th stage:

The Bellman functional equation will be:

Let's look at all the stages, starting with the last one.

(since the maximum of a linear function is achieved at an endpoint of the segment, here at x4 = a4);
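The whole recursion can be sketched in a few lines. Because each stage's income is linear in x, it suffices to test the endpoint allocations x = 0 and x = a at every stage (a sketch of the scheme for the data above, not a general DP solver):

```python
def max_income(stage, a, stages=4):
    """f_k(a): maximal income from stages k..4 given funds a at stage k.

    Stage income is k1*x + k2*(a - x) with k1 = 6, k2 = 1; the remainders
    n*x = 0.2x and m*(a - x) = 0.5(a - x) carry over to the next stage.
    The stage return is linear in x, so only x = 0 and x = a are tested."""
    k1, k2, n, m = 6, 1, 0.2, 0.5
    if stage > stages:
        return 0.0
    return max(k1 * x + k2 * (a - x) +
               max_income(stage + 1, n * x + m * (a - x), stages)
               for x in (0.0, a))

total = max_income(1, 2200)   # allocate 2200 units over 4 years
# total ≈ 16473.6 = 7.488 * 2200: giving everything to enterprise A each year
```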


