
The method of simple iterations in general form. Simple iteration method

Let us replace the original equation f(x) = 0 with an equivalent one, x = φ(x), and build iterations according to the rule x_(k+1) = φ(x_k). Thus, the simple iteration method is a one-step iterative process: in order to start it, one needs to know the initial approximation x_0. Let us find out the conditions for the convergence of the method and for the choice of the initial approximation.

Ticket#29

Seidel method

The Seidel method (sometimes called the Gauss-Seidel method) is a modification of the simple iteration method: when calculating the next approximation x^(k+1) (see formulas (1.13), (1.14)), its already obtained components x_1^(k+1), ..., x_(i-1)^(k+1) are immediately used to calculate x_i^(k+1).

In coordinate form, the Seidel method is written as:

x_1^(k+1) = c_11·x_1^(k) + c_12·x_2^(k) + ... + c_1,n-1·x_(n-1)^(k) + c_1n·x_n^(k) + d_1
x_2^(k+1) = c_21·x_1^(k+1) + c_22·x_2^(k) + ... + c_2,n-1·x_(n-1)^(k) + c_2n·x_n^(k) + d_2
...
x_n^(k+1) = c_n1·x_1^(k+1) + c_n2·x_2^(k+1) + ... + c_n,n-1·x_(n-1)^(k+1) + c_nn·x_n^(k) + d_n

where x^(0) is some initial approximation to the solution.

Thus, the i-th component of the (k+1)-th approximation is calculated by the formula

x_i^(k+1) = ∑_(j=1)^(i-1) c_ij·x_j^(k+1) + ∑_(j=i)^(n) c_ij·x_j^(k) + d_i,   i = 1, ..., n   (1.20)

The condition for terminating the Seidel iterative process when the accuracy ε is reached has, in simplified form, the form:

||x^(k+1) - x^(k)|| ≤ ε.
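The scheme above can be sketched in Python. This is a minimal illustration of formula (1.20) under the stated stopping test; the small 2x2 system at the end is an assumed example already written in the reduced form x = Cx + d, not one taken from the text:

```python
def seidel(C, d, x0, eps=1e-6, max_iter=100):
    """Solve x = C x + d by the Seidel method: already-computed components
    of x^(k+1) are used immediately, as in formula (1.20)."""
    n = len(d)
    x = list(x0)
    for k in range(1, max_iter + 1):
        x_prev = x[:]
        for i in range(n):
            # j < i: fresh components; j >= i: components of the previous iterate
            s_new = sum(C[i][j] * x[j] for j in range(i))
            s_old = sum(C[i][j] * x_prev[j] for j in range(i, n))
            x[i] = s_new + s_old + d[i]
        # stopping test ||x^(k+1) - x^(k)|| <= eps (max norm)
        if max(abs(x[i] - x_prev[i]) for i in range(n)) <= eps:
            return x, k
    return x, max_iter

# assumed example in reduced form: x1 = 0.2*x2 + 1, x2 = 0.3*x1 + 2
C = [[0.0, 0.2], [0.3, 0.0]]
d = [1.0, 2.0]
x, k = seidel(C, d, [0.0, 0.0])
```

The exact solution of this toy system is x1 = 1.4/0.94 ≈ 1.48936, x2 ≈ 2.44681, which the iteration reaches in a handful of steps.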

Ticket#30

Sweep method

To solve systems Ax = b with a tridiagonal matrix A, the sweep method is most often used; it is an adaptation of the Gauss method to this case.

Let's write the system of equations

d_1·x_1 + e_1·x_2 = b_1
c_2·x_1 + d_2·x_2 + e_2·x_3 = b_2
c_3·x_2 + d_3·x_3 + e_3·x_4 = b_3
... ... ...
c_(n-1)·x_(n-2) + d_(n-1)·x_(n-1) + e_(n-1)·x_n = b_(n-1)
c_n·x_(n-1) + d_n·x_n = b_n

in matrix form: Ax = b, where A is the tridiagonal matrix with d_i on the main diagonal, c_i on the subdiagonal, and e_i on the superdiagonal (all other elements are zero).

Let us write down the formulas of the sweep method in the order of their application.

1. Forward sweep (calculation of the auxiliary quantities):

α_2 = -e_1 / d_1,   β_2 = b_1 / d_1
α_(i+1) = -e_i / (d_i + c_i·α_i),   i = 2, ..., n-1
β_(i+1) = (b_i - c_i·β_i) / (d_i + c_i·α_i),   i = 2, ..., n-1   (1.9)

2. Backward sweep (finding the solution):

x_n = (b_n - c_n·β_n) / (d_n + c_n·α_n)
x_i = α_(i+1)·x_(i+1) + β_(i+1),   i = n-1, ..., 1
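The forward and backward sweeps can be sketched as a short Python function (a minimal illustration of formulas (1.9); the 3x3 test system at the end is an assumed example with the known solution x = (1, 1, 1), not one from the text):

```python
def sweep(c, d, e, b):
    """Sweep method for a tridiagonal system, following formulas (1.9).
    c is the sub-diagonal (c[0] unused), d the main diagonal,
    e the super-diagonal (e[-1] unused), b the right-hand side."""
    n = len(d)
    alpha = [0.0] * n          # alpha[i] stores alpha_(i+1), beta[i] stores beta_(i+1)
    beta = [0.0] * n
    alpha[1] = -e[0] / d[0]    # forward sweep
    beta[1] = b[0] / d[0]
    for i in range(1, n - 1):
        denom = d[i] + c[i] * alpha[i]
        alpha[i + 1] = -e[i] / denom
        beta[i + 1] = (b[i] - c[i] * beta[i]) / denom
    x = [0.0] * n              # backward sweep
    x[n - 1] = (b[n - 1] - c[n - 1] * beta[n - 1]) / (d[n - 1] + c[n - 1] * alpha[n - 1])
    for i in range(n - 2, -1, -1):
        x[i] = alpha[i + 1] * x[i + 1] + beta[i + 1]
    return x

# assumed 3x3 test system with exact solution (1, 1, 1)
x = sweep([0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 0.0], [3.0, 4.0, 3.0])
```

Note that the cost is O(n): one pass down to compute α, β and one pass up to recover x.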

Ticket#31

Simple iteration method

The essence of the simple iteration method consists in moving from the equation

f(x)= 0 (*)

to the equivalent equation

x=φ(x). (**)

This transition can be made in different ways, depending on the form of f(x). For example, one can put

φ(x) = x + b·f(x),   (***)

where b = const; the roots of the original equation will not change.

If an initial approximation x_0 to the root is known, then the new approximation is

x_1 = φ(x_0),

i.e. the general scheme of the iterative process is:

x_(k+1) = φ(x_k).   (****)

The simplest criterion for ending the process is

|x_(k+1) - x_k| < ε.

Convergence criterion of the simple iteration method:

if near the root |φ′(x)| < 1, then the iterations converge; if this condition holds for every x, the iterations converge for any initial approximation.

Let us explore the choice of the constant b from the point of view of ensuring the maximum convergence speed. In accordance with the convergence criterion, the highest convergence speed is obtained when |φ′(x)| = 0. Then, based on (***), b = -1/f′(x), and the iteration formula (****) turns into

x_i = x_(i-1) - f(x_(i-1)) / f′(x_(i-1)),

i.e. into the formula of Newton's method. Thus, Newton's method is a special case of the simple iteration method, providing the highest convergence speed of all possible choices of the function φ(x).
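The scheme x_(k+1) = x_k + b·f(x_k) can be sketched in Python. The quadratic f(x) = x^2 - x - 6 used here is an assumed test function (it reappears later in the text); near its root x = 3 we have f′(3) = 5, so b = -1/f′(3) = -0.2 gives near-Newton speed:

```python
def simple_iteration(f, b, x0, eps=1e-6, max_iter=1000):
    """Simple iteration x_(k+1) = phi(x_k) with phi(x) = x + b*f(x), as in (***).
    Converges when |phi'(x)| = |1 + b*f'(x)| < 1 near the root."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = x + b * f(x)
        if abs(x_new - x) < eps:    # stopping criterion |x_(k+1) - x_k| < eps
            return x_new, k
        x = x_new
    return x, max_iter

f = lambda x: x**2 - x - 6          # roots: -2 and 3
root, n = simple_iteration(f, b=-0.2, x0=2.0)
```

With this choice of b the iteration reaches the root x = 3 in about five steps, while a poorly chosen b (e.g. one with |1 + b·f′(x)| close to 1) converges far more slowly or diverges.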


Ticket#32

Newton's method

The main idea of the method is as follows: an initial approximation is specified near the hypothetical root, after which a tangent to the function under study is constructed at the approximation point and its intersection with the abscissa axis is found. This point is taken as the next approximation, and so on until the required accuracy is achieved.

Let f(x) be a real-valued function defined and differentiable on an interval. The formula for the iterative approximation can be derived as follows. The tangent to the curve y = f(x) at the point x_n has slope tan α = f′(x_n), where α is the angle of inclination of the tangent at that point. The tangent crosses the abscissa axis at the point where f(x_n) + f′(x_n)·(x - x_n) = 0.

Therefore, the required expression for the next approximation has the form:

x_(n+1) = x_n - f(x_n) / f′(x_n).
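A minimal sketch of the resulting iteration; the square-root example at the end is an assumed illustration, not one from the text:

```python
def newton(f, df, x0, eps=1e-10, max_iter=50):
    """Newton's method: x_(n+1) = x_n - f(x_n)/f'(x_n).
    The next point is where the tangent at the current point crosses the x-axis."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

# assumed example: sqrt(2) as the positive root of f(x) = x^2 - 2
r = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Starting from x_0 = 1, the iterates 1.5, 1.41667, 1.414216, ... converge quadratically, roughly doubling the number of correct digits per step.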

Ticket#33

Golden ratio method
The golden ratio method makes it possible to narrow the uncertainty interval by computing only one new function value at each iteration. From the two function values considered, the interval to be used further is determined. This interval will contain one of the previous points and the next point, placed symmetrically to it. The point divides the interval into two parts so that the ratio of the whole to the larger part equals the ratio of the larger part to the smaller, i.e. it equals the so-called "golden ratio".

Dividing the interval into unequal parts allows us to find an even more effective method. Let us calculate the function at the ends of the segment [a, b] and put a = x_1, b = x_2. Let us also calculate the function at two interior points x_3, x_4. Comparing all four values of the function, we choose the smallest among them. Let, for example, the smallest turn out to be f(x_3). Obviously, the minimum must lie in one of the segments adjacent to x_3. Therefore, the segment [x_4, b] can be discarded and the segment [x_1, x_4] kept.

The first step has been taken. On the segment [x_1, x_4] we again need to select two interior points, calculate the function values at them and at the ends, and take the next step. But at the previous step we already found the function at the ends of the new segment and at one of its interior points, x_3. Therefore, it is enough to select one more interior point x_5, determine the value of the function at it, and make the necessary comparisons. This halves the amount of computation required per step. What is the best way to place the points? Each time, the remaining segment is divided into three parts and one of the outer segments is then discarded.
Let us denote the initial uncertainty interval by D.

Since in the general case any of the segments [x_1, x_3] or [x_4, x_2] can be discarded, we select the points x_3 and x_4 so that the lengths of these segments are the same:

x_3 - x_1 = x_2 - x_4.

After discarding, we obtain a new uncertainty interval of length D′.
Let us denote the ratio D/D′ by φ:

φ = D/D′ = D′/D″,

where D″ is the next uncertainty interval. But D″ is equal in length to the segment discarded at the previous stage, that is, D″ = D - D′.

Therefore we get:

D/D′ = D′/(D - D′).

This leads to the equation φ = 1 + 1/φ or, equivalently,

φ² - φ - 1 = 0.

The positive root of this equation gives

φ = (1 + √5)/2 ≈ 1.618.
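A sketch of the resulting search procedure: the interior points sit symmetrically at distance (b - a)/φ from the ends, so after the first step one old interior point is always reused and only one new function value is computed per iteration. The test function is an assumed example with its minimum at x = 2:

```python
import math

PHI = (1.0 + math.sqrt(5.0)) / 2.0      # the "golden ratio", about 1.618

def golden_section_min(f, a, b, eps=1e-6):
    """Golden-section search for the minimum of a unimodal f on [a, b]."""
    x3 = b - (b - a) / PHI               # interior points, placed symmetrically
    x4 = a + (b - a) / PHI
    f3, f4 = f(x3), f(x4)
    while b - a > eps:
        if f3 < f4:                      # minimum is in [a, x4]; discard [x4, b]
            b, x4, f4 = x4, x3, f3       # old x3 becomes the new x4
            x3 = b - (b - a) / PHI
            f3 = f(x3)
        else:                            # minimum is in [x3, b]; discard [a, x3]
            a, x3, f3 = x3, x4, f4       # old x4 becomes the new x3
            x4 = a + (b - a) / PHI
            f4 = f(x4)
    return (a + b) / 2.0

# assumed test function with the minimum at x = 2
xmin = golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```

Each iteration shrinks the interval by the factor 1/φ ≈ 0.618 at the cost of a single function evaluation.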

Ticket#34

Interpolation of functions, i.e. using a given function to construct another (usually simpler) function whose values coincide with the values of the given function at a certain number of points. Interpolation has both practical and theoretical significance.

The simple iteration method, also called the successive approximation method, is a mathematical algorithm for finding the value of an unknown quantity by gradually refining it. The essence of the method is that, as the name suggests, successive approximations are expressed step by step from the initial one, giving more and more refined results. The method is used to find the value of a variable in a given function, as well as in solving systems of equations, both linear and nonlinear.

Let us consider how this method is implemented when solving SLAEs. The simple iteration method has the following algorithm:

1. Check the convergence condition for the original matrix. Convergence theorem: if the original matrix of the system has diagonal dominance (i.e., in each row the element on the main diagonal is greater in absolute value than the sum of the absolute values of the remaining elements of that row), then the simple iteration method converges.

2. The matrix of the original system does not always have diagonal dominance. In such cases the system can be transformed: equations that satisfy the convergence condition are left untouched, and linear combinations are formed with those that do not, i.e. equations are multiplied, subtracted, and added to one another until the desired result is obtained.

If the resulting system still has inconvenient coefficients on the main diagonal, then terms of the form c_i·x_i are added to both sides of such an equation, with signs coinciding with the signs of the corresponding diagonal elements.

3. Transformation of the resulting system to normal form (in vector notation):

x = β + α·x

This can be done in many ways, for example as follows: from the first equation, express x_1 in terms of the other unknowns, from the second x_2, from the third x_3, etc. In this case we use the formulas:

α_ij = -(a_ij / a_ii),   i ≠ j

β_i = b_i / a_ii
You should again make sure that the resulting system of normal form satisfies the convergence condition:

∑_(j=1)^(n) |α_ij| < 1,   i = 1, 2, ..., n

4. We begin to apply, in fact, the method of successive approximations itself.

x^(0) is the initial approximation; we express x^(1) through it, then x^(2) through x^(1). The general formula in matrix form is:

x^(n) = β + α·x^(n-1)

We iterate until the required accuracy is achieved:

max_i |x_i^(k) - x_i^(k+1)| ≤ ε

So, let's put the simple iteration method into practice. Example:
Solve SLAE:

4.5x1-1.7x2+3.5x3=2
3.1x1+2.3x2-1.1x3=1
1.8x1+2.5x2+4.7x3=4 with accuracy ε=10 -3

Let us check whether the diagonal elements dominate in modulus.

We see that only the third equation satisfies the convergence condition. Let us transform the first and second: add the second equation to the first:

7.6x1+0.6x2+2.4x3=3

From the third we subtract the first:

-2.7x1+4.2x2+1.2x3=2

We converted the original system into an equivalent one:

7.6x1+0.6x2+2.4x3=3
-2.7x1+4.2x2+1.2x3=2
1.8x1+2.5x2+4.7x3=4

Now let's bring the system to its normal form:

x1=0.3947-0.0789x2-0.3158x3
x2=0.4762+0.6429x1-0.2857x3
x3= 0.8511-0.383x1-0.5319x2

We check the convergence of the iterative process:

0.0789 + 0.3158 = 0.3947 < 1
0.6429 + 0.2857 = 0.9286 < 1
0.383 + 0.5319 = 0.9149 < 1, i.e. the condition is met.

The initial approximation is x^(0) = (0.3947, 0.4762, 0.8511)^T.

Substituting these values into the normal-form equations, we obtain:

x^(1) = (0.08835, 0.486793, 0.446639)^T

Substituting the new values, we get:

x^(2) = (0.215243, 0.405396, 0.558336)^T

We continue the calculations until we reach values that satisfy the given condition. This happens at the seventh iteration:

x^(7) ≈ (0.1880, 0.4411, 0.5440)^T

Let's check the correctness of the results obtained:

4.5*0.1880 - 1.7*0.441 + 3.5*0.544 = 2.0003
3.1*0.1880 + 2.3*0.441 - 1.1*0.544 = 0.9987
1.8*0.1880 + 2.5*0.441 + 4.7*0.544 = 3.9977

The results obtained by substituting the found values into the original equations satisfy those equations to within the required accuracy.

As we can see, the simple iteration method gives fairly accurate results, but solving this system by hand required considerable time and cumbersome calculations.
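The worked example above can be checked with a short script; the matrix α and the vector β below are taken from the normal form derived in the text, and the stopping tolerance is the stated ε = 10^-3:

```python
# normal form x = beta + alpha*x from the worked example
alpha = [[ 0.0,    -0.0789, -0.3158],
         [ 0.6429,  0.0,    -0.2857],
         [-0.383,  -0.5319,  0.0   ]]
beta = [0.3947, 0.4762, 0.8511]

x = beta[:]                          # initial approximation x^(0) = beta
for k in range(1, 101):
    # x^(k) = beta + alpha * x^(k-1)
    x_new = [beta[i] + sum(alpha[i][j] * x[j] for j in range(3))
             for i in range(3)]
    done = max(abs(x_new[i] - x[i]) for i in range(3)) <= 1e-3
    x = x_new
    if done:
        break
```

The loop stops after a handful of iterations at approximately (0.188, 0.441, 0.544), matching the check against the original equations given above.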

Let a system of n algebraic equations with n unknowns be given:

Algorithm for the simple iteration method:

Note that here and henceforth the lower index denotes the corresponding component of the vector of unknowns, and the upper index the iteration (approximation) number.

Then a cyclic computational process is formed, each cycle of which represents one iteration. As a result of each iteration, a new value of the vector of unknowns is obtained. To organize the iterative process, we write system (1) in reduced form: the terms on the main diagonal are normalized and remain to the left of the equals sign, while the rest are transferred to the right-hand side. The reduced system of equations has the form:


Note that the exact solution will never be reached, but with each subsequent iteration the vector of unknowns gets closer to it.

12. The basic iteration formula used in the simple iteration method to solve a nonlinear equation:

x_(k+1) = φ(x_k)

13. Criterion for stopping the iterative process in the simple iteration method for solving a nonlinear equation:

The iterative process ends when, for each i-th component of the vector of unknowns, the accuracy condition |x_i^(k+1) - x_i^(k)| ≤ ε is met.
Note that the exact solution is never reached in the simple iteration method; however, with each subsequent iteration the vector of unknowns gets closer and closer to it.

14. Criterion for choosing the auxiliary function φ(x) for the iteration on the given interval: the condition |φ′(x)| < 1 must hold on that interval.

Before applying the simple iteration method, the convergence condition must first be checked. For the method to converge, it is sufficient that in matrix A the absolute value of each diagonal element be greater than the sum of the absolute values of all the other elements in the corresponding row:



A disadvantage of iterative methods is the rather strict convergence condition, which is not satisfied for every system of equations.

If the convergence condition is met, then at the next stage it is necessary to specify an initial approximation of the vector of unknowns, which is usually the zero vector:

15. The Gauss method, used to solve systems of linear equations, provides:

The method is based on converting a matrix to a triangular form. This is achieved by sequentially eliminating unknowns from the system equations.

The simple iteration method is based on replacing the original equation with an equivalent equation

x = φ(x).   (2.7)

Let the initial approximation to the root, x = x_0, be known. Substituting it into the right side of equation (2.7), we obtain a new approximation x_1 = φ(x_0); then, in a similar way, x_2 = φ(x_1), etc.:

x_(k+1) = φ(x_k),   k = 0, 1, 2, ...   (2.8)


The iterative process does not converge to the root of the equation under all conditions. Let us take a closer look at this process. Figure 2.6 shows a graphical interpretation of one-sided convergent and divergent processes; Figure 2.7 shows two-sided convergent and divergent processes. A divergent process is characterized by a rapid growth of the argument and function values and by abnormal termination of the corresponding program.


With a two-sided process, cycling is possible, that is, endless repetition of the same function and argument values. Cycling separates a divergent process from a convergent one.

It is clear from the graphs that for both one-sided and two-sided processes, convergence to the root is determined by the slope of the curve y = φ(x) near the root: the smaller the slope, the better the convergence. As is known, the tangent of the angle of inclination of a curve equals the derivative of the curve at the given point. Therefore, the smaller |φ′(x)| is near the root, the faster the process converges.

For the iteration process to be convergent, the following inequality must be satisfied in the neighborhood of the root:

|φ′(x)| < 1.   (2.9)

The transition from equation (2.1) to equation (2.7) can be carried out in various ways, depending on the form of the function f(x). In such a transition, the function φ(x) must be constructed so that the convergence condition (2.9) is satisfied.

Let's consider one of the general algorithms for transition from equation (2.1) to equation (2.7).

Let us multiply both sides of equation (2.1) by an arbitrary constant b and add the unknown x to both sides; the roots of the original equation will not change:

x = x + b·f(x).   (2.10)

Let us introduce the notation φ(x) = x + b·f(x) and pass from relation (2.10) to the iteration formula

x_(k+1) = x_k + b·f(x_k).   (2.11)


The constant b must be chosen so as to ensure fulfillment of the convergence condition (2.9). The criterion for ending the iterative process is condition (2.2). Figure 2.8 shows a graphical interpretation of the simple iteration method with this representation (the scales along the X and Y axes are different).

If the function is chosen in the form φ(x) = x + b·f(x), then its derivative is φ′(x) = 1 + b·f′(x). The highest convergence speed is obtained when φ′(x) = 0, i.e. when b = -1/f′(x), and the iteration formula (2.11) then turns into Newton's formula. Thus, Newton's method has the highest convergence rate of all iterative processes of this kind.

The software implementation of the simple iteration method is made in the form of a subroutine procedure Iteras(PROGRAM 2.1).


The entire procedure practically consists of a single Repeat ... Until loop implementing formula (2.11), taking into account the condition for stopping the iterative process (formula (2.2)).

The procedure has built-in loop protection: the number of passes is counted in the Niter variable. In practical classes you should verify, by running the program, how the choice of the coefficient b and of the initial approximation affects the search for the root. When the coefficient b is changed, the character of the iteration process for the function under study changes: it first becomes two-sided and then cycles (Fig. 2.9; the scales of the X and Y axes are different). A still larger absolute value of b leads to a divergent process.

Comparison of methods for approximate solution of equations

The comparison of the methods described above for the numerical solution of equations was carried out using a program that allows you to observe the process of finding the root in graphical form on the PC screen. The procedures included in this program and implementing the compared methods are given below (PROGRAM 2.1).

Figures 2.3-2.5, 2.8, and 2.9 are copies of the PC screen at the end of the iteration process.

In all cases, the quadratic equation x^2 - x - 6 = 0, which has the analytical solutions x_1 = -2 and x_2 = 3, was taken as the function under study. The error tolerance and the initial approximations were taken the same for all methods. The results of searching for the root x = 3, presented in the figures, are as follows: the dichotomy method converges the slowest (22 iterations), and the fastest is the simple iteration method with b = -0.2 (5 iterations). There is no contradiction here with the statement that Newton's method is the fastest.

The derivative of the function under study at the point x = 3 equals 5, so b = -0.2 = -1/f′(3); that is, the calculation in this case was in effect carried out by Newton's method, with the value of the derivative fixed at the root of the equation. As the coefficient b is moved away from this value, the convergence rate drops, and the convergent process first begins to cycle and then becomes divergent.
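The comparison described above can be sketched as follows. The bracketing interval [0, 5], the initial guess x_0 = 2, and the tolerance 10^-6 are assumptions, since the text does not state them:

```python
def bisection(f, a, b, eps):
    """Dichotomy (bisection): halve the bracketing interval at each step."""
    n = 0
    while b - a > eps:
        m = (a + b) / 2.0
        if f(a) * f(m) <= 0:    # root stays in the left half
            b = m
        else:                   # root stays in the right half
            a = m
        n += 1
    return (a + b) / 2.0, n

def simple_iter(f, b_coef, x0, eps):
    """Simple iteration x_(k+1) = x_k + b*f(x_k), formula (2.11)."""
    x, n = x0, 0
    while True:
        x_new = x + b_coef * f(x)
        n += 1
        if abs(x_new - x) < eps:
            return x_new, n
        x = x_new

f = lambda x: x**2 - x - 6          # roots: -2 and 3
root_b, nb = bisection(f, 0.0, 5.0, 1e-6)
root_s, ns = simple_iter(f, -0.2, 2.0, 1e-6)
```

Both find the root x = 3, but the simple iteration with b = -0.2 needs only a few steps while bisection needs roughly log2((b - a)/ε) ≈ 23 steps, which mirrors the iteration counts reported in the text.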

By analogy with (2.1), system (5.1) can be represented in the following equivalent form:

where g(x) is an iterative vector function of the vector argument. Systems of nonlinear equations often arise directly in the form (5.2) (for example, in numerical schemes for differential equations); in this case, no additional effort is required to transform equations (5.1) into system (5.2). If we continue the analogy with the simple iteration method for one equation, then the iteration process based on equation (5.2) can be organized as follows:

  • 1) some initial vector x^(0) ∈ S(x^0, a) is chosen (it is assumed that x* ∈ S(x^0, a));
  • 2) subsequent approximations are calculated using the formula

then the iteration process is completed and x* ≈ x^(k+1).

As before, we need to find out under what conditions the approximations x^(k) converge to x*.

Let us discuss this issue with a simple analysis. First we introduce the error of the k-th approximation, e^(k) = x^(k) - x*. Then we can write

Let us substitute these expressions into (5.3) and expand g(x* + e^(k)) in powers of e^(k) in the neighborhood of x*, as a function of the vector argument (assuming that all partial derivatives of g(x) are continuous). Taking into account also that x* = g(x*), we get

or in matrix form

where B = (b_nm) = g′(x*) is the iteration matrix (the Jacobian matrix of g evaluated at x*).

If the error norm ||e^(k)|| is small enough, then the second term on the right-hand side of expression (5.4) can be neglected, and it then coincides with expression (2.16). Consequently, the condition for convergence of the iterative process (5.3) near the exact solution is described by Theorem 3.1.

Convergence of the simple iteration method. A necessary and sufficient condition for the convergence of the iterative process (5.3) is that the spectral radius of the iteration matrix be less than one, ρ(B) < 1,

and a sufficient condition is ||B|| < 1.   (5.5)

These conditions are of theoretical rather than practical significance, since we do not know x*. By analogy with (1.11), we obtain a condition that can be useful. Let x* ∈ S(x^0, a), and let the Jacobian matrix of the function g(x)


exist for all x ∈ S(x^0, a) (note that C(x*) = B). If the elements of the matrix C(x) satisfy the inequality

∑_j |c_ij(x)| ≤ q < 1,   i = 1, ..., n,   (5.6)

for all x ∈ S(x^0, a), then the sufficient condition (5.5) is also satisfied for any matrix norm.

Example 5.1 (simple iteration method). Consider the following system of equations:

One possibility for representing this system in the equivalent form (5.2) is to express x_1 from the first equation and x_2 from the second:

Then the iteration scheme has the form

The exact solution is x* ∈ S((2, 2), 1). Let us choose the initial vector x^(0) = (2, 2) and ε = 10^-5. The calculation results are presented in Table 5.1.

Table 5.1

k    x_1^(k)    x_2^(k)
1    1.50000    1.73205
2    1.69258    1.34646
3    1.71914    1.40036
4    1.71642    1.39483
5    1.71669    1.39536
6    1.71667    1.39532

These results show that the convergence is quite slow. To obtain a quantitative characteristic of the convergence, let us carry out a simple analysis, treating the last approximation as the exact solution. The Jacobian matrix C(x) for our iteration function has the form

then matrix B is approximately estimated as

It is easy to check that neither condition (5.5) nor condition (5.6) is satisfied, but convergence nevertheless takes place, since ρ(B) ≈ 0.8.

It is often possible to speed up the convergence of the simple iteration method by slightly changing the computation process. The idea of the modification is very simple: to calculate the i-th component of the vector x^(k+1), one can use not only the components of x^(k) but also the already calculated components x_1^(k+1), ..., x_(i-1)^(k+1) of the next approximation. Thus, the modified simple iteration method can be represented by the following iteration scheme:


If the approximations generated by the iterative process (5.3) converge, then the iterative process (5.8) tends to converge faster due to more complete use of information.
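The two schemes can be contrasted in a short sketch. Since system (5.7) is not fully reproduced in the text, the iteration functions g1, g2 below are an assumed contractive example, not the system of Example 5.1:

```python
import math

# hypothetical contractive system (NOT the system of Example 5.1):
#   x1 = g1(x1, x2) = 0.3*x2 + 1
#   x2 = g2(x1, x2) = 0.3*sin(x1) + 0.5
# every partial derivative is bounded by 0.3, so a condition like (5.6) holds
def g1(x1, x2):
    return 0.3 * x2 + 1.0

def g2(x1, x2):
    return 0.3 * math.sin(x1) + 0.5

def plain_iteration(x1, x2, eps=1e-10, max_iter=1000):
    """Scheme (5.3): both components are computed from the old approximation."""
    for n in range(1, max_iter + 1):
        y1, y2 = g1(x1, x2), g2(x1, x2)
        if max(abs(y1 - x1), abs(y2 - x2)) < eps:
            return y1, y2, n
        x1, x2 = y1, y2
    return x1, x2, max_iter

def modified_iteration(x1, x2, eps=1e-10, max_iter=1000):
    """Scheme (5.8): the fresh x1 is reused at once when computing x2."""
    for n in range(1, max_iter + 1):
        y1 = g1(x1, x2)
        y2 = g2(y1, x2)          # uses the already-updated first component
        if max(abs(y1 - x1), abs(y2 - x2)) < eps:
            return y1, y2, n
        x1, x2 = y1, y2
    return x1, x2, max_iter

a1, a2, na = plain_iteration(2.0, 2.0)
b1, b2, nb = modified_iteration(2.0, 2.0)
```

Both variants converge to the same fixed point, but the modified scheme typically needs noticeably fewer iterations because each step uses the freshest available information.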

Example 5.2 (modified simple iteration method). The modified simple iteration for system (5.7) is represented as

As before, we choose the initial vector x^(0) = (2, 2) and ε = 10^-5. The calculation results are presented in Table 5.2.

Table 5.2

k    x_1^(k)    x_2^(k)
1    1.50000    1.11803
2    1.72076    1.40036
3    1.71671    1.39538
4    1.71667    1.39533

This slight change in the order of calculations halved the number of iterations, and therefore halved the number of operations.


