
Distribution law of a discrete random variable. Distribution polygon

  • 2.1. Relative frequency. Relative frequency stability
  • 2.2. Limitations of the classical definition of probability. Statistical probability
  • 2.3. Geometric probabilities
  • 2.4. Probability addition theorem
  • 2.5. Complete group of events
  • 2.6. Opposite events
  • 2.7. The principle of practical impossibility of unlikely events
  • 2.8. Product of events. Conditional probability
  • 2.9. Probability multiplication theorem
  • 2.10. Independent events. Multiplication theorem for independent events
  • 2.11. Probability of at least one event occurring
  • Lecture No. 3 Corollaries of addition and multiplication theorems
  • 3.1. Theorem for adding probabilities of joint events
  • 3.2. Total Probability Formula
  • 3.3. Probability of hypotheses. Bayes formulas
  • 4. Repetition of tests
  • 4.1. Bernoulli's formula
  • 4.2. Limit theorems in Bernoulli's scheme
  • 4.3. Local and integral theorems of de Moivre–Laplace
  • 4.4. Probability of relative frequency deviation from constant probability in independent trials
  • 5. Random variables
  • 5.1. The concept of a random variable. Distribution law of a random variable
  • 5.2. Distribution law of a discrete random variable. Distribution polygon
  • 5.3. Binomial distribution
  • 5.4. Poisson distribution
  • 5.5. Geometric distribution
  • 5.6. Hypergeometric distribution
  • 6. Mathematical expectation of a discrete random variable
  • 6.1. Numerical characteristics of discrete random variables
  • 6.2. Expectation of a discrete random variable
  • 6.3. Probabilistic meaning of mathematical expectation
  • 6.4. Properties of mathematical expectation
  • 6.5. Mathematical expectation of the number of occurrences of an event in independent trials
  • 7. Dispersion of a discrete random variable
  • 7.1. The feasibility of introducing a numerical characteristic of the scattering of a random variable
  • 7.2. Deviation of a random variable from its mathematical expectation
  • 7.3. Variance of a discrete random variable
  • 7.4. Formula for calculating variance
  • 7.5. Dispersion properties
  • 7.6. Variance of the number of occurrences of an event in independent trials
  • 7.7. Standard deviation
  • 7.8. Standard deviation of the sum of mutually independent random variables
  • 7.9. Identically distributed mutually independent random variables
  • 7.10. Initial and central theoretical moments
  • 8. Law of Large Numbers
  • 8.1. Preliminary remarks
  • 8.2. Chebyshev's inequality
  • 8.3. Chebyshev's theorem
  • 8.4. The essence of Chebyshev's theorem
  • 8.5. The significance of Chebyshev's theorem for practice
  • 8.6. Bernoulli's theorem
  • 9. Probability distribution function of a random variable
  • 9.1. Definition of the distribution function
  • 9.2. Properties of the distribution function
  • 9.3. Distribution function graph
  • 10. Probability density of a continuous random variable
  • 10.1. Determination of distribution density
  • 10.2. Probability of a continuous random variable falling into a given interval
  • 10.3. Law of uniform probability distribution
  • 11. Normal distribution
  • 11.1. Numerical characteristics of continuous random variables
  • 11.2. Normal distribution
  • 11.3. Normal curve
  • 11.4. Influence of normal distribution parameters on the shape of the normal curve
  • 11.5. Probability of falling into a given interval of a normal random variable
  • 11.6. Calculating the probability of a given deviation
  • 11.7. Three sigma rule
  • 11.8. The concept of Lyapunov's theorem. Statement of the central limit theorem
  • 11.9. Estimation of the deviation of the theoretical distribution from the normal one. Skewness and kurtosis
  • 11.10. Function of one random argument and its distribution
  • 11.11. Mathematical expectation of a function of one random argument
  • 11.12. Function of two random arguments. Distribution of the sum of independent terms. Stability of normal distribution
  • 11.13. Chi square distribution
  • 11.14. Student distribution
  • 11.15. Fisher–Snedecor F distribution
  • 12. Exponential distribution
  • 12.1. Definition of exponential distribution
  • 12.2. Probability of falling into a given interval of an exponentially distributed random variable
  • 12.3. Numerical characteristics of the exponential distribution
  • 12.4. Reliability function
  • 12.5. Exponential reliability law
  • 12.6. Characteristic property of the exponential reliability law
  • 5.2. Distribution law of a discrete random variable. Distribution polygon

    At first glance it may seem that to define a discrete random variable it is enough to list all its possible values. In reality this is not so: two random variables may have the same lists of possible values while the probabilities of those values differ. Therefore, to specify a discrete random variable, it is not enough to list all its possible values; one must also indicate their probabilities.

    The distribution law of a discrete random variable is the correspondence between its possible values and their probabilities; it can be specified in tabular form, analytically (by a formula), or graphically.

    Definition. Any rule (table, function, graph) that allows one to find the probabilities of arbitrary events A ∈ S (where S is the σ-algebra of events of the underlying probability space), in particular one indicating the probabilities of individual values of a random variable or of sets of such values, is called the distribution law of the random variable (or simply its distribution). Of a random variable one says that it "obeys the given distribution law."

    Let X be a discrete random variable that takes the values x1, x2, …, xn, … (this set of values is finite or countable) with certain probabilities p_i, where i = 1, 2, …, n, … The distribution law of a discrete random variable is conveniently given by the formula p_i = P{X = x_i}, i = 1, 2, …, n, …, which determines the probability that, as a result of the experiment, the random variable X takes the value x_i. For a discrete random variable X the distribution law can also be given in the form of a distribution table:

    X | x1   x2   …   xn
    p | p1   p2   …   pn

    When the distribution law of a discrete random variable is given by a table, the first row contains the possible values and the second row their probabilities. Such a table is called a distribution series.

    Taking into account that in one trial a random variable takes one and only one possible value, we conclude that the events X = x1, X = x2, …, X = xn form a complete group; therefore the sum of the probabilities of these events, i.e. the sum of the probabilities in the second row of the table, equals one: p1 + p2 + … + pn = 1.

    If the set of possible values of X is infinite (countable), then the series p1 + p2 + … converges and its sum equals one.

    Example. 100 tickets are issued for a cash lottery. One win of 50 rubles and ten winnings of 1 ruble each are drawn. Find the distribution law of the random variable X, the value of the winnings for the owner of one lottery ticket.

    Solution. Write out the possible values of X: x1 = 50, x2 = 1, x3 = 0. The probabilities of these values are p1 = 0.01, p2 = 0.1, p3 = 1 − (p1 + p2) = 0.89.

    Let us write the required distribution law:

    X | 50     1      0
    p | 0.01   0.1    0.89

    Control: 0.01 + 0.1 + 0.89 = 1.

    Example. There are 8 balls in the urn, 5 of which are white, the rest are black. 3 balls are drawn at random from it. Find the law of distribution of the number of white balls in the sample.

    Solution. The possible values of the r.v. X, the number of white balls in the sample, are x1 = 0, x2 = 1, x3 = 2, x4 = 3. Their probabilities are, respectively,

    P(X = 0) = C(3,3)/C(8,3) = 1/56;
    P(X = 1) = C(5,1)·C(3,2)/C(8,3) = 15/56;
    P(X = 2) = C(5,2)·C(3,1)/C(8,3) = 30/56;
    P(X = 3) = C(5,3)/C(8,3) = 10/56.

    Let us write the distribution law in the form of a table:

    X | 0      1       2       3
    p | 1/56   15/56   30/56   10/56

    Control: 1/56 + 15/56 + 30/56 + 10/56 = 56/56 = 1.

    The distribution law of a discrete random variable can be specified graphically: the possible values of the r.v. are plotted on the abscissa axis and the probabilities of these values on the ordinate axis. The broken line connecting in succession the points (x1, p1), (x2, p2), … is called the distribution polygon (see Fig. 5.1).

    Fig. 5.1. Distribution polygon

    Now a more precise definition of a discrete random variable can be given.

    Definition. A random variable X is discrete if there exists a finite or countable set of numbers x1, x2, … such that P{X = x_i} = p_i > 0 (i = 1, 2, …) and p1 + p2 + p3 + … = 1.

    Let us define arithmetic operations on discrete random variables.

    Definition. The sum (difference, product) of a d.r.v. X, taking the values x_i with probabilities p_i = P{X = x_i}, i = 1, 2, …, n, and a d.r.v. Y, taking the values y_j with probabilities p_j = P{Y = y_j}, j = 1, 2, …, m, is the d.r.v. Z = X + Y (Z = X − Y, Z = X·Y) taking the values z_ij = x_i + y_j (z_ij = x_i − y_j, z_ij = x_i·y_j) with probabilities p_ij = P{X = x_i, Y = y_j} for all the indicated values of i and j. If some of the sums x_i + y_j (differences x_i − y_j, products x_i·y_j) coincide, the corresponding probabilities are added.

    Definition. The product of a d.r.v. X and a number c is the d.r.v. cX taking the values c·x_i with the probabilities p_i = P{X = x_i}.

    Definition. Two d.r.v. X and Y are called independent if the events {X = x_i} = A_i and {Y = y_j} = B_j are independent for all i = 1, 2, …, n, j = 1, 2, …, m, that is,

    P{X = x_i, Y = y_j} = P{X = x_i}·P{Y = y_j}.

    Otherwise the random variables are called dependent. Several random variables are called mutually independent if the distribution law of any of them does not depend on which possible values the other variables have taken.
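The definition of the sum of two independent discrete random variables can be sketched in code. This is an illustrative helper (not a library routine); note how coinciding sums accumulate their probabilities, exactly as stated above:

```python
from collections import defaultdict

# Distribution of Z = X + Y for independent discrete r.v. given as
# {value: probability} tables.
def sum_of_independent(px, py):
    pz = defaultdict(float)
    for x, p in px.items():
        for y, q in py.items():
            pz[x + y] += p * q   # coinciding sums add their probabilities
    return dict(pz)

X = {0: 0.5, 1: 0.5}   # e.g. two fair coin tosses
Y = {0: 0.5, 1: 0.5}
print(sum_of_independent(X, Y))   # {0: 0.25, 1: 0.5, 2: 0.25}
```

The same loop with `x - y` or `x * y` gives the difference and the product.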

    Let's consider several of the most commonly used distribution laws.

    In the part of the course devoted to the basic concepts of probability theory we already introduced the extremely important concept of a random variable. Here we develop this concept further and indicate the ways in which random variables can be described and characterized.

    As already mentioned, a random variable is a quantity that, as a result of an experiment, can take one value or another, and it is not known in advance which one. We also agreed to distinguish between random variables of discontinuous (discrete) and continuous type. The possible values of discontinuous variables can be listed in advance; the possible values of continuous variables cannot, since they continuously fill a certain interval.

    Examples of discontinuous random variables:

    1) the number of appearances of the coat of arms during three coin tosses (possible values ​​0, 1, 2, 3);

    2) the frequency of appearance of the coat of arms in the same experiment (possible values 0, 1/3, 2/3, 1);

    3) the number of failed elements in a device consisting of five elements (possible values ​​are 0, 1, 2, 3, 4, 5);

    4) the number of hits on the aircraft sufficient to disable it (possible values ​​1, 2, 3, ..., n, ...);

    5) the number of aircraft shot down in air combat (possible values 0, 1, 2, …, N, where N is the total number of aircraft participating in the battle).

    Examples of continuous random variables:

    1) abscissa (ordinate) of the point of impact when fired;

    2) the distance from the point of impact to the center of the target;

    3) the error of an altimeter;

    4) failure-free operation time of the radio tube.

    In what follows we agree to denote random variables by capital letters and their possible values by the corresponding small letters. For example, X is the number of hits with three shots; its possible values are x1 = 0, x2 = 1, x3 = 2, x4 = 3.

    Consider a discontinuous random variable X with possible values x1, x2, …, xn. Each of these values is possible but not certain, and X can take each of them with some probability. As a result of the experiment X takes one of these values, i.e. one event of the complete group of incompatible events occurs:

    X = x1, X = x2, …, X = xn.    (5.1.1)

    Let us denote the probabilities of these events by the letter p with the corresponding indices:

    P(X = x1) = p1, P(X = x2) = p2, …, P(X = xn) = pn.

    Since the incompatible events (5.1.1) form a complete group,

    p1 + p2 + … + pn = 1,

    i.e. the sum of the probabilities of all possible values of a random variable equals one. This total probability is distributed in some way among the individual values. A random variable is fully described from the probabilistic point of view if we specify this distribution, i.e. indicate exactly what probability each of the events (5.1.1) has. By doing so we establish the so-called distribution law of the random variable.

    The law of distribution of a random variable is any relationship that establishes a connection between the possible values ​​of a random variable and the corresponding probabilities. We will say about a random variable that it is subject to a given distribution law.

    Let us establish the form in which the distribution law of a discontinuous random variable can be specified. The simplest form of specifying this law is a table listing the possible values of the random variable and the corresponding probabilities:

    x1   x2   …   xn
    p1   p2   …   pn

    We will call such a table a distribution series of the random variable.

    To give the distribution series a more visual appearance, they often resort to its graphical representation: the possible values ​​of the random variable are plotted along the abscissa axis, and the probabilities of these values ​​are plotted along the ordinate axis. For clarity, the resulting points are connected by straight segments. Such a figure is called a distribution polygon (Fig. 5.1.1). The distribution polygon, like the distribution series, completely characterizes the random variable; it is one of the forms of the law of distribution.

    Sometimes the so-called "mechanical" interpretation of the distribution series is convenient. Imagine that a certain mass equal to one is distributed along the abscissa axis so that the masses p1, p2, …, pn are concentrated at the individual points x1, x2, …, xn, respectively. Then the distribution series is interpreted as a system of material points with these masses located on the abscissa axis.

    Let's consider several examples of discontinuous random variables with their distribution laws.

    Example 1. One experiment is performed, in which an event may or may not occur. The probability of the event is 0.3. The random variable X is the number of occurrences of the event in the experiment (i.e. the characteristic random variable of the event, taking the value 1 if it occurs and 0 if it does not). Construct the distribution series and distribution polygon of X.

    Solution. The variable X has only two values, 0 and 1, with probabilities P(X = 0) = 0.7 and P(X = 1) = 0.3.

    The distribution polygon is shown in Fig. 5.1.2.

    Example 2. A shooter fires three shots at a target. The probability of hitting the target with each shot is 0.4. For each hit the shooter gets 5 points. Construct a distribution series for the number of points scored.

    Solution. Denote by X the number of points scored. Its possible values are 0, 5, 10, 15.

    We find the probabilities of these values using the theorem on repetition of experiments (Bernoulli's formula):

    P(X = 0) = 0.6³ = 0.216; P(X = 5) = 3·0.4·0.6² = 0.432;
    P(X = 10) = 3·0.4²·0.6 = 0.288; P(X = 15) = 0.4³ = 0.064.

    The distribution series of X has the form:

    X | 0       5       10      15
    p | 0.216   0.432   0.288   0.064

    The distribution polygon is shown in Fig. 5.1.3.
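Example 2 can be recomputed directly from Bernoulli's formula; a short sketch (the names are illustrative):

```python
from math import comb

# Example 2: 3 shots, hit probability p = 0.4, 5 points per hit,
# so the score is 5*m where m is the number of hits (binomial).
n, p = 3, 0.4
series = {5 * m: comb(n, m) * p**m * (1 - p)**(n - m) for m in range(n + 1)}
# probabilities 0.216, 0.432, 0.288, 0.064 — the series of Example 2
print(series)
```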

    Example 3. The probability of an event occurring in one experiment equals p. A series of independent experiments is carried out, continuing until the first occurrence of the event, after which the experiments stop. The random variable X is the number of experiments performed. Construct the distribution series of X.

    Solution. Possible values of X: 1, 2, 3, … (theoretically they are unbounded). For X to take the value 1, the event must occur in the first experiment; the probability of this is p. For X to take the value 2, the event must fail to occur in the first experiment and occur in the second; the probability of this is qp, where q = 1 − p; and so on. The distribution series of X has the form:

    X | 1   2    3     …   k          …
    p | p   qp   q²p   …   q^(k−1)p   …

    The first five ordinates of the distribution polygon for a particular value of p are shown in Fig. 5.1.4.
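The series of Example 3 is easy to tabulate. Since the text does not state which p Fig. 5.1.4 uses, the sketch below assumes p = 0.5 purely for illustration:

```python
# Example 3: geometric distribution P(X = k) = q**(k-1) * p, k = 1, 2, ...
# Assumed p = 0.5 for illustration (the figure's p is not given in the text).
p = 0.5
q = 1 - p
probs = [q**(k - 1) * p for k in range(1, 6)]  # first five ordinates
print(probs)   # [0.5, 0.25, 0.125, 0.0625, 0.03125]
```

The ordinates decrease in geometric progression with ratio q, which is why the polygon falls off exponentially.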

    Example 4. A shooter shoots at a target until the first hit, having 4 rounds of ammunition. The probability of a hit for each shot is 0.6. Construct a distribution series for the amount of ammunition remaining unspent.

    Solution. The random variable X, the number of unspent cartridges, has four possible values: 0, 1, 2 and 3. The probabilities of these values are, respectively:

    P(X = 3) = 0.6 (hit with the first shot);
    P(X = 2) = 0.4·0.6 = 0.24 (a miss, then a hit with the second shot);
    P(X = 1) = 0.4²·0.6 = 0.096 (two misses, a hit with the third shot);
    P(X = 0) = 0.4³ = 0.064 (three misses; the fourth shot is fired in any case).

    The distribution series of X has the form:

    X | 0       1       2      3
    p | 0.064   0.096   0.24   0.6

    The distribution polygon is shown in Fig. 5.1.5.
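A quick numerical check of Example 4 (an illustrative sketch, not part of the original solution):

```python
# Example 4: shooter with 4 cartridges, hit probability p = 0.6;
# X = cartridges left unspent after the first hit (or after all 4 shots).
p, q = 0.6, 0.4
series = {
    3: p,            # hit with the 1st shot
    2: q * p,        # miss, then hit with the 2nd
    1: q**2 * p,     # two misses, hit with the 3rd
    0: q**3,         # three misses: the 4th shot is fired regardless of outcome
}
print(series, sum(series.values()))   # the probabilities sum to 1
```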

    Example 5. A technical device can be used in different conditions and, depending on this, requires adjustment from time to time. When using the device once, it may randomly fall into a favorable or unfavorable mode. In favorable mode, the device can withstand three uses without adjustment; before the fourth it has to be adjusted. In unfavorable mode, the device must be adjusted after the first use. The probability that the device will fall into a favorable mode is 0.7, and that it will fall into an unfavorable mode is 0.3. A random variable is considered - the number of uses of the device before adjustment. Construct its distribution series.

    Solution. The random variable X has three possible values: 1, 2 and 3. The probability that X = 1 equals the probability that on the first use the device falls into the unfavorable mode, i.e. 0.3. For X to take the value 2, the device must be in a favorable mode on the first use and in an unfavorable mode on the second; the probability of this is 0.7·0.3 = 0.21. For X to take the value 3, the device must be in a favorable mode the first two times (after the third use it has to be adjusted anyway); the probability of this is 0.7·0.7 = 0.49.

    The distribution series of X has the form:

    X | 1     2      3
    p | 0.3   0.21   0.49

    The distribution polygon is shown in Fig. 5.1.6.


    Distribution function

    In the previous section we introduced the distribution series as an exhaustive characteristic (distribution law) of a discontinuous random variable. This characteristic, however, is not universal; it exists only for discontinuous random variables. It is easy to see that such a characteristic cannot be constructed for a continuous random variable. Indeed, a continuous random variable has an infinite number of possible values, completely filling a certain interval (a so-called "uncountable set"). It is impossible to compile a table listing all possible values of such a random variable. Moreover, as we shall see later, each individual value of a continuous random variable usually has zero probability. Consequently, for a continuous random variable there is no distribution series in the sense in which it exists for a discontinuous one. However, different regions of possible values of a random variable are still not equally probable, and for a continuous variable a "probability distribution" exists, although not in the same sense as for a discontinuous one.

    To characterize this probability distribution quantitatively, it is convenient to use not the probability of the event X = x but the probability of the event X < x, where x is some current variable. The probability of this event obviously depends on x; it is some function of x. This function is called the distribution function of the random variable X and is denoted F(x):

    F(x) = P(X < x).    (5.2.1)

    The distribution function is sometimes also called the cumulative distribution function or the cumulative distribution law.

    The distribution function is the most universal characteristic of a random variable. It exists for all random variables: both discontinuous and continuous. The distribution function fully characterizes a random variable from a probabilistic point of view, i.e. is one of the forms of the distribution law.

    Let us formulate some general properties of the distribution function.

    1. The distribution function is a non-decreasing function of its argument: for x2 > x1, F(x2) ≥ F(x1).

    2. At minus infinity the distribution function equals zero: F(−∞) = 0.

    3. At plus infinity the distribution function equals one: F(+∞) = 1.

    Without giving a rigorous proof of these properties, we illustrate them with a visual geometric interpretation. Consider the random variable X as a random point on the Ox axis (Fig. 5.2.1), which as a result of the experiment can take one position or another. Then the distribution function F(x) is the probability that, as a result of the experiment, the random point falls to the left of the point x.

    Let us increase x, i.e. move the point x to the right along the abscissa axis. Obviously, the probability that the random point falls to the left of x cannot decrease as we do so; hence the distribution function cannot decrease as x increases.

    To see that F(−∞) = 0, move the point x to the left along the abscissa axis without limit. In the limit, the random point falling to the left of x becomes an impossible event; it is natural to assume that the probability of this event tends to zero, i.e. F(−∞) = 0.

    Similarly, moving the point x to the right without limit, we see that F(+∞) = 1, since in the limit the event X < x becomes certain.

    In the general case, the graph of a distribution function is the graph of a non-decreasing function (Fig. 5.2.2) whose values start at 0 and reach 1; at certain points the function may have jumps (discontinuities).

    Knowing the distribution series of a discontinuous random variable, one can easily construct its distribution function. Indeed,

    F(x) = P(X < x) = Σ_{x_i < x} p_i,

    where the inequality under the summation sign means that the sum extends over all values x_i that are less than x.

    When the current variable x passes through any possible value of the discontinuous random variable, the distribution function jumps, and the size of the jump equals the probability of that value.
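The passage above translates directly into code. A sketch that builds F(x) = P(X < x) from a distribution series, illustrated with the characteristic random variable of Example 1 (the helper name is invented for illustration):

```python
# Step-function CDF from a {value: probability} distribution series.
def make_cdf(series):
    pts = sorted(series.items())
    def F(x):
        # strict inequality: F(x) = P(X < x), summed over all x_i < x
        return sum(p for xi, p in pts if xi < x)
    return F

F = make_cdf({0: 0.7, 1: 0.3})
# F is 0 to the left of all values and ~1 to the right of all of them;
# it jumps by 0.7 at x = 0 and by 0.3 at x = 1.
print(F(-1), F(0.5), F(2))
```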

    Example 1. One experiment is performed, in which an event may or may not occur. The probability of the event is 0.3. The random variable X is the number of occurrences of the event in the experiment (the characteristic random variable of the event). Construct its distribution function.

    An experiment is any realization of specified conditions and actions under which the random phenomenon being studied is observed. Experiments can be characterized qualitatively and quantitatively. A random variable is a quantity that, as a result of an experiment, can take one value or another, and it is not known in advance which one.

    Random variables are usually denoted X, Y, Z, and their corresponding values x, y, z.

    Discrete random variables take separate, isolated values that can be enumerated. Continuous random variables have possible values that continuously fill a certain interval. The distribution law of a random variable is any relation establishing a connection between the possible values of the random variable and the corresponding probabilities. Distribution series and polygon: the simplest form of the distribution law of a discrete variable is the distribution series; its graphical interpretation is the distribution polygon.


    Random variables: discrete and continuous.

    When a stochastic experiment is conducted, a space of elementary events is formed: the possible outcomes of this experiment. A random variable X is said to be given on this space of elementary events if a law (rule) is given that associates a number with each elementary event. Thus the random variable X can be considered as a function defined on the space of elementary events.

    ■ Random variable: a quantity that at each trial takes one or another numerical value (it is not known in advance which one), depending on random causes that cannot be taken into account in advance. Random variables are denoted by capital letters of the Latin alphabet, and the possible values of a random variable by small letters. Thus, when a die is thrown, an event occurs associated with the number x, where x is the number of points rolled. The number of points is a random variable, and the numbers 1, 2, 3, 4, 5, 6 are its possible values. The distance a projectile travels when fired from a gun is also a random variable (depending on the setting of the sight, the strength and direction of the wind, the temperature and other factors), and the possible values of this variable belong to a certain interval (a; b).

    ■ Discrete random variable– a random variable that takes on separate, isolated possible values ​​with certain probabilities. The number of possible values ​​of a discrete random variable can be finite or infinite.

    ■ Continuous random variable– a random variable that can take all values ​​from some finite or infinite interval. The number of possible values ​​of a continuous random variable is infinite.

    For example, the number of points rolled when throwing a die and the score for a test are discrete random variables; the distance a projectile flies when fired from a gun, the error in measuring the time to master educational material, and the height and weight of a person are continuous random variables.

    Distribution law of a random variable: the correspondence between the possible values of a random variable and their probabilities, i.e. each possible value x_i is associated with the probability p_i with which the random variable can take this value. The distribution law of a random variable can be specified tabularly (in the form of a table), analytically (in the form of a formula), or graphically.

    Let a discrete random variable X take the values x1, x2, …, xn with probabilities p1, p2, …, pn respectively, i.e. P(X = x1) = p1, P(X = x2) = p2, …, P(X = xn) = pn. When the distribution law of this variable is given by a table, the first row contains the possible values x1, x2, …, xn and the second row their probabilities:

    X | x1   x2   …   xn
    p | p1   p2   …   pn

    As a result of a trial, a discrete random variable X takes one and only one of its possible values, so the events X = x1, X = x2, …, X = xn form a complete group of pairwise incompatible events; consequently, the sum of the probabilities of these events equals one: p1 + p2 + … + pn = 1.

    Distribution law of a discrete random variable. Distribution polygon.

    As is known, a random variable is a variable that can take certain values depending on chance. Random variables are denoted by capital letters of the Latin alphabet (X, Y, Z), and their values by the corresponding lowercase letters (x, y, z). Random variables are divided into discontinuous (discrete) and continuous.

    A discrete random variable is a random variable that takes values from a finite or countably infinite set only, each with a certain non-zero probability.

    Distribution law of a discrete random variable is a function that connects the values ​​of a random variable with their corresponding probabilities. The distribution law can be specified in one of the following ways.

    1. The distribution law can be given by a table (a distribution series): the first row lists the possible values x1, x2, …, xn, the second row the corresponding probabilities p1, p2, …, pn.

    2. The distribution law can be given analytically, by a formula; for example, for the Poisson distribution

    P(X = k) = λ^k e^(−λ) / k!, where λ > 0, k = 0, 1, 2, … .

    3. The distribution law can be given by the distribution function F(x), which determines for each value x the probability that the random variable X takes a value less than x, i.e. F(x) = P(X < x). The function F(x) is non-decreasing, with 0 ≤ F(x) ≤ 1, F(−∞) = 0 and F(+∞) = 1.

    4. The distribution law can be specified graphically, by a distribution polygon (see task 3).

    Note that to solve some problems it is not necessary to know the distribution law; in some cases it is enough to know one or a few numbers reflecting its most important features. This may be a number giving the "average value" of the random variable, or a number indicating the average size of the deviation of the random variable from its mean value. Numbers of this kind are called numerical characteristics of a random variable.

    Basic numerical characteristics of a discrete random variable:

    • Mathematical expectation (mean value) of a discrete random variable: M(X) = Σ x_i p_i.
      For the binomial distribution M(X) = np; for the Poisson distribution M(X) = λ.
    • Variance of a discrete random variable: D(X) = M[(X − M(X))²], or equivalently D(X) = M(X²) − [M(X)]². The difference X − M(X) is called the deviation of the random variable from its mathematical expectation.
      For the binomial distribution D(X) = npq; for the Poisson distribution D(X) = λ.
    • Standard deviation: σ(X) = √D(X).

    · For a clear presentation of a variation series, graphical representations of it are of great value. Graphically, a variation series can be depicted as a polygon, a histogram or a cumulative curve.

    · A distribution polygon is a broken line constructed in a rectangular coordinate system. The value of the attribute is plotted on the abscissa, the corresponding frequencies (or relative frequencies) on the ordinate. The points (x_i; n_i) (or (x_i; w_i)) are connected by straight segments, giving the distribution polygon. Polygons are most often used to depict discrete variation series, but they can also be used for interval series; in that case the points corresponding to the midpoints of the intervals are plotted on the abscissa axis.


