Accumulation of errors

Analytical Chemistry

UDC 543.08+543.422.7

PREDICTION OF PHOTOMETRY ERRORS USING THE LAW OF ERRORS ACCUMULATION AND THE MONTE CARLO METHOD

V.I. Golovanov, E.M. Danilina

In a computational experiment, with a combination of the law of propagation of errors and the Monte Carlo method, the influence of errors in the preparation of solutions, errors in a blank experiment, and transmission measurement errors on the metrological characteristics of photometric analysis was studied. It is found that the results of predicting errors by analytical and statistical methods are mutually consistent. It is shown that a feature of the Monte Carlo method is the possibility of predicting the distribution law of errors in photometry. On the example of a routine analysis scenario, the influence of the heteroscedasticity of the spread along the calibration curve on the quality of the analysis is considered.

Keywords: photometric analysis, error accumulation law, calibration graph, metrological characteristics, Monte Carlo method, stochastic simulation.

Introduction

Prediction of the errors of photometric analysis is based mainly on the use of the error accumulation law (EAL). For the case of a linear form of the law of light absorption, -lg T = A = εlc, the EAL is usually written as the equation:

s_A / A = s_c / c = (0.434·10^A / A)·s_T. (1)

In this case, the standard deviation of the transmission measurement is assumed to be constant over the entire dynamic range of the photometer. At the same time, as noted in , in addition to instrumental errors, the accuracy of the analysis is affected by the error of the blank experiment, the error in setting the limits of the instrument scale, the cuvette error, chemical factors, and the error in setting the analytical wavelength. These factors are considered the main sources of error in the analysis result. Contributions to the accumulated error from the accuracy of preparation of the calibration solutions are usually neglected.

From this we see that equation (1) does not have significant predictive power, since it takes into account the influence of only one factor. In addition, equation (1) is a consequence of an approximate expansion of the law of light absorption in a Taylor series, which raises the question of its accuracy owing to the neglect of the expansion terms above the first order. Mathematical analysis of the expansion remainder involves computational difficulties and is not used in the practice of chemical analysis.

The purpose of this work is to study the possibility of using the Monte Carlo method (the method of statistical tests) as an independent means of studying and predicting the accumulation of errors in photometric analysis, complementing and deepening the capabilities of the EAL.

Theoretical part

In this work, we will assume that the final random error of the calibration function is due not only to instrumental errors in measuring the optical density, but also to the errors in setting the instrument scale to 0 and 100% transmission (the error of the blank experiment), as well as to the errors in preparing the calibration solutions. We neglect the other sources of error mentioned above. Then we rewrite the equation of the Bouguer-Lambert-Beer law in a form convenient for the further construction:

A = k·c′ + A_bl. (2)

In this equation, c_st is the concentration of the parent standard solution of the colored substance, aliquots (V_a) of which are diluted in flasks with a nominal volume V_fl to obtain the calibration series of solutions, and A_bl is the optical density of the blank-experiment solution. Since, during photometry, the optical densities of the test solutions are measured relative to the blank solution, i.e., A_bl is taken as a conditional zero, we have A_bl = 0. (Note that the optical density measured in this case can be called a conditional extinction.) In equation (2), the dimensionless quantity c′ = V_a/V_fl has the meaning of the concentration of the working solution, expressed in units of the concentration of the parent standard. We call the coefficient k the extinction of the standard, since A_st = εlc_st at c′ = 1.

Let us apply to expression (2) the operator of the law of accumulation of random errors, assuming V_a, V_fl, and A_bl to be random variables. We get:

s_A^2 = (k·c′)^2·[s_r^2(V_a) + (s(V_fl)/V_fl)^2] + s^2(A_bl). (3)

Another independent random variable that affects the spread of A values is the degree of transmission, since

A = -lg T, (4)

therefore, we add one more term to the sum of variances on the right-hand side of Eq. (3):

s_A^2 = (0.434·10^A)^2·s_T^2 + s^2(A_bl) + (k·c′)^2·[s_r^2(V_a) + (s(V_fl)/V_fl)^2]. (5)

In this final record of the law of accumulation of errors, the absolute standard deviations of T, A_bl and V_fl are constant, while for V_a the relative standard error is constant.

When constructing a stochastic model of the calibration function based on the Monte Carlo method, we assume that the possible values x* of the random variables T, A_bl, V_a and V_fl are distributed according to the normal law. Following the Monte Carlo principle, we play the possible values by the inverse function method:

x_i* = M(x_i) + Φ^(-1)(r_j)·s_x, (6)

where M(x_i) is the expectation (actual value) of the variable, Φ(r) is the Laplace-Gauss function, r_j are the possible values of a random variable R uniformly distributed over the interval (0, 1), i.e., random numbers, s_x is the standard deviation of the corresponding variable, and i = 1…m is the ordinal number of the independent random variable. After substituting expression (6) into equations (4) and (2), we have:

A* = -lg T* = -lg[10^(-A′) + Φ^(-1)(r_j)·s_T], (7)

where A′ = k·(V_a*/V_fl*) + A_bl*, with V_a*, V_fl* and A_bl* played according to (6).

Calculations according to equation (7) return a separate realization of the calibration function, i.e., the dependence of A* on the mathematical expectation M(c′) (the nominal value of c′). Therefore, record (7) is an analytical expression of a random function. The cross sections of this function are obtained by repeatedly playing the random numbers at each point of the calibration dependence and are then processed by the methods of mathematical statistics in order to estimate the general parameters of the calibration and to test hypotheses about the properties of the general population.
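As an illustration, here is a minimal sketch in Python of playing one realization of the calibration function by equations (2), (6) and (7). The article's own program is an MS Excel sheet; the code below is our equivalent, with parameter values taken from the computational experiment described further on, and all function and variable names being our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

k = 5.0                          # extinction of the standard, eq. (2)
V_fl_nom = 50.0                  # nominal flask volume, ml
V_a_nom = np.arange(1.0, 18.0)   # aliquots 1..17 ml

s_T   = 0.0012   # absolute SD of the transmission (0.12% T)
s_Abl = 0.007    # absolute SD of the blank experiment
sr_Va = 0.011    # relative SD of the aliquot volume (1.1%)
s_Vfl = 0.05     # absolute SD of the flask volume, ml

def play_calibration(rng):
    """One realization of the calibration function, eqs. (2), (6), (7)."""
    V_a  = V_a_nom * (1.0 + sr_Va * rng.standard_normal(V_a_nom.size))
    V_fl = V_fl_nom + s_Vfl * rng.standard_normal(V_a_nom.size)
    A_bl = s_Abl * rng.standard_normal(V_a_nom.size)
    A_prime = k * V_a / V_fl + A_bl                  # eq. (2), played values
    T_star = 10.0 ** (-A_prime) + s_T * rng.standard_normal(V_a_nom.size)
    return -np.log10(T_star)                         # eq. (7)

print(np.round(play_calibration(rng), 3))
```

Each call of play_calibration() is one cross section of the random function; repeating the call gives the statistical material discussed below.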

Obviously, the two approaches we are considering to the problem of predicting metrological characteristics in photometry, one based on the EAL and the other based on the Monte Carlo method, should complement each other. In particular, from equation (5) one can obtain a result with a much smaller amount of computation than from (7), and one can also rank the random variables by the significance of their contributions to the resulting error. Ranking makes it possible to dispense with a screening experiment in the statistical tests and to exclude insignificant variables from consideration a priori. Equation (5) is easy to analyze mathematically in order to judge the nature of the contributions of the factors to the total variance: the partial contributions of the factors are either independent of A or increase with increasing optical density. Therefore, s_A as a function of A must be a monotonically increasing dependence without a minimum. When the experimental data are approximated by equation (5), partial contributions of the same nature will be mixed; for example, a constant instrumental contribution may merge with the error of the blank experiment. On the other hand, statistical testing of the model by the Monte Carlo method makes it possible to reveal such important properties of the calibration graph as the law (laws) of the distribution of errors, and also to evaluate the rate of convergence of the sample estimates to the general ones. Such an analysis is impossible on the basis of the EAL alone.

Description of the computational experiment

When constructing the simulation model of the calibration, we assume that the calibration series of solutions was prepared in volumetric flasks with a nominal capacity of 50 ml and a maximum error of ±0.05 ml. Aliquots of from 1 to 17 ml of the stock standard solution are added to the series of flasks with a pipetting error of about 1%. The volume measurement errors were evaluated according to the reference book . Aliquots are added in 1 ml increments; in total there are 17 solutions in the series, whose optical densities cover the range from 0.1 to 1.7 units. Then, in equation (2), the coefficient k = 5. The error of the blank experiment is taken at the level of 0.01 units of optical density. The errors in measuring the degree of transmission, according to , depend only on the class of the instrument and lie in the range from 0.1 to 0.5% T.

To tie the conditions of the computational experiment more closely to a laboratory experiment, we used data on the reproducibility of measurements of the optical densities of K2Cr2O7 solutions in the presence of 0.05 M H2SO4 on an SF-26 spectrophotometer. The authors approximate the experimental data on the interval A = 0.1…1.5 by the parabola equation:

s_repr·10^3 = 7.9 - 3.53·A + 10.3·A^2. (8)

We managed to fit the calculations by the theoretical equation (5) to the calculations by the empirical equation (8) using Newton's optimization method. We found that equation (5) satisfactorily describes the experiment at s(T) = 0.12%, s(A_bl) = 0.007, and s_r(V_a) = 1.1%.

The independent error estimates given in the previous paragraph are in good agreement with those found during the fitting. For the calculations according to equation (7), a program was created in the form of an MS Excel spreadsheet. The most significant feature of our Excel program is the use of NORMINV(RAND()) to generate normally distributed errors; see equation (6). In the special literature on statistical calculations in Excel , the Random Number Generation utility is described in detail; in many cases it is preferable to replace it with functions of the NORMINV(RAND()) type. Such a replacement is especially convenient when creating one's own Monte Carlo simulation programs.

Results and discussion

Before proceeding to the statistical tests, let us estimate the contributions of the terms on the right-hand side of Eq. (5) to the total variance of the optical density. To do this, each term is normalized to the total variance. The calculations were performed at s(T) = 0.12%, s(A_bl) = 0.007, s_r(V_a) = 1.1%, and s(V_fl) = 0.05 ml. The calculation results are shown in Fig. 1. We see that the contributions of the measurement errors of V_fl to the total variance can be neglected.

The contributions of the other volume, V_a, dominate in the range of optical densities 0.8–1.2. However, this conclusion is not of a general nature, since when measuring on a photometer with s(T) = 0.5%, the calibration errors, according to the calculation, are determined mainly by the scatter of A_bl and the scatter of T. Fig. 2 compares the relative errors of the optical density measurements predicted by the EAL (solid line) and by the Monte Carlo method (markers). In the statistical tests, the error curve was reconstructed from 100 realizations of the calibration dependence (1700 values of optical density). We see that both predictions are mutually consistent: the points are grouped uniformly around the theoretical curve. However, even with such rather impressive statistical material, complete convergence is not observed. In any case, the scatter does not allow revealing the approximate nature of the EAL; see the introduction.
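A sketch of this comparison, reusing the parameters and play_calibration() from the fragment above: the EAL curve of equation (5) against sample standard deviations over repeated realizations.

```python
n = 100
A_nom = k * V_a_nom / V_fl_nom                   # nominal optical densities
c_term = (k * V_a_nom / V_fl_nom) ** 2 * (sr_Va ** 2 + (s_Vfl / V_fl_nom) ** 2)
s2_A = (0.434 * 10.0 ** A_nom * s_T) ** 2 + s_Abl ** 2 + c_term
rel_eal = np.sqrt(s2_A) / A_nom                  # EAL prediction, eq. (5)

sims = np.array([play_calibration(rng) for _ in range(n)])
rel_mc = sims.std(axis=0, ddof=1) / A_nom        # Monte Carlo estimate

for a, t, m_ in zip(A_nom, rel_eal, rel_mc):
    print(f"A = {a:4.2f}   EAL {t:.4f}   MC {m_:.4f}")
```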

Fig. 1. Weighted contributions of the terms of equation (5) to the variance of A (abscissa: A from 0 to 1.6): 1 - for A_bl; 2 - for V_a; 3 - for T; 4 - for V_fl

Fig. 2. Error curve of the calibration graph

It is known from the theory of mathematical statistics that with interval estimation of the mathematical expectation of a random variable, the reliability of estimation increases if the distribution law for this variable is known. In addition, in the case of a normal distribution, the estimate is the most efficient. Therefore, the study of the law of distribution of errors in the calibration graph is an important task. In such a study, first of all, the hypothesis of the normality of the spread of optical densities at individual points of the graph is tested.

A simple way to test the main hypothesis is to calculate the skewness (a) and kurtosis (e) coefficients of the empirical distributions and to compare them with the critical values. The reliability of the statistical inference increases with the volume of sample data. Fig. 3 shows the sequences of the coefficients for the 17 sections of the calibration function; the coefficients were calculated from the results of 100 tests at each point. The critical values of the coefficients for our example are |a| = 0.72 and |e| = 0.23.
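A sketch of such a test, reusing the sims array of realizations from the fragment above (scipy is assumed to be available):

```python
from scipy import stats

a = stats.skew(sims, axis=0)          # skewness at each of the 17 sections
e = stats.kurtosis(sims, axis=0)      # excess kurtosis (0 for a normal law)
for i, (ai, ei) in enumerate(zip(a, e), start=1):
    print(f"point {i:2d}: a = {ai:+.3f}, e = {ei:+.3f}")
```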

From Fig. 3 we can conclude that the dispersion of the values at the points of the graph, on the whole, does not contradict the normality hypothesis, since the sequences of coefficients have almost no preferred directionality. The coefficients are randomly localized near the zero line (shown by the dotted line); for a normal distribution, as is known, the expectations of the skewness and kurtosis coefficients are zero. Judging by the fact that for all sections the skewness coefficients are significantly below the critical value, we can speak confidently about the symmetry of the distribution of the calibration errors. It is possible that the error distributions are slightly peaked compared to the normal distribution curve; this conclusion follows from the small positive shift of the central line of the scatter of the kurtosis coefficients observed in Fig. 3. Thus, from the study of the model (2) of the generalized calibration function of photometric analysis by the Monte Carlo method, we can conclude that the distribution of the calibration errors is close to normal. Therefore, the calculation of confidence intervals for the results of photometric analysis using Student's coefficients can be considered quite justified.

Fig. 3. Kurtosis coefficients (1) and skewness coefficients (2) at the points of the calibration graph

When performing the stochastic modeling, the rate of convergence of the sample error curves (see Fig. 2) to the mathematical expectation of the curve was estimated; for the mathematical expectation of the error curve we take the curve calculated from the EAL. The closeness of the results of statistical tests with different numbers n of realizations of the calibration to the theoretical curve is estimated by the uncertainty coefficient 1 - R^2, which characterizes the proportion of variation in the sample that could not be described theoretically. We have established that the dependence of the uncertainty coefficient on the number of realizations of the calibration function can be described by the empirical equation 1 - R^2 = -2.3·n^(-1) + 1.6·n^(-1/2) - 0.1. From this equation we find that at n = 213 one should expect an almost complete coincidence of the theoretical and empirical error curves. Thus, a consistent estimate of the errors of photometric analysis can be obtained only on fairly large statistical material.
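A sketch of how the uncertainty coefficient can be estimated from the simulation at several values of n (reusing play_calibration(), A_nom and rel_eal from the fragments above):

```python
for n_real in (10, 30, 100, 300):
    sims_n = np.array([play_calibration(rng) for _ in range(n_real)])
    rel_n = sims_n.std(axis=0, ddof=1) / A_nom
    ss_res = np.sum((rel_n - rel_eal) ** 2)   # not explained by the EAL curve
    ss_tot = np.sum((rel_n - rel_n.mean()) ** 2)
    print(f"n = {n_real:3d}:  1 - R^2 = {ss_res / ss_tot:.3f}")
```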

Let us consider the possibilities of the statistical test method for predicting the results of regression analysis of a calibration curve and of using the curve to determine the concentrations of photometered solutions. As a scenario, we choose the measurement situation of routine analysis: the graph is constructed with single measurements of the optical densities of a series of standard solutions, and the concentration of the analyzed solution is found from the graph from 3-4 results of parallel measurements. When choosing a regression model, one should take into account that the spread of optical densities at different points of the calibration curve is not the same; see equation (8). In the case of heteroscedastic scatter, it is recommended to use the weighted least squares (WLS) scheme. However, in the literature we did not find clear indications of the reasons why the classical ordinary least squares (OLS) scheme, one of whose applicability conditions is the requirement of homoscedastic scatter, is less preferable. These reasons can be established by processing the same statistical material, obtained by the Monte Carlo method under the routine analysis scenario, with the two versions of least squares, classical and weighted.

As a result of regression analysis of only one realization of the calibration function, the following OLS estimates were obtained: k = 4.979 with s_k = 0.023. Evaluating the same characteristics by WLS, we obtain k = 5.000 with s_k = 0.016. The regressions were reconstructed from 17 standard solutions; the concentrations in the calibration series increased in arithmetic progression, and the optical densities changed just as uniformly in the range from 0.1 to 1.7 units. In the WLS case, the statistical weights of the points of the calibration curve were found from the variances calculated by equation (5).
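For illustration, a minimal sketch of the two estimators on a single realization of the calibration; the through-the-origin form A = k·c′ and the variable names are our simplifying assumptions, and the WLS weights are taken from the EAL variances s2_A of equation (5) computed above.

```python
c_nom = V_a_nom / V_fl_nom           # nominal concentrations c'
A_star = play_calibration(rng)       # one realization of the calibration

# OLS through the origin: minimize sum (A - k*c)^2
k_ols = np.sum(c_nom * A_star) / np.sum(c_nom ** 2)

# WLS: statistical weights inversely proportional to the EAL variances (5)
w = 1.0 / s2_A
k_wls = np.sum(w * c_nom * A_star) / np.sum(w * c_nom ** 2)

print(f"k(OLS) = {k_ols:.3f}   k(WLS) = {k_wls:.3f}")
```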

The variances of the estimates of the two methods are statistically indistinguishable by Fisher's test at the 1% significance level. However, at the same significance level, the WLS estimate of k differs from the OLS estimate by the t-test. The OLS estimate of the coefficient of the calibration curve is biased relative to the actual value M(k) = 5.000, judging by the t-test at the 5% significance level, whereas the weighted least squares gives an estimate that does not contain a systematic error.

Let us now find out how neglecting heteroscedasticity can affect the quality of a chemical analysis. The table shows the results of a simulation experiment on the analysis of 17 control samples of a colored substance with different concentrations. Each analytical series included four solutions, i.e., four parallel determinations were made for each sample. To process the results, two different calibration dependences were used: one reconstructed by the simple least squares method and the other by the weighted one. We assume that the control solutions were prepared for the analysis in exactly the same way as the calibration solutions.

From the table we can see that the actual values of the concentrations of the control solutions, both in the case of WLS and in the case of OLS, do not go beyond the confidence intervals; i.e., the analysis results do not contain significant systematic errors. The limiting errors of the two methods do not differ statistically; in other words, both estimates have the same efficiency. From this we can conclude that in routine analyses the use of the simple unweighted least squares scheme is fully justified. The use of WLS is preferable if the research task is only the determination of the molar extinction. On the other hand, it should be borne in mind that our conclusions are of a statistical nature. It is likely that with an increase in the number of parallel determinations the hypothesis of unbiasedness of the OLS concentration estimates will not be confirmed, even if the systematic errors are insignificant from a practical point of view.

The sufficiently high quality of the analysis based on the simple classical least squares scheme seems especially unexpected if one takes into account that very strong heteroscedasticity is observed over the optical density range 0.1–1.7. The degree of data heterogeneity can be judged from the weight function, which is well approximated by the polynomial w = 0.057·A^2 - 0.193·A + 0.173. It follows from this equation that at the extreme points of the calibration the statistical weights differ by more than an order of magnitude. However, note that the calibration functions were reconstructed from 17 points of the graph, while only 4 parallel determinations were performed in the analysis. Therefore, the significant difference between the OLS and WLS calibration functions that we found, and the slight difference between the results of analysis using these functions, can be explained by the considerably different numbers of degrees of freedom available when the statistical conclusions were drawn.

Conclusion

1. A new approach to stochastic modeling in photometric analysis is proposed based on the Monte Carlo method and the error accumulation law using an Excel spreadsheet.

2. On the basis of 100 realizations of the calibration dependence, it is shown that the predictions of errors by the analytical and statistical methods are mutually consistent.

3. The skewness and kurtosis coefficients along the calibration curve were studied. It was found that the calibration errors obey a distribution law close to normal.

4. The effect of the heteroscedasticity of the spread of optical densities during calibration on the quality of the analysis is considered. It was found that in routine analyses the use of the simple unweighted least squares scheme does not lead to a noticeable decrease in the accuracy of the analysis results.

Literature

1. Bernstein, I.Ya. Spectrophotometric analysis in organic chemistry / I.Ya. Bernstein, Yu.L. Kaminsky. - L.: Chemistry, 1986. - 200 p.

2. Bulatov, M.I. A practical guide to photometric methods of analysis / M.I. Bulatov, I.P. Kalinkin. - L.: Chemistry, 1986. - 432 p.

3. Gmurman, V.E. Probability theory and mathematical statistics / V.E. Gmurman. - M.: Higher school, 1977. - 470 p.

Table. Comparison of the results of determining the concentrations of control solutions by the two methods (P = 95%)

No. | c′, given | c′, found by OLS | c′, found by WLS

1 0.020 0.021±0.002 0.021±0.002

2 0.040 0.041±0.001 0.041±0.001

3 0.060 0.061±0.003 0.061±0.003

4 0.080 0.080±0.004 0.080±0.004

5 0.100 0.098±0.004 0.098±0.004

6 0.120 0.122±0.006 0.121±0.006

7 0.140 0.140±0.006 0.139±0.006

8 0.160 0.163±0.003 0.162±0.003

9 0.180 0.181±0.006 0.180±0.006

10 0.200 0.201±0.002 0.200±0.002

11 0.220 0.219±0.008 0.218±0.008

12 0.240 0.242±0.002 0.241±0.002

13 0.260 0.262±0.008 0.261±0.008

14 0.280 0.281±0.010 0.280±0.010

15 0.300 0.307±0.015 0.306±0.015

16 0.320 0.325±0.013 0.323±0.013

17 0.340 0.340±0.026 0.339±0.026

4. Pravdin, P.V. Laboratory instruments and equipment made of glass / P.V. Pravdin. - M.: Chemistry, 1988.-336 p.

5. Makarova, N.V. Statistics in Excel / N.V. Makarova, V.Ya. Trofimets. - M.: Finance and statistics, 2002. - 368 p.

PREDICTION OF ERRORS IN PHOTOMETRY WITH THE USE OF ACCUMULATION OF ERRORS LAW AND MONTE CARLO METHOD

In a computational experiment combining the error accumulation law and the Monte Carlo method, the influence of solution preparation errors, blank experiment errors, and optical transmission measurement errors upon the metrological performance of photometric analysis has been studied. The results of prediction by the analytical and statistical methods have been shown to be mutually consistent. A distinctive feature of the Monte Carlo method is found to be the possibility of predicting the distribution law of errors in photometry. For a routine analysis scenario, the influence of the heteroscedasticity of the spread along the calibration curve upon the quality of the analysis has been studied.

Keywords: photometric analysis, accumulation of errors law, calibration curve, metrological performance, Monte Carlo method, stochastic modeling.

Golovanov Vladimir Ivanovich - Dr. Sc. (Chemistry), Professor, Head of the Analytical Chemistry Subdepartment, South Ural State University.


Email: [email protected]

Danilina Elena Ivanovna - PhD (Chemistry), Associate Professor, Analytical Chemistry Subdepartment, South Ural State University.


Accumulation of errors in the numerical solution of algebraic equations is the total effect of the roundings made at individual steps of the computational process on the accuracy of the resulting solution of a system of linear algebraic equations. The most common technique for an a priori estimate of the total influence of rounding errors in the numerical methods of linear algebra is the so-called backward analysis scheme. As applied to the solution of a system of linear algebraic equations

Ax = b, (1)

the backward analysis scheme is as follows. The solution x̃ computed by a direct method M does not satisfy (1) exactly, but it can be represented as the exact solution of the perturbed system

(A + F)·x̃ = b + k. (2)

The quality of the direct method is estimated by the best a priori bound that can be given for the norms of the matrix F and the vector k. Such "best" F and k are called, respectively, the matrix and the vector of the equivalent perturbation for the method M.

If estimates for ||F|| and ||k|| are available, then the error of the approximate solution x̃ can theoretically be estimated by the inequality

||x - x̃|| / ||x|| ≤ cond(A)·(||F||/||A|| + ||k||/||b||). (3)

Here cond(A) = ||A||·||A^(-1)|| is the condition number of the matrix A, and the matrix norm in (3) is assumed to be subordinate to the vector norm.

In reality, an estimate for the norm of A^(-1) is rarely known, and the main meaning of (2) is the possibility of comparing the quality of different methods. Below is the form of some typical estimates for the matrix F. For methods with orthogonal transformations and floating-point arithmetic (A and b in system (1) are considered real),

||F||_E ≤ f(n)·ε·||A||_E. (4)

In this estimate, ε is the relative precision of the arithmetic operations of the computer, ||·||_E is the Euclidean matrix norm, and f(n) is a function of the form C·n^k, where n is the order of the system. The exact values of the constant C and of the exponent k are determined by such details of the computational process as the method of rounding, the use of accumulation of scalar products, etc. Most often, k = 1 or 3/2.

In the case of Gauss-type methods, the right-hand side of estimate (4) also includes a growth factor, which reflects the possible growth of the elements of the matrix A at intermediate steps of the method compared with the initial level (no such growth occurs in orthogonal methods). To reduce this factor, various strategies for choosing the pivot element are used, which hold back the growth of the matrix elements.

For the square-root (Cholesky) method, which is usually applied in the case of a positive-definite matrix A, the strongest estimate of this kind is obtained.

There are direct methods (Jordan, bordering, conjugate gradients) for which direct application of the backward analysis scheme does not lead to effective estimates. In these cases, other considerations are also applied in the study of the accumulation of errors (see -).
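As a small numerical illustration of the backward-analysis viewpoint (not from the encyclopedia entry itself; numpy's solver stands in for an arbitrary direct method), the computed solution can be gauged by its normwise backward error, i.e., by how small a relative perturbation of A suffices to make x̃ exact:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

x = np.linalg.solve(A, b)            # a Gauss-type direct method
eta = np.linalg.norm(b - A @ x) / (np.linalg.norm(A) * np.linalg.norm(x))
print(f"backward error ~ {eta:.2e}, cond(A) ~ {np.linalg.cond(A):.2e}")
```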

Lit.: Givens W., "U.S. Atomic Energy Commiss. Repts. Ser. ORNL", 1954, No. 1574; Wilkinson J. H., Rounding Errors in Algebraic Processes, L., 1963; Wilkinson J.

Kh. D. Ikramov.

Accumulation of errors of rounding or of the method arises in the solution of problems where the solution is the result of a large number of sequentially performed arithmetic operations.

A significant part of such problems is connected with the solution of algebraic problems, linear or nonlinear (see above). In turn, among the algebraic problems the most common are those arising in the approximation of differential equations. These problems are characterized by certain specific peculiarities.

The accumulation of the error of the method of solving a problem follows the same or simpler laws than the accumulation of the computational error; it is investigated when a method for solving a problem is being evaluated.

When studying the accumulation of computational errors, two approaches are distinguished. In the first case, it is considered that the computational errors at each step are introduced in the most unfavorable way and a majorant error estimate is obtained. In the second case, these errors are considered to be random with a certain distribution law.

The nature of the accumulation of errors depends on the problem being solved, the method of solution, and a number of other factors that may at first glance seem insignificant: the form of number representation in the computer (fixed point or floating point), the order in which the arithmetic operations are executed, etc. For example, in the problem of computing the sum of N numbers

A_N = a_1 + a_2 + … + a_N,

the order in which the operations are performed is important. Let the calculations be performed on a floating-point machine with t binary digits, all the numbers satisfying |a_i| < 1. When A_N is computed directly by the recurrent formula A_k = A_(k-1) + a_k, the majorant error estimate is of order 2^(-t)·N. One can proceed otherwise (see ). Pairwise sums a_i^(1) = a_(2i-1) + a_(2i) are computed first (if N = 2l + 1 is odd, one sets a_(l+1)^(1) = a_(2l+1)); then their pairwise sums are computed, and so on. After log2 N steps one

obtains a majorant error estimate of order 2^(-t)·log2 N.

In typical problems, the quantities a_i are computed by formulas, in particular recurrent ones, or enter the main memory of the computer sequentially; in these cases the technique described above increases the load on the computer memory. However, the sequence of computations can be organized so that the RAM load never exceeds about log2 N cells.
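A sketch of the two summation schemes in Python (float32 stands in for a short machine word; the data are hypothetical):

```python
import numpy as np

def naive_sum(a):
    """Direct recurrent summation A_k = A_(k-1) + a_k."""
    s = np.float32(0.0)
    for x in a:
        s = np.float32(s + x)
    return s

def pairwise_sum(a):
    """Pairwise summation: about log2(N) levels of pairwise additions."""
    a = list(a)
    while len(a) > 1:
        if len(a) % 2:                       # odd N: carry the last term over
            a.append(np.float32(0.0))
        a = [np.float32(a[i] + a[i + 1]) for i in range(0, len(a), 2)]
    return a[0]

a = np.random.default_rng(0).random(1 << 16, dtype=np.float32)
exact = np.sum(a, dtype=np.float64)          # reference in double precision
print("naive   :", abs(float(naive_sum(a)) - exact))
print("pairwise:", abs(float(pairwise_sum(a)) - exact))
```

On such data the pairwise scheme typically loses one to two orders of magnitude less accuracy, in line with the N versus log2 N majorant estimates.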

In the numerical solution of differential equations, the following cases are possible. As the grid step h tends to zero, the error grows as s^(1/h), where s > 1. Such methods for solving problems are classified as unstable; their use is of an episodic character.

Stable methods are characterized by an increase in error of the form A(h)·h^(-q). The error of such methods is usually estimated as follows: an equation is constructed for the perturbation introduced either by the roundings or by the errors of the method, and then the solution of this equation is investigated (see , ).

In more complex cases, the method of equivalent perturbations (see , ) is used, developed in relation to the problem of studying the accumulation of computational errors in solving differential equations (see , , ). Calculations according to some calculation scheme with roundings are considered as calculations without roundings, but for an equation with perturbed coefficients. By comparing the solution of the original grid equation with the solution of the equation with perturbed coefficients, an error estimate is obtained.

Considerable attention is paid to choosing a method with, if possible, smaller values of q and A(h). For a fixed method of solving a problem, the calculation formulas can usually be transformed to a form with a smaller value of q (see , ). This is especially important in the case of ordinary differential equations, where the number of steps in some cases turns out to be very large.

The value of A(h) can grow strongly with an increase in the interval of integration; therefore one tries to apply methods with, if possible, a smaller value of A(h). In the case of the Cauchy problem, the rounding error at each specific step can, with respect to the subsequent steps, be considered as an error in the initial condition. Therefore the infimum of A(h) depends on the characteristic of the divergence of close solutions of the differential equation defined by the equation in variations.

In the case of the numerical solution of an ordinary differential equation y′ = f(x, y), the equation in variations has the form

z′ = f_y(x, y(x))·z,

and therefore, when solving the problem on a segment (x_0, X), one cannot count on the constant A(h) in the majorant estimate of the computational error being significantly better than

exp( ∫ from x_0 to X of f_y(x, y(x)) dx ).

Therefore, in solving this problem, one-step methods of Runge-Kutta type or methods of Adams type (see , ) are most commonly used; for them the accumulation of errors is mainly determined by the solution of the equation in variations.

For a number of methods, the principal term of the method error accumulates according to a similar law, while the computational error accumulates much faster (see ). The practical domain of applicability of such methods turns out to be substantially narrower.

The accumulation of the computational error depends essentially on the method used to solve the grid problem. For example, when grid boundary value problems corresponding to ordinary differential equations are solved by the shooting and sweep methods, the accumulation of errors has the character A(h)·h^(-q) with the same q, but the values of A(h) for these methods may differ so much that in a certain situation one of the methods becomes inapplicable. When the grid boundary value problem for the Laplace equation is solved by the shooting method, the accumulation of errors has the character s^(1/h), s > 1, while for the sweep method it is A(h)·h^(-q). In the probabilistic approach to the study of the accumulation of errors, in some cases some law of error distribution is assumed a priori (see ); in other cases, a measure is introduced on the space of the problems under consideration and, based on this measure, the distribution law of the rounding errors is obtained (see , ).

With moderate accuracy requirements in the solution of a problem, the majorant and the probabilistic approaches to estimating the accumulation of computational errors usually give qualitatively the same results: either the accumulated error remains within acceptable limits in both cases, or in both cases it exceeds such limits.

Lit.: Voevodin V. V., Computational Foundations of Linear Algebra, M., 1977; Shura-Bura M. R., "Prikladnaya Matematika i Mekhanika", 1952, v. 16, No. 5, p. 575-88; Bakhvalov N. S., Numerical Methods, 2nd ed., M., 1975; Wilkinson J. H., The Algebraic Eigenvalue Problem, trans. from English, M., 1970; Bakhvalov N. S., in: Computational Methods and Programming, issue 1, M., 1962, p. 69-79; Godunov S. K., Ryaben'kii V. S., Difference Schemes, 2nd ed., M., 1977; Bakhvalov N. S., "Doklady Akademii Nauk SSSR", 1955, v. 104, No. 5, p. 683-86; idem, "Zh. Vychisl. Matem. i Matem. Fiz.", 1964, v. 4, No. 3, p. 399-404; Lapshin E. A., ibid., 1971, v. 11, No. 6, p. 1425-36.


By the error of a measurement we will mean the totality of all measurement errors.

Measurement errors can be classified into the following types:

absolute and relative;

positive and negative;

constant and proportional;

random and systematic.

The absolute error (Δy) is defined as the difference between the following values:

Δy_i = y_i - y_true ≈ y_i - ȳ,

where y_i is a single measurement result; y_true is the true measurement result; ȳ is the arithmetic mean value of the measurement results (hereinafter, the mean).

The absolute error is called constant if it does not depend on the value of the measured quantity y.

The error is called proportional if such a dependence exists. The nature of the measurement error (constant or proportional) is determined in special studies.

The relative error of a single measurement result (B_y) is calculated as the ratio of the following quantities:

B_y = Δy_i / y.

It follows from this formula that the magnitude of the relative error depends not only on the magnitude of the absolute error but also on the value of the measured quantity. When the measured value y remains unchanged, the relative measurement error can be reduced only by reducing the magnitude of the absolute error Δy_i. When the absolute measurement error is constant, the relative measurement error can be reduced by increasing the value of the measured quantity.

The sign of the error (positive or negative) is determined by the difference between the single and the obtained (arithmetic mean) measurement result:

y_i - ȳ > 0 (the error is positive);

y_i - ȳ < 0 (the error is negative).

A gross measurement error (blunder) occurs when the measurement procedure is violated. A measurement result containing a gross error usually differs significantly in magnitude from the other results. The presence of gross measurement errors in a sample is established only by the methods of mathematical statistics (with the number of measurement repetitions n > 2). Get acquainted with the methods for detecting gross errors yourself in .

Random errors are errors that have no constant value and sign. Such errors arise under the influence of factors that are unknown to the researcher; known but unregulated; or constantly changing.

Random errors can only be estimated after measurements have been taken.

The following parameters can be used as a quantitative estimate of the magnitude of the random measurement error: the sample variances of single values and of the mean; the sample absolute standard deviations of single values and of the mean; the sample relative standard deviations of single values and of the mean; the general variance of single values, etc.

Random measurement errors cannot be excluded; they can only be reduced. One of the main ways to reduce the magnitude of the random measurement error is to increase the number of single measurements (the sample size n). This is explained by the fact that the magnitude of the random error of the mean is inversely proportional to the square root of n, for example:

s(ȳ) = s(y) / √n.
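A short sketch illustrating this 1/√n behavior on simulated data (the value of sigma is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.10                       # SD of a single measurement (assumed)
for n in (4, 16, 64, 256):
    means = rng.normal(0.0, sigma, size=(10_000, n)).mean(axis=1)
    print(f"n = {n:3d}:  s(mean) = {means.std(ddof=1):.4f}"
          f"   sigma/sqrt(n) = {sigma / np.sqrt(n):.4f}")
```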

Systematic errors are errors with constant magnitude and sign or varying according to a known law. These errors are caused by constant factors. Systematic errors can be quantified, reduced, and even eliminated.

Systematic errors are classified into errors of types I, II, and III.

Type I systematic errors are errors of known origin that can be estimated by calculation prior to the measurement. These errors can be eliminated by introducing corrections into the measurement result. An example of this type of error is the error in the titrimetric determination of the volume concentration of a solution when the titrant was prepared at one temperature and the concentration was measured at another. Knowing the dependence of the titrant density on temperature, one can calculate, before the measurement, the change in the volume concentration of the titrant associated with the change in its temperature and take this difference into account as a correction to the measurement result.

Type II systematic errors are errors of known origin that can be assessed only during an experiment or as a result of special studies. This type includes instrumental, reagent, reference, and other errors. Get acquainted with the features of such errors yourself in .

Any device, when used in the measurement procedure, introduces its instrumental errors into the measurement result. At the same time, some of these errors are random, and the other part is systematic. Random instrument errors are not evaluated separately, they are evaluated together with all other random measurement errors.

Each instance of any instrument has its own individual systematic error. In order to evaluate this error, special studies must be conducted.

The most reliable way to assess type II instrumental systematic error is to check instrument performance against standards. For measuring utensils (pipette, burette, cylinders, etc.) a special procedure is carried out - calibration.

In practice, it is most often required not to estimate but to reduce or eliminate a type II systematic error. The most common methods for reducing systematic errors are the methods of relativization and randomization. Get acquainted with these methods yourself in .

Type III errors are errors of unknown origin. These errors can be detected only after all type I and type II systematic errors have been eliminated.

Other errors include all the error types not considered above (permissible errors, possible limiting errors, etc.).

The concept of possible limiting errors is used when working with measuring instruments and refers to the maximum possible instrumental measurement error (the actual value of the error may be smaller than the possible limiting error).

When using measuring instruments, one can calculate the possible limiting absolute (Δ_lim) or relative (B_lim) measurement error. For example, the possible limiting absolute measurement error is found as the sum of the possible limiting random (Δ_rand) and non-excluded systematic (Δ_sys) errors:

Δ_lim = Δ_rand + Δ_sys.
For small samples (n ≤ 20) from a general population obeying the normal distribution law, the random possible limiting measurement error can be estimated as follows:

Δ_rand = ε = t(P, f)·s(ȳ),

where ε is the confidence interval half-width for the corresponding probability P, and t(P, f) is the quantile of the Student distribution for the probability P and the sample size n, i.e., with the number of degrees of freedom f = n - 1.

The absolute possible limiting measurement error in this case is equal to:

Δ_lim = Δ_rand + Δ_sys.

If the measurement results do not obey the normal distribution law, then the error is estimated using other formulas.
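A sketch of the estimate for the normal-law case on a hypothetical small sample, with the Student quantile taken from scipy:

```python
import numpy as np
from scipy import stats

y = np.array([10.03, 9.98, 10.05, 10.01, 9.97])   # hypothetical single results
n = y.size
s_mean = y.std(ddof=1) / np.sqrt(n)               # SD of the mean
t = stats.t.ppf(1 - (1 - 0.95) / 2, df=n - 1)     # Student quantile, P = 0.95
eps = t * s_mean                                  # random limiting error
print(f"y = {y.mean():.3f} +/- {eps:.3f}  (P = 0.95)")
```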

The determination of the quantity Δ_sys depends on whether the measuring instrument has an accuracy class. If the measuring instrument has no accuracy class, then the minimum scale division (or half of it) of the measuring instrument can be taken as Δ_sys. For a measuring instrument with a known accuracy class, the absolute permissible systematic error of the measuring instrument (Δ_perm) can be taken as Δ_sys:

Δ_sys = Δ_perm.

The value of Δ_perm is calculated from the formulas given in Table 2.

For many measuring instruments, the accuracy class is indicated in the form of numbers A·10^n, where A equals 1; 1.5; 2; 2.5; 4; 5; or 6, and n equals 1; 0; -1; -2, etc., which show the value of the maximum permissible systematic error (E_y,perm) together with special signs indicating its type (relative, reduced, constant, proportional).

If the components of the absolute systematic error of the arithmetic mean of the measurement result are known (for example, the instrumental error, the method error, etc.), then it can be estimated by the formula

Δ_sys(ȳ) = k·√( Σ from i = 1 to m of Δ_i^2 ),

where m is the number of components of the systematic error of the mean measurement result; k is a coefficient determined by the probability P and the number m; Δ_i is the absolute systematic error of an individual component.

Individual components of the error can be neglected if the appropriate conditions are met.

Table 2

Examples of designation of accuracy classes of measuring instruments

The accuracy class designation (given in the documentation and on the measuring instrument) determines the calculation formula and the value of the maximum permissible systematic error, with the following characteristics:

- reduced permissible systematic error as a percentage of the nominal value of the measured quantity, which is determined by the type of scale of the measuring instrument;

- reduced permissible systematic error as a percentage of the length of the used scale of the measuring instrument (A) when obtaining single values of the measured quantity;

- constant relative permissible systematic error as a percentage of the obtained single value of the measured quantity;

- (designation of the form c = 0.02; d = 0.01) proportional relative permissible systematic error in fractions of the obtained single value of the measured quantity, which increases with an increase in the final value of the measurement range of the given measuring instrument (y_k) or a decrease in the single value of the measured quantity (y_i).

Systematic errors can be neglected if the inequality

Δ_sys / s(ȳ) ≤ 0.8

holds; in this case one takes Δ_lim = Δ_rand.

Random errors can be neglected provided

Δ_sys / s(ȳ) ≥ 8;

in this case one takes Δ_lim = Δ_sys.

In order for the total measurement error to be determined only by the systematic errors, the number of repeated measurements is increased. The minimum number of repeated measurements required for this (n_min) can be calculated only when the standard deviation σ of the general population of single results is known; from the condition above, with s(ȳ) = σ/√n, it follows that

n_min = (8·σ / Δ_sys)^2.

The evaluation of measurement errors depends not only on the conditions of measurement, but also on the type of measurement (direct or indirect).

The division of measurements into direct and indirect is rather conditional. In what follows, by direct measurements we will understand measurements whose values are taken directly from the experimental data, for example, read from the instrument scale (a well-known example of a direct measurement is measuring temperature with a thermometer). Indirect measurements are those whose result is obtained on the basis of a known relationship between the desired quantity and quantities determined as a result of direct measurements. In this case, the result of an indirect measurement is obtained by calculation as the value of a function y = f(x_1, x_2, …, x_j, …, x_k), whose arguments are the results of the direct measurements (x_1, x_2, …, x_j, …, x_k).

It is necessary to know that the errors of indirect measurements are always greater than the errors of individual direct measurements.

The errors of indirect measurements are estimated according to the corresponding laws of error accumulation (with k ≥ 2).

The law of accumulation of random errors of indirect measurements is as follows:

s^2(y) = Σ from j = 1 to k of (∂f/∂x_j)^2·s^2(x_j).

The law of accumulation of possible limiting absolute systematic errors of indirect measurements is represented by the following dependences:

Δy = Σ from j = 1 to k of |∂f/∂x_j|·Δx_j;

Δy = √( Σ from j = 1 to k of (∂f/∂x_j)^2·Δx_j^2 ).

The law of accumulation of possible limiting relative systematic errors of indirect measurements has the following form:

E_y = Σ from j = 1 to k of |∂(ln f)/∂x_j|·Δx_j;

E_y = √( Σ from j = 1 to k of (∂(ln f)/∂x_j)^2·Δx_j^2 ).

In cases where the desired value y is calculated as a function of the results of several independent direct measurements of the form y = k_0·x_1^a·x_2^b·…·x_k^c, the law of accumulation of limiting relative systematic errors of indirect measurements takes a simpler form:

E_y = |a|·E_1 + |b|·E_2 + … + |c|·E_k;

E_y = √( a^2·E_1^2 + b^2·E_2^2 + … + c^2·E_k^2 ).
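A sketch of the random-error accumulation law for a hypothetical indirect measurement (density from mass and volume; the numbers are illustrative):

```python
import numpy as np

# Hypothetical indirect measurement: density rho = m / V.
m, s_m = 25.431, 0.002    # mass, g, and its SD
V, s_V = 10.12, 0.03      # volume, ml, and its SD

drho_dm = 1.0 / V                       # partial derivatives of f = m / V
drho_dV = -m / V ** 2
s_rho = np.sqrt((drho_dm * s_m) ** 2 + (drho_dV * s_V) ** 2)

# For the power form y = k0 * x1^a * x2^b the relative version applies:
# E_y^2 = a^2*E_1^2 + b^2*E_2^2; here rho = m^1 * V^-1, so
E_rho = np.sqrt((s_m / m) ** 2 + (s_V / V) ** 2)

print(f"rho = {m / V:.4f} +/- {s_rho:.4f} g/ml   (E = {E_rho:.4%})")
```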

Measurement errors determine the accuracy, reproducibility, and correctness of the measurements.

Accuracy is the higher, the smaller the measurement error.

Reproducibility of the measurement results improves with a decrease in the random measurement errors.

Correctness of the measurement result increases with a decrease in the residual systematic measurement errors.

Learn more about the theory of measurement errors and their features yourself. I draw your attention to the fact that modern forms of presentation of the final results of measurements necessarily require the reporting of the errors or uncertainties of measurement (secondary data). In this case, measurement errors should be presented as numbers containing no more than two significant digits.

1.2.10. Processing indirect measurements.

In indirect measurements, the desired value of the physical quantity Y is found on the basis of the results X_1, X_2, …, X_i, …, X_n of direct measurements of other physical quantities related to the desired one by a known functional dependence φ:

Y = φ(X_1, X_2, …, X_i, …, X_n). (1.43)

Assuming that X_1, X_2, …, X_i, …, X_n are the corrected results of the direct measurements, and that the methodological errors of the indirect measurement can be neglected, the result of the indirect measurement can be found directly by formula (1.43).

If ΔX_1, ΔX_2, …, ΔX_i, …, ΔX_n are the errors of the results of the direct measurements of the quantities X_1, X_2, …, X_i, …, X_n, then the error Δ of the result Y of the indirect measurement can, in the linear approximation, be found by the formula

Δ = Σ from i = 1 to n of (∂φ/∂X_i)·ΔX_i. (1.44)

The term

Δ_i = (∂φ/∂X_i)·ΔX_i (1.45)

is the component of the error of the indirect measurement result caused by the error ΔX_i of the result X_i of a direct measurement; it is called a partial error, and the approximate formula (1.44) is called the law of accumulation of partial errors. (1K22)

To estimate the error Δ of the result of an indirect measurement, it is necessary to have some information about the errors ΔX_1, ΔX_2, …, ΔX_i, …, ΔX_n of the results of the direct measurements.

Usually, the limiting values of the error components of the direct measurements are known. For example, for the error ΔX_i the following are known: the limit of the basic error, the limits of the additional errors, the limit of the non-excluded residuals of the systematic error, etc. The error ΔX_i is equal to the sum of these components,

ΔX_i = Σ over j of Δ_ij,

and the limiting value ΔX_i,p of this error is the sum of the limits:

ΔX_i,p = Σ over j of Δ_ij,p. (1.46)

Then the limiting value Δ_p of the error of the result of the indirect measurement for the confidence level P = 1 can be found by the formula

Δ_p = Σ from i = 1 to n of |∂φ/∂X_i|·ΔX_i,p. (1.47)

The boundary value Δ_g of the error of the result of the indirect measurement for the confidence level P = 0.95 can be found using the approximate formula (1.41). Taking into account (1.44) and (1.46), we obtain:

Δ_g ≈ 1.1·√( Σ from i = 1 to n of (∂φ/∂X_i)^2 · Σ over j of Δ_ij,p^2 ). (1.48)

After calculating Δ p or Δ g, the result of indirect measurement should be written in standard form (respectively, (1.40) or (1.42)). (1P3)
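A sketch of formulas (1.45), (1.47) and (1.48) for a hypothetical dependence y = X_1·X_2 (the numbers are illustrative; the 1.1 coefficient corresponds to P = 0.95):

```python
import numpy as np

X1, dX1p = 2.50, 0.02     # direct result and its limiting error (hypothetical)
X2, dX2p = 0.80, 0.01

# Illustrative dependence y = phi(X1, X2) = X1 * X2.
dy_dX1, dy_dX2 = X2, X1
parts = np.array([dy_dX1 * dX1p, dy_dX2 * dX2p])   # partial errors, eq. (1.45)

delta_p = np.sum(np.abs(parts))                    # eq. (1.47), P = 1
delta_g = 1.1 * np.sqrt(np.sum(parts ** 2))        # cf. eq. (1.48), P = 0.95
print(f"y = {X1 * X2:.3f}   Delta_p = {delta_p:.4f}   Delta_g = {delta_g:.4f}")
```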

QUESTIONS:

1. For what tasks are measuring instruments used? Which metrological characteristics of measuring equipment do you know?

2. By what criteria are the metrological characteristics of measuring instruments classified?

3. What component of the error of a measuring instrument is called basic?

4. What component of the error of a measuring instrument is called additional?

5. Define the absolute, relative, and reduced errors of measuring instruments.

6. Define the absolute error of a measuring transducer at the input and at the output.

7. How would you experimentally determine the errors of a measuring transducer at the input and at the output?

8. How are the absolute errors of a measuring transducer at the input and at the output interconnected?

9. Define the additive, multiplicative, and nonlinear error components of measuring equipment.

10. Why is the nonlinear component of the error of measuring equipment sometimes called the linearity error? For which transducer conversion functions does it make sense?

11. What information about the error of a measuring instrument does its accuracy class give?

12. Formulate the law of accumulation of partial errors.

13. Formulate the error summation problem.

15. What is the corrected value of a measurement result?

16. What is the purpose of processing measurement results?

17. How does one calculate the limiting value Δ_p of the error of a direct measurement result for the confidence level P = 1 and its boundary value Δ_g for P = 0.95?

18. What measurement is called indirect? How is the result of an indirect measurement found?

19. How does one calculate the limiting value Δ_p of the error of an indirect measurement result for the confidence level P = 1 and its boundary value Δ_g for P = 0.95?

20. Give examples of methodological errors of direct and indirect measurements.

Control works on subsection 1.2 are given in (1KR1).

REFERENCES for section 1.

2. METHODS FOR MEASURING ELECTRIC QUANTITIES

2.1. Measurement of voltages and currents.

2.1.1. General information.

When choosing a means of measuring electrical voltages and currents, it is necessary, first of all, to take into account:

The kind of measured physical quantity (voltage or current);

The presence and nature of the dependence of the measured quantity on time in the observation interval (whether or not it depends on time; whether the dependence is a periodic or non-periodic function, etc.);

The range of possible values of the measured quantity;

The measured parameter (average value, effective value, maximum value in the observation interval, the set of instantaneous values in the observation interval, etc.);

The frequency range;

The required measurement accuracy;

The maximum observation time interval.

In addition, it is necessary to take into account the ranges of values of the influencing quantities (ambient air temperature, supply voltage of the measuring instrument, output impedance of the signal source, electromagnetic interference, vibration, humidity, etc.), which depend on the conditions of the measurement experiment.

The ranges of possible values of voltages and currents are very wide. For example, currents can be of the order of 10^-16 A when measured in space and of the order of 10^5 A in the circuits of powerful power plants. This section deals mainly with voltage and current measurements in the ranges most common in practice: from 10^-6 to 10^3 V and from 10^-6 to 10^4 A.

To measure voltages, analog (electromechanical and electronic) and digital voltmeters (2K1), DC and AC compensators (potentiometers), analog and digital oscilloscopes, and measuring systems are used.

To measure currents, electromechanical ammeters (2K2) are used, as well as multimeters and measuring systems in which the measured current is first converted into a voltage proportional to it. In addition, currents are determined experimentally by an indirect method: the voltage caused by the passage of the current through a resistor of known resistance is measured.
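A minimal sketch of this indirect method, with invented values: the current follows from Ohm's law, and the relative error limits of the voltage measurement and of the resistor add up in the result.

```python
R = 0.10            # known shunt resistance, ohms (invented value)
U = 0.253           # measured voltage across it, V (invented value)

I = U / R           # Ohm's law: the indirectly measured current
rel_U, rel_R = 0.005, 0.002   # assumed relative error limits of U and R
print(f"I = {I:.2f} A, relative error limit ~ {(rel_U + rel_R):.1%}")
```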

2.1.2. Measurement of constant voltages by electromechanical devices.

To create voltmeters, the following measuring mechanisms (2K3) are used: magnetoelectric (2K4), electromagnetic (2K5), electrodynamic (2K6), ferrodynamic (2K7), and electrostatic (2K8).

In a magnetoelectric measuring mechanism, the torque is proportional to the current in the moving coil. To build a voltmeter, an additional resistance is connected in series with the coil winding. The measured voltage applied to this series connection is proportional to the current in the winding; therefore, the scale of the instrument can be graduated in units of voltage. The direction of the torque depends on the direction of the current, so the polarity of the voltage applied to the voltmeter must be observed.

The input resistance R_in of a magnetoelectric voltmeter depends on the upper limit U_k of the measuring range and on the full-deflection current I_fd, the current in the coil winding at which the pointer deflects to the end of the scale (i.e., settles at the mark U_k). Obviously,

R_in = U_k / I_fd. (2.1)

In multi-range instruments, it is often not R_in that is normalized but the current I_fd. Knowing the voltage U_k for the measurement range used in the experiment, R_in can be calculated by formula (2.1). For example, for a voltmeter with U_k = 100 V and I_fd = 1 mA, R_in = 10^5 Ω.
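A short Python sketch reproducing the numerical example above; the function name is ours.

```python
def input_resistance(U_k: float, I_fd: float) -> float:
    """Formula (2.1): R_in = U_k / I_fd (volts / amperes -> ohms)."""
    return U_k / I_fd

# The example from the text: U_k = 100 V, I_fd = 1 mA gives R_in = 10^5 ohms.
print(input_resistance(100.0, 1e-3))   # 100000.0
```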

A similar circuit is used to build electromagnetic, electrodynamic, and ferrodynamic voltmeters, except that the additional resistance is connected in series with the winding of the fixed coil of the electromagnetic measuring mechanism, or with the series-connected windings of the moving and fixed coils of the electrodynamic or ferrodynamic measuring mechanisms. The total deflection currents of these measuring mechanisms are usually significantly higher than that of the magnetoelectric one, so the input resistances of such voltmeters are lower.

Electrostatic voltmeters use an electrostatic measuring mechanism. The measured voltage is applied between fixed and movable plates that are insulated from each other. The input resistance is determined by the insulation resistance (about 10^9 Ω).

The most common electromechanical voltmeters, with accuracy classes 0.2, 0.5, 1.0, and 1.5, measure DC voltages in the range from 0.1 to 10^4 V. To measure higher voltages (usually above 10^3 V), voltage dividers (2K9) are used. To measure voltages below 0.1 V, magnetoelectric galvanometers (2K10) and devices based on them (for example, photogalvanometric devices) can be used, although digital voltmeters are more expedient.

2.1.3. Measurement of direct currents by electromechanical devices.

To create ammeters, the following measuring mechanisms (2K3) are used: magnetoelectric (2K4), electromagnetic (2K5), electrodynamic (2K6), and ferrodynamic (2K7).

In the simplest single-range ammeters, the measured-current circuit consists of the moving coil winding (magnetoelectric measuring mechanism), the fixed coil winding (electromagnetic measuring mechanism), or the series-connected windings of the moving and fixed coils (electrodynamic and ferrodynamic measuring mechanisms). Thus, unlike voltmeter circuits, they contain no additional resistances.

Multi-range ammeters are built on the basis of single-range ones, using various techniques to reduce the sensitivity, for example, passing the measured current through only part of the coil winding or connecting the coil windings in parallel. Shunts are also used: resistors of relatively low resistance connected in parallel with the windings.

The most common electromechanical ammeters, with accuracy classes 0.2, 0.5, 1.0, and 1.5, measure direct currents in the range from 10^-6 to 10^4 A. Currents below 10^-6 A can be measured with magnetoelectric galvanometers (2K10) and devices based on them (for example, photogalvanometric devices).

2.1.4. Measurement of alternating currents and voltages by electromechanical devices.

Electromechanical ammeters and voltmeters are used to measure the effective values of periodic currents and voltages. They are built on electromagnetic, electrodynamic, ferrodynamic, and (voltmeters only) electrostatic measuring mechanisms. In addition, electromechanical ammeters and voltmeters include devices based on a magnetoelectric measuring mechanism with converters of alternating current or voltage to direct current (rectifier and thermoelectric devices).

The measuring circuits of electromagnetic, electrodynamic and ferrodynamic ammeters and AC voltmeters practically do not differ from the circuits of similar DC devices. All these devices can be used to measure both direct and alternating currents and voltages.

The instantaneous value of the torque in these devices is determined by the square of the instantaneous value of the current in the coil windings, and the position of the pointer depends on the average value of the torque. Therefore, the device measures the effective (rms) value of the periodic current or voltage regardless of the waveform. The most common ammeters and voltmeters, with accuracy classes 0.2, 0.5, 1.0, and 1.5, measure alternating currents from 10^-4 to 10^2 A and voltages from 0.1 to 600 V in the frequency range from 45 Hz to 5 kHz.
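A short numerical sketch of this square-law property: the pointer settles on the mean torque, proportional to mean(i²), so the indicated value, the square root of that mean, is the rms value for any waveform. The waveforms are invented test cases.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100_000, endpoint=False)   # one period (normalized)
waveforms = {
    "sine":   np.sin(2 * np.pi * t),
    "square": np.sign(np.sin(2 * np.pi * t)),
}
for name, i in waveforms.items():
    # Mean torque ~ mean(i^2); the scale is graduated in the root of that value.
    rms = np.sqrt(np.mean(i ** 2))
    print(f"{name:6s} rms = {rms:.4f}")   # 0.7071 for sine, 1.0000 for square
```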

Electrostatic voltmeters can also be used to measure both constant voltages and the effective values of alternating voltages regardless of the waveform, since the instantaneous value of the torque in these devices is determined by the square of the instantaneous value of the measured voltage. The most common such voltmeters, with accuracy classes 0.5, 1.0, and 1.5, measure alternating voltages from 1 to 10^5 V in the frequency range from 20 Hz to 10 MHz.

Magnetoelectric ammeters and voltmeters designed for DC circuits cannot measure the effective values of alternating currents and voltages. Indeed, the instantaneous value of the torque in these devices is proportional to the instantaneous value of the current in the coil. With a sinusoidal current, the average value of the torque, and hence the instrument reading, is zero. If the current in the coil has a constant component, the reading is proportional to the average value of the current in the coil.

To create AC ammeters and voltmeters based on a magnetoelectric measuring mechanism, AC-to-DC converters based on semiconductor diodes or thermal converters are used. Fig. 2.1 shows one possible circuit of a rectifier-system ammeter, and Fig. 2.2 that of a thermoelectric one.

In a rectifier-system ammeter, the measured current i(t) is rectified and passed through the coil winding of the magnetoelectric measuring mechanism IM. The reading of the device is proportional to the value of the current averaged modulo over the period T:

I_av = (1/T) ∫_0^T |i(t)| dt. (2.2)

The value I_av is proportional to the effective value of the current; the proportionality factor, however, depends on the form of the function i(t). All rectifier-system devices are therefore calibrated in effective values of currents (or voltages) of sinusoidal form and are not intended for measurements in circuits with currents of arbitrary waveform.
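A sketch of the resulting waveform error, assuming an ideal rectifier instrument: the scale multiplies the average-modulo value (2.2) by the form factor of a sinusoid, π/(2√2) ≈ 1.11, so any other waveform is mis-read. The triangle wave is an invented test case.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100_000, endpoint=False)   # one period (normalized)
k_sin = np.pi / (2.0 * np.sqrt(2.0))                  # sinusoidal form factor, ~1.111

waveforms = {
    "sine":     np.sin(2 * np.pi * t),
    "triangle": 2.0 * np.abs(2.0 * t - 1.0) - 1.0,
}
for name, i in waveforms.items():
    i_av = np.mean(np.abs(i))          # formula (2.2): average modulo over T
    reading = k_sin * i_av             # what the sinusoid-calibrated scale shows
    true_rms = np.sqrt(np.mean(i ** 2))
    print(f"{name:8s} reading = {reading:.4f}, true rms = {true_rms:.4f}")
```

For the triangle wave the reading comes out about 4 % below the true rms value, which is exactly the kind of waveform error the text warns about.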

In a thermoelectric-system ammeter, the measured current i(t) passes through the heater of the thermal converter TP. As the heater warms up, a thermo-EMF arises at the free ends of the thermocouple, driving a direct current through the coil winding of the magnetoelectric measuring mechanism IM. The value of this current depends nonlinearly on the effective value I of the measured current i(t) and depends little on its shape and spectrum.

The voltmeter circuits of the rectifier and thermoelectric systems differ from the ammeter circuits by an additional resistance connected in series in the measured-current circuit i(t); it acts as a converter of the measured voltage into a current.

The most common rectifier-system ammeters and voltmeters, with accuracy classes 1.0 and 1.5, measure alternating currents from 10^-3 to 10 A and voltages from 1 to 600 V in the frequency range from 45 Hz to 10 kHz.

The most common thermoelectric-system ammeters and voltmeters, with accuracy classes 1.0 and 1.5, measure alternating currents from 10^-4 to 10^2 A and voltages from 0.1 to 600 V in the frequency range from 1 Hz to 50 MHz.

Usually, devices of rectifier and thermoelectric systems are made multi-range and combined, which allows them to be used to measure both alternating and direct currents and voltages.

2.1.5. Measurement of constant voltages by analog electronic voltmeters.

Unlike electromechanical analog voltmeters (2K11), electronic voltmeters incorporate voltage amplifiers. In these devices, the informative parameter of the measured voltage is converted into a direct current in the coil winding of a magnetoelectric measuring mechanism (2K4), whose scale is graduated in units of voltage.

The amplifier of an electronic voltmeter must have a stable gain in a certain frequency range, from some lower frequency f_l to an upper frequency f_u. If f_l = 0, such an amplifier is usually called a DC amplifier; if f_l > 0 and the gain at f = 0 is zero, it is called an AC amplifier.

A simplified circuit of an electronic DC voltmeter consists of three main components: an input voltage divider (2K9), a DC amplifier connected to its output, and a magnetoelectric voltmeter. The high-resistance voltage divider and the DC amplifier provide a high input resistance of the electronic voltmeter (of the order of 1 MΩ). The division and gain factors can be adjusted in discrete steps, which makes it possible to build multi-range voltmeters. Owing to the high gain, electronic voltmeters provide higher sensitivity than electromechanical ones.

A feature of electronic DC voltmeters is drift: slow changes in the voltmeter readings at a constant measured voltage (1Q14), caused by changes in the parameters of the elements of the DC amplifier circuits. Drift is most significant when measuring low voltages. Therefore, before starting measurements, the zero reading of the voltmeter must be set with special adjusting elements while its input is short-circuited.

If an alternating periodic voltage is applied to the voltmeter in question, then, owing to the properties of the magnetoelectric measuring mechanism, it will measure the constant component of this voltage, provided that the alternating component is not too large and the voltmeter amplifier operates in its linear mode.

The most common analog electronic DC voltmeters measure voltages in the range from 10^-6 to 10^3 V. The limits of the basic reduced error depend on the measurement range and are usually ±(0.5 - 5.0)%.

2.1.6. Measurement of alternating voltages by analog electronic voltmeters.

Analog electronic voltmeters are mainly used to measure the effective values ​​of periodic voltages in a wide frequency range.

The main difference between the circuit of an electronic AC voltmeter and the DC voltmeter circuit considered above is the presence of an additional unit: a converter of the informative parameter of the alternating voltage into a direct voltage. Such converters are often referred to as "detectors".

There are detectors of the amplitude, average-modulo, and effective values of a voltage. The constant voltage at the output of the first is proportional to the amplitude of the input voltage, that of the second to its average-modulo value, and that of the third to its effective value.

Each of these three groups of detectors can, in turn, be divided into two: detectors with an open input and detectors with a closed input. For detectors with an open input, the output voltage depends on the DC component of the input voltage; for detectors with a closed input, it does not. Obviously, if the circuit of an electronic voltmeter contains a detector with a closed input or an AC amplifier, the readings of such a voltmeter do not depend on the constant component of the measured voltage. Such a voltmeter is advantageous where only the variable component of the measured voltage carries useful information.

Simplified diagrams of amplitude detectors with open and closed inputs are shown in Figs. 2.3 and 2.4.


When a voltage u(t) = U_m sin ωt is applied to the input of an amplitude detector with an open input, the capacitor is charged to the voltage U_m, which turns off the diode. A constant voltage U_m is thereby maintained at the detector output. If an arbitrary voltage is applied to the input, the capacitor is charged to the maximum positive value of that voltage.

When a voltage u(t) = U_m sin ωt is applied to the input of an amplitude detector with a closed input, the capacitor is likewise charged to the voltage U_m, and the output voltage is u_out(t) = U_m + U_m sin ωt. If this voltage, or a current proportional to it, is applied to the coil winding of a magnetoelectric measuring mechanism, the instrument readings will depend on the constant component of this voltage, which equals U_m (2K4). When a voltage u(t) = U_av + U_m sin ωt is applied to the input, where U_av is the average value of u(t), the capacitor is charged to the voltage U_m + U_av, and the output voltage settles at u_out(t) = U_m + U_m sin ωt, independent of U_av.

Examples of average-modulo and effective-value detectors were considered in subsection 2.1.4 (Figs. 2.1 and 2.2, respectively).

Amplitude and average-modulo detectors are simpler than effective-value (rms) detectors, but voltmeters based on them can be used to measure sinusoidal voltages only. The point is that their readings, depending on the type of detector, are proportional to the average-modulo or amplitude value of the measured voltage. Therefore, such analog electronic voltmeters can be calibrated in effective values only for a particular waveform of the measured voltage; this is done for the most common one, the sinusoidal voltage.
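A numeric sketch of that calibration restriction for an amplitude-detector voltmeter: the scale divides the stored peak U_m by √2, which is correct only for a sinusoid. For a square wave of the same peak (an invented test case) the instrument under-reads by about 29 %.

```python
import math

U_m = 10.0                           # peak value held by the detector, V (invented)
reading = U_m / math.sqrt(2.0)       # sinusoid-calibrated scale shows ~7.07 V

true_rms_square = U_m                # rms of a square wave equals its peak
error = (reading - true_rms_square) / true_rms_square
print(f"reading = {reading:.2f} V, error for a square wave = {error:+.1%}")
```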

The most common analog electronic voltmeters measure voltages from 10^-6 to 10^3 V in the frequency range from 10 to 10^9 Hz. The limits of the basic reduced error depend on the measurement range and on the frequency of the measured voltage and are usually ±(0.5 - 5.0)%.

The measurement procedure with electronic voltmeters differs from that with electromechanical ones. This is due to the presence in them of electronic amplifiers with DC power supplies, which usually operate from the AC mains.


If, however, terminal 6 is connected to input terminal 1 of the voltmeter and, for example, the voltage U_65 is measured, the measurement result will be distorted by the interference voltage, whose value depends on the parameters of the equivalent circuits in Figs. 2.5 and 2.6.

In a direct measurement of the voltage U_54, interference will distort the result regardless of how the voltmeter is connected. This can be avoided by an indirect measurement: the voltages U_64 and U_65 are measured, and U_54 = U_64 - U_65 is calculated. However, the accuracy of such a measurement may not be high enough, especially if U_64 ≈ U_65. (2K12)
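A short sketch of that caveat with invented values: by the law of accumulation of partial errors, the limit errors of U_64 and U_65 add in the difference, so the relative error of U_54 grows sharply as the two voltages approach each other.

```python
U_64, U_65 = 5.00, 4.90      # measured voltages, V (invented values)
d_64 = d_65 = 0.01           # limit error of each direct measurement, V (0.2 %)

U_54 = U_64 - U_65           # indirect result: 0.10 V
d_54 = d_64 + d_65           # limit error of the difference: 0.02 V

print(f"U_54 = {U_54:.2f} V +/- {d_54:.2f} V "
      f"(relative error {d_54 / U_54:.0%} versus 0.2% for each voltage)")
```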