Axioms of metrology. Fundamentals of measurement theory and techniques. The statements that underlie measurement

In a real measurement process, owing to the influence of random factors, there is always a scatter in the random readings of one instrument or of different instruments, or a scatter in the random measured values obtained by implementing one or several measurement procedures for the same quantity. The purpose of any measurement is to find the true value of the measured quantity, that is, the value that corresponds to the definition of the quantity being measured. From the formulated definition it should be clear under what conditions the quantity takes on the single, unchanging value that corresponds to the purpose of the measurement.

It must be admitted that a measured value (or an instrument reading) is always a realization of a random variable at a particular moment in time, related to the true value only by a probabilistic dependence; this is, in effect, an axiom. A multiple measurement can therefore be regarded as a series of single measurements within a certain time interval, in each of which one instrument reading is recorded (or one measured value of the quantity is obtained when a measurement procedure is implemented).

When constructing a theory of measurements, two general properties of any measurement are taken into account:

1) the uncertainty of the true value of the measured quantity;

2) the uncertainty of the mathematical expectation of the measured values (the expected value).

On the basis of these two properties of measurement, two postulates are placed at the foundation of metrology:

1) the true value of the measured quantity exists, it is constant (at the moment of measurement), and it cannot be determined;

2) the mathematical expectation of the random measured values of a quantity exists, it is constant, and it cannot be determined.

From these postulates it follows that the randomness of the measured value generates uncertainty in the deviation of any average measured value both from the true value of the quantity and from the mathematical expectation of the measured values.
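As a purely illustrative sketch (not part of the classical formulation), the following Python simulation makes both deviations visible by pretending, as only a simulation can, that the true value is known; all numbers are invented:

```python
import random

random.seed(1)

TRUE_VALUE = 10.000       # known here only because this is a simulation
SYSTEMATIC_SHIFT = 0.050  # makes the expectation differ from the true value
SIGMA = 0.020             # spread of the random readings

# Mathematical expectation of the readings: true value plus systematic shift.
expectation = TRUE_VALUE + SYSTEMATIC_SHIFT

# One series of single measurements within a time interval.
readings = [random.gauss(expectation, SIGMA) for _ in range(10)]
average = sum(readings) / len(readings)

print(f"average reading            : {average:.4f}")
print(f"deviation from expectation : {average - expectation:+.4f}")
print(f"deviation from true value  : {average - TRUE_VALUE:+.4f}")
```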

Two further axioms of metrology are distinguished:

without a measuring instrument that stores the unit of the quantity, measurement is impossible;

without a priori information (about the object, standards, means and conditions of measurement), measurement is impossible.

Two corollaries follow from these postulates:

Corollary 1: a true deviation of the measured value of a quantity from its true value (the true value of the correction) exists, and it is impossible to determine it;

Corollary 2: transfer of the unit of a quantity to a measuring instrument without error is impossible.

In international documents on metrology the word "true" is sometimes omitted, and the term "quantity value" is used; the concepts "true quantity value" and "quantity value" are considered equivalent.

In S.G. Rabinovich's monograph the following postulates of metrology are proposed: "the true value of the measured quantity exists (1), it is unique (2), it is constant (3), and it cannot be determined (4)".

Measurements of physical quantities

Man, as an integral part of nature, comes to know the surrounding physical world mainly by measuring quantities. The theory of knowledge, epistemology, belongs to philosophy, where the categories of quality and quantity are considered; these categories are used above in the definition of the concept of "quantity".

Reliable initial information obtained by measuring quantities, parameters and indicators is the basis for any form of management, analysis, forecasting, planning, control and regulation. It is equally important in the study of natural resources, in monitoring their rational use, and in protecting the environment and ensuring environmental safety.

Measurements play a huge role in modern society; in developed countries, up to 10% of social labor is spent on them.

Measurement is defined as "the process of experimentally obtaining one or more quantity values that can reasonably be attributed to a quantity". Here the word "one" should be regarded as an exception, applicable when the information on the error is generally known (by default) and, for simplicity, is not indicated with the measurement result; otherwise the single stated measured value would have to be considered true.

Measurement is also defined as the set of operations performed in order to determine the quantitative value of a quantity. This definition is formulated in the Federal Law. Unfortunately, it leaves freedom in interpreting the phrase "quantitative value of a quantity" and does not rule out the presentation of only one measured quantity value.

Earlier, measurement was defined as the process of comparing a quantity with its value taken as a unit. In our opinion, this definition adequately reflects the essence of the measurement process. Some sources also note that "measurement is a refinement of the value of the measured quantity".

There is a more general definition of the concept of "measurement": obtaining, on the numerical axis, an abstract reflection of a real property of the measurement object under the conditions of physical reality in which it is situated. This abstract reflection is a number (a mathematical abstraction).

Measurement presupposes a description of the quantity in accordance with the intended use of the measurement result, a measurement procedure, and a measuring instrument operating according to the regulated measurement procedure, with the measurement conditions taken into account.

Measurement is carried out on the basis of certain phenomena of the material world, called the measurement principle. An example is the use of gravitational attraction when measuring the mass of objects, substances and materials by weighing.

To implement the measurement principle, a measurement method is used: a technique or a set of techniques for comparing the measured quantity with its unit or correlating it with a scale. A distinction is made between the direct-assessment method and comparison methods. Comparison methods, in turn, are divided into the differential (null) method, the substitution method and the coincidence method.

Measured quantity (measurand): the quantity to be measured. It is a parameter (or a functional of parameters) of the model of the measurement object, expressed in units of the quantity or in relative units, with the measurement conditions indicated, and accepted by the subject as the quantity being measured by definition. For example, the length of a steel bar is the shortest distance between its plane-parallel end surfaces at a temperature of (20 ± 1) °C.

Measurement object - a material object that is characterized by one or more measured quantities.

Thus, it is necessary to distinguish clearly between the concepts "quantity" and "measured quantity", which differ significantly in meaning and definition. The concept of a quantity belongs to the philosophical category of the "general" and is formulated for a set of objects, as it were for any measurements of the quantity. The concept of a measured quantity belongs to the category of the "particular" and is formulated for a selected model of a specific object, or of a set of objects of the same type, under fixed measurement conditions.

Taking into account the imperfection of standards, working measuring instruments and the measuring process as a whole, the expression for the true value of the measured quantity X_true at a fixed moment in time can theoretically be represented by the equation:

X_true = X_ind + θ_true,    (1)

where X_ind is the indication of the measuring instrument (the measured value of the quantity);

θ_true is the true value of the correction to the instrument reading under the working conditions of measurement (taken with either a "+" or a "−" sign).

Since the true value of the quantity is never known, the true value of the correction cannot be determined either (see Corollary 2 above). This means that the expression

θ_true = X_true − X_ind    (2)

can be of practical value only in mathematical modelling of the measuring process, when the true value of the quantity can be specified with an error determined only by the capabilities (word length) of the computing hardware. The true value of the correction cannot be called "the error with the opposite sign", since it can never and in no way be used to describe a real measuring process.

It is often necessary to bring the measured value of a quantity as close as possible to its true value. To this end, the readings of the instrument that stores the unit are corrected by introducing additive corrections determined under the following conditions:

1) normal conditions - to refine the unit of the quantity previously transferred to the instrument from a standard;

2) working conditions - to take into account the change in the instrument's readings relative to the readings of the same instrument under normal conditions.

The first type of correction (θ_n) to the readings of the instrument that stores the unit is evaluated during its calibration under normal conditions as the difference between the reference value X_ref and the indication (measured value) X_ind,n, according to the formula:

θ_n = X_ref − X_ind,n.    (3)

If, when a constant quantity reproduced by the standard is measured, a scatter of readings is observed, then a scatter of corrections is also observed, and the average value of the correction must be calculated.

The second type of correction, θ_w, to the readings of the instrument that stores the unit is evaluated during its calibration as the difference between the value X_ind,n measured under normal conditions and the value X_ind,w measured under working conditions, according to the formula:

θ_w = X_ind,n − X_ind,w.    (4)

If a scatter of the instrument readings is observed here as well, the correction is calculated from the average values of the quantity under normal and working conditions.

To obtain the final measured value of the quantity, the correction of the first type and all obtained corrections of the second type must be added, with their signs, to the instrument readings.
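A minimal numerical sketch of formulas (1), (3) and (4) in Python; the reference value and readings are invented, and the variable names are ours rather than taken from any standard:

```python
# Correction of the first type, formula (3): evaluated during calibration
# under normal conditions against the value reproduced by the standard.
x_ref = 100.000        # reference value (invented)
x_ind_normal = 99.980  # instrument reading under normal conditions
theta_n = x_ref - x_ind_normal          # formula (3)

# Correction of the second type, formula (4): change of the readings
# between normal and working conditions.
x_ind_working = 99.965                  # reading under working conditions
theta_w = x_ind_normal - x_ind_working  # formula (4)

# Final measured value: the reading plus all corrections with their signs.
x_measured = x_ind_working + theta_n + theta_w
print(f"theta_n = {theta_n:+.3f}, theta_w = {theta_w:+.3f}")
print(f"corrected measured value = {x_measured:.3f}")
```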

A measurement takes some time, during which both the measured quantity itself and the measuring instrument may change. During this time many random readings are recorded, and their average is taken as the measured value.

It can be argued that a real quantity is measured, while the measured value is attributed to a parameter of the object model. First, a quantity is chosen to describe the property of the object, together with the standard of the unit of this quantity. Then the definition of the measured parameter of the model of this object is formulated, and a measurement procedure for this parameter is constructed on the basis of a single indication, or of an average over a set of readings, of a measuring instrument.

The standard of the unit of the quantity does not participate directly in the measurement process. It is assumed that the measuring instrument used in the measurement already stores the unit previously transferred to it from the standard.

At present, on the basis of probability theory and mathematical statistics, two approaches to the construction of a general theory of measurement (to the mathematical description of a real measuring process) are taking shape:

1) based on the concept of uncertainty;

2) based on the concept of error.

Uncertainty concept

Since the true value is always unknown, an interval of possible true values is predicted around the random measured value of the quantity, each of which could reasonably be attributed to the measured quantity with a different probability. In practice, a single (for example, average) measured value is usually stated, but together with it indicators are given that reflect the degree of uncertainty of the possible deviation of this measured value from the unknown true value of the quantity.

The concept of measurement uncertainty rests on ideas underlying the USSR state standard GOST 8.207-76, which is still in force today. It is built on the logical sequence: "measurement uncertainty (as a general property) - uncertainty indicators - evaluation of these indicators".

Measurement uncertainty is due to two main causes:

1) the impossibility of taking an infinite number of readings (the number of measured values is limited);

2) limited knowledge of all the systematic effects of the real measuring process that influence the measured value, including limited knowledge of the standard of the unit and of the measurement conditions.

After all known corrections have been introduced, there remains an uncertainty in the deviation of the most probable estimate of the measured quantity from its true value, expressed by a combined indicator.

According to the ISO definition of 1995, "measurement uncertainty is a parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand".

According to the ISO definition of 2008, "measurement uncertainty is a non-negative parameter characterizing the dispersion of the quantity values being attributed to a measurand, based on the measurement information".

From these definitions it follows that the numerical parameter reflects the scatter of the values of the quantity. Such a set of scattered values can be expressed only as an interval on the number axis; in practice, this interval has always been called the error.

However, ISO proposes to characterize measurement uncertainty by the following three indicators bearing the word "uncertainty":

1) the standard uncertainty, expressed as a standard deviation (SD);

2) the combined standard uncertainty;

3) the expanded uncertainty - the product of the combined standard uncertainty and a coverage factor that depends on the probability.

These uncertainty indicators can be evaluated by statistical methods (Type A) and by probabilistic, non-statistical methods based on available information (Type B).
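A sketch of how the three indicators might be computed for a short series of readings, assuming a Type B component taken from a certificate that states limits ±a with a uniform distribution; all numbers are invented:

```python
import math
import statistics

readings = [10.013, 10.009, 10.017, 10.011, 10.015]  # invented series
n = len(readings)
mean = statistics.mean(readings)

# 1) Standard uncertainty of the mean (Type A): sample SD / sqrt(n).
u_a = statistics.stdev(readings) / math.sqrt(n)

# Type B component from assumed certificate limits ±a, uniform distribution.
a = 0.005
u_b = a / math.sqrt(3)

# 2) Combined standard uncertainty for uncorrelated components.
u_c = math.sqrt(u_a**2 + u_b**2)

# 3) Expanded uncertainty with coverage factor k = 2 (about 95 % coverage
#    for an approximately normal distribution).
U = 2 * u_c

print(f"mean = {mean:.4f}, u_A = {u_a:.4f}, u_B = {u_b:.4f}")
print(f"u_c = {u_c:.4f}, U (k = 2) = {U:.4f}")
```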

In the uncertainty concept, the evaluation of the result of the measurements performed is separated from the comparison of the measured value with some other known value, for example a reference value. It is assumed that all possible corrections have been estimated and introduced before the measurement result is presented, and that their uncertainty indicators have also been reasonably estimated.

Abroad, mainly the three indicators bearing the word "uncertainty" are used to present a measurement result, and the word "error" is almost never used.

A disadvantage of the uncertainty concept is the contradiction built into the chosen indicators: the word "uncertainty" denotes something that is in principle indeterminable (incomputable), and yet it is proposed to determine it.

Error concept

The error concept is taken as the basis of Russian regulatory documents and rests on the notion of "measurement error", which since 2015 has been defined as "the difference between a measured quantity value and a reference quantity value". Earlier, GOST 16263-70 defined it as the difference between the measured quantity value and the true quantity value, and RMG 29-99 as the deviation of the measurement result from the true (actual) value of the quantity. It can be seen that the term "reference value" has become a substitute for the poorly chosen phrase "true (actual) value". The error concept is built on the logical sequence: "error - error characteristic - error model - error estimate".

The error is considered known if, for example, the reference value known from the calibration of the measuring instrument is taken as the reference. If the true value is taken as the reference, the error is considered unknown (indeterminable).

In this concept an attempt is made to combine, under the single term "error", two incompatible processes: one in which a random measured value is attributed to an unknown measured quantity, and one in which the same random measured value is compared with another, known value of the quantity. The ambiguity of the term "error", which in different situations may correspond either to a known (determinable) or to an unknown (indeterminable) value, makes it necessary to clarify the meaning of the concept in every specific situation. The contradiction remaining in the definition of the basic term does nothing to promote a clear understanding of the essence of the measuring process.

Obviously, for describing and presenting a measurement result, the term "measurement error" with the proposed definition can be used neither when the error is unknown nor when it is already known, since in the latter case a correction can always be introduced. Therefore, to present a measurement result a new term was needed: the "measurement error characteristic", that is, a characteristic of something that is fundamentally indeterminable and can only be estimated. As such a characteristic one often uses, for example, the "confidence limits": the interval within which the measurement error lies with a given probability. This is close to the concept of "expanded uncertainty" in the uncertainty concept.
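For comparison, a sketch of such confidence limits for a small series, using the Student t quantile (hard-coded for n − 1 = 4 degrees of freedom and P = 0.95); the readings are invented:

```python
import math
import statistics

series = [10.013, 10.009, 10.017, 10.011, 10.015]  # invented readings
n = len(series)
mean = statistics.mean(series)
s_mean = statistics.stdev(series) / math.sqrt(n)  # SD of the mean

T_95_4DF = 2.776  # Student t quantile, P = 0.95, 4 degrees of freedom
half_width = T_95_4DF * s_mean

print(f"result: {mean:.4f} ± {half_width:.4f} (P = 0.95)")
```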

Since both of the scientific concepts considered reflect the same two phenomena, the scatter of measured values and the unknown difference between the measured and true values of a quantity, it is advisable to give the corresponding terms "random error" and "systematic error", which are always present in measurements, the meaning of probabilistic indicators of measurement uncertainty.

Note also that the measurement result is an interval, the error is likewise an interval (this is indicated by the symbol "±"), and any correction together with its error is also an interval.

What is a unit of measurement

A unit of measurement of a physical quantity is a physical quantity of fixed size that is conventionally assigned a numerical value equal to one and is used for the quantitative expression of physical quantities homogeneous with it. Units of the same quantity may differ in size; for example, the meter, the foot and the inch, all being units of length, have different sizes: 1 ft = 0.3048 m, 1 in = 0.0254 m.

What statements underlie measurement

In theoretical metrology, three postulates (axioms) are adopted, which serve as guides at three stages of metrological work:

in preparing for measurements (postulate 1);

in performing measurements (postulate 2);

in processing the measurement information (postulate 3).

Postulate 1: measurement is impossible without a priori information.

Postulate 2: measurement is nothing more than comparison.

Postulate 3: The measurement result without rounding is random.

The first axiom of metrology: measurement is impossible without a priori information. This axiom refers to the situation before a measurement and says that if we know nothing about the property of interest, we will learn nothing; on the other hand, if everything about it is already known, no measurement is needed. Thus, measurement is occasioned by a shortage of quantitative information about a particular property of an object or phenomenon and is aimed at reducing that shortage.

The presence of a priori information about any size is expressed in the fact that its value cannot be equally probable over the range from −∞ to +∞. That would mean that the prior entropy is infinite,

H_prior = ∞,

and that obtaining the measurement information

I = H_prior − H

for any finite posterior entropy H would require an infinitely large amount of energy.
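A rough numerical illustration of this entropy argument under the simplifying assumption of uniform distributions: the information delivered by a measurement is the difference between the prior and posterior entropies, and it grows without bound as the prior interval does. Both interval widths are invented:

```python
import math

def entropy_uniform(width):
    """Differential entropy (in bits) of a uniform distribution of given width."""
    return math.log2(width)

prior_width = 10.0      # a priori: the value lies somewhere within 10 units
posterior_width = 0.01  # after measurement: localized to within 0.01 units

info = entropy_uniform(prior_width) - entropy_uniform(posterior_width)
print(f"information gained: {info:.2f} bits")
# As prior_width grows without bound, so does the information (and, by the
# argument above, the energy) that the measurement would have to supply.
```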

The second axiom of metrology: measurement is nothing more than comparison. This axiom refers to the measurement procedure and says that there is no experimental way of obtaining information about any sizes other than comparing them with one another. The folk wisdom that "everything is known in comparison" echoes L. Euler's interpretation of measurement, given over 200 years ago: "It is impossible to determine or measure one quantity otherwise than by taking another quantity of the same kind as known and indicating the ratio in which the former stands to the latter."

The third axiom of metrology: the measurement result without rounding is random. This axiom refers to the situation after the measurement and reflects the fact that the result of a real measurement procedure is always influenced by a multitude of factors, including random ones, whose exact accounting is impossible in principle, so that the final result is unpredictable. As practice shows, repeated measurements of the same constant size, or simultaneous measurements of it by different persons using different methods and means, yield unequal results unless these are rounded (coarsened). These are individual values of a measurement result that is random in nature.

Measurement of physical quantities.

Measurement concept. Axioms of metrology underlying measurement. Measurement of a physical quantity

Measurement classification.

Measurement methods.

Measurement errors and the reasons for their occurrence. Classification of measurement result errors. Summation of the components of the measurement error

Axioms of metrology.

1. Any measurement is a comparison.

2. Any measurement without a priori information is impossible.

3. The result of any measurement without rounding is a random value.

Measurement classification

Technical measurements are measurements carried out under specified conditions according to a specific procedure developed and studied in advance; as a rule, these include the mass-scale measurements carried out in all branches of the national economy, with the exception of scientific research. In technical measurements the error is assessed from the metrological characteristics of the measuring instrument, taking into account the measurement method applied.

Metrological measurements.

Verification measurements are measurements performed by metrological supervision services in order to determine the metrological characteristics of measuring instruments. Such measurements include measurements during the metrological certification of measuring instruments, expert measurements, and the like.

Measurements of the highest possible accuracy are those achievable at the current level of development of science and technology. Such measurements are carried out when creating standards and when measuring physical constants. Estimation of errors and analysis of the sources of their occurrence are characteristic of such measurements.

By the method of obtaining the result:

  • Direct - when the value of a physical quantity is obtained directly by comparison with its measure;

  • Indirect - when the desired value of the measured quantity is established from the results of direct measurements of quantities related to the desired quantity by a known relationship. For example, the resistance of a section of a circuit can be measured knowing the current through and the voltage across that section.


Aggregate measurements are measurements of several quantities of the same kind, in which the desired values of the quantities are found by solving a system of equations obtained from direct measurements of various combinations of these quantities.

An example of aggregate measurements is finding the resistances of two resistors from the measured resistances of their series and parallel connections.

The desired resistance values are found from the system of two equations (solved numerically in the sketch below):

R_series = R1 + R2,
R_parallel = R1 · R2 / (R1 + R2).
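A sketch of solving this system numerically with SymPy; the measured resistances of the two connections are invented:

```python
from sympy import symbols, solve

R1, R2 = symbols("R1 R2", positive=True)

# Direct measurements of the two combinations (invented values, ohms).
R_series = 150.0     # resistance of the series connection
R_parallel = 33.333  # resistance of the parallel connection

# The system of two equations given above.
solutions = solve(
    [R1 + R2 - R_series, R1 * R2 / (R1 + R2) - R_parallel],
    [R1, R2],
    dict=True,
)
print(solutions)  # a symmetric pair, approximately R1 = 50, R2 = 100 ohms
```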

Joint measurements are simultaneous measurements of two or more quantities of different kinds, carried out to find the relationship between them.

Joint measurements thus establish the dependence between quantities, and several indicators are determined at once. A classic example of joint measurements is finding the dependence of a resistor's resistance on temperature:

R_t = R_20 · (1 + α(t − 20) + b(t − 20)²),

where R_20 is the resistance of the resistor at t = 20 °C, and α and b are temperature coefficients.

To determine the values R_20, α and b, the resistance R_t of the resistor is first measured at, for example, three different temperatures (t_1, t_2, t_3), and then a system of three equations is set up, from which the parameters R_20, α and b are found:

R_t1 = R_20 · (1 + α(t_1 − 20) + b(t_1 − 20)²),
R_t2 = R_20 · (1 + α(t_2 − 20) + b(t_2 − 20)²),
R_t3 = R_20 · (1 + α(t_3 − 20) + b(t_3 − 20)²).
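Because the system is linear in the auxiliary unknowns c0 = R_20, c1 = R_20·α and c2 = R_20·b, it can be solved directly; a NumPy sketch with invented temperature and resistance data:

```python
import numpy as np

# Three direct measurements (invented): temperatures, °C, and resistances, ohms.
t = np.array([10.0, 20.0, 30.0])
R = np.array([99.61, 100.00, 100.41])

# R_t = R20*(1 + alpha*(t-20) + b*(t-20)**2) is linear in
# c0 = R20, c1 = R20*alpha, c2 = R20*b.
dt = t - 20.0
A = np.column_stack([np.ones_like(dt), dt, dt**2])
c0, c1, c2 = np.linalg.solve(A, R)

R20, alpha, b = c0, c1 / c0, c2 / c0
print(f"R20 = {R20:.2f} ohm, alpha = {alpha:.2e} 1/K, b = {b:.2e} 1/K^2")
```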
Joint and aggregate measurements are close to each other in the way the desired values of the measured quantities are found, since in both cases the required values are obtained by solving systems of equations. The difference is that in aggregate measurements several quantities of the same kind are measured simultaneously, whereas in joint measurements the quantities are of different kinds.

By the nature of the change in the measured quantity:

  • Static - associated with quantities that do not change during the measurement time.
  • Dynamic - associated with quantities that change during the measurement (for example, the ambient temperature).

By the number of measurements in a series:

  • Single;
  • Multiple - the number of measurements is at least three (preferably four or more).

In relation to the basic units of measurement:

  • Absolute - based on direct measurements of one or more basic quantities and on the use of physical constants.
  • Relative - based on establishing the ratio of the measured quantity to a quantity of the same kind used as a unit; the measured value then depends on the unit used.

The measurement principle is the set of interactions between the measuring instrument and the object, based on physical phenomena (see above).


STUDY GUIDE

ELEMENTS OF GENERAL METROLOGY

MOSCOW 2007


Introduction
  1. Subject and tasks of metrology. Basic principles of the approach to measurements
  2. Physical quantities
    2.1. The size of a physical quantity
    2.2. Measuring conversion
    2.3. Basic and derived quantities. Dimension
  3. General questions of measurement theory
    3.1. Measurement classification
    3.2. Principles, methods and techniques of measurement
    3.3. Measuring instruments
    3.4. Measurement conditions
    3.5. Measurement errors
  4. Transfer of the sizes of units of physical quantities
    4.1. Standards of physical quantities
    4.2. Transfer of the sizes of units of physical quantities
  5. Errors of measuring instruments
    5.1. Metrological characteristics of measuring instruments
    5.2. Standardization of the metrological characteristics of measuring instruments
    5.3. Accuracy classes of measuring instruments
    5.4. Methods for the verification of measuring instruments
Bibliography


This study guide is intended for evening students taking the course "Metrology. Standardization. Certification". It covers the main topics studied in the "Metrology" course.

The guide comprises five sections: "1. Subject and tasks of metrology. Basic principles of the approach to measurements", "2. Physical quantities", "3. General questions of measurement theory", "4. Transfer of the sizes of units of physical quantities" and "5. Errors of measuring instruments". At the end of each section there is a list of review questions on the material covered, which will be included among the examination questions for the course.

The guide contains 25 pages and 1 figure.

1. Subject and tasks of metrology. Basic principles of the approach to measurements

Measurements constantly accompany human practical activity. Most often physical quantities are measured: length, mass, time, and so on. Measurements are necessary in the study of nature, since only through measurement can the quantitative characteristics of the objects under study be established. One can say that a science becomes exact only when, thanks to measurements, it acquires the ability to find the exact quantitative relationships that express the laws of nature.

Measurement is the finding of the value of a physical quantity empirically with the help of special technical devices. In performing a measurement, the measured quantity is always compared with another quantity similar to it and taken as the unit. The measured quantity is thereby always evaluated as a certain number of the units adopted for it; this number is called the value of the physical quantity.

According to the definition of measurement, in practical terms the process of measuring a physical quantity is a set of operations involving the use of a technical means that stores a unit of the physical quantity, and it consists in comparing (explicitly or implicitly) the measured quantity with its unit. The purpose of these operations is to obtain the value of the physical quantity (or information about it) in the form most convenient for use.

So, in the simplest case, by applying a graduated ruler to a part, one compares its size with the unit stored by the ruler and, after taking the reading, obtains the value of the quantity (the length, height, thickness or another parameter of the part). When a measuring device such as a micrometer is used, the size of the quantity, converted into the movement of a pointer, is compared with the unit stored by the scale of the device. In the measuring channel of a measuring system a comparison with the stored unit is likewise performed, often in coded form.

The specified set of operations can be called a measurement if a number of conditions are created and fulfilled, namely:

the ability to single out the measured quantity among other quantities;

the establishment of the unit needed to measure the singled-out quantity;

the materialization (reproduction or storage) of the established unit by a technical means;

the preservation of the size of the unit unchanged (within the specified accuracy) at least for the period needed for the measurements.

Metrology deals with the theory and practice of measurement (the name comes from the Greek metron, measure, and logos, teaching, and can be translated as "the teaching about measures"). At present the following definition of metrology is adopted in Russia:

Metrology is the science of measurements and of the methods and means of ensuring their uniformity and achieving the required accuracy.

As can be seen, the definition of metrology uses the concepts of "uniformity of measurements" and "accuracy of measurements".

Uniformity of measurements is the state of measurements in which their results are expressed in legal units and the measurement errors do not exceed the established limits with a given probability.

Accuracy of measurements is the quality of measurements that reflects the closeness of their results to the true value of the measured quantity.

Note that in practice the uniformity of measurements is not always ensured; in particular, it is not observed in the case of quantitative chemical analysis.

A distinction is made between theoretical and applied metrology.

Theoretical metrology is engaged in creating the theoretical foundations of metrology. It solves the following tasks:

Creation and development of measurement theory and theoretical foundations of measurement technology;

Creation and improvement of the theoretical foundations for the construction of systems of units and standards;

Development of the theory of errors based on mathematical statistics and the theory of probability;

Development of general principles for setting up and conducting a measuring experiment;

Development of theoretical foundations for newly emerging and unconventionally developing types and areas of measurement, such as measurements of ionizing radiation, of non-equilibrium processes, and measurements at the submicro level;

Creation of scientific foundations for the quantitative assessment of the parameters of objects and technological processes, and development of scientifically based criteria for assessing the degree of reliability, durability and safety of products.

Applied metrology deals with the practical application, in various fields, of the results of research within theoretical metrology and of the provisions of legal metrology. Its tasks are:

Creation and improvement of measurement methods;

Improving measurement accuracy;

Revision of the fundamental principles of creating standards;

Development of methods and means of transferring the size of a unit from a standard to working measuring instruments with a minimum loss of accuracy;

Providing full automation of all verification work;

Development and improvement of the national services of standard reference data and of reference materials for the properties and composition of substances and materials.

In most countries, including Russia, measures to ensure the uniformity of measurements and the required accuracy are established by legislation. Legal metrological support is provided by legal metrology.

The result of the activity of legal metrology is a variety of documents, both mandatory (laws, state standards (GOSTs)) and advisory. Note that in metrology the term "standard" applies only to documents, not to substances or products.

Often one or another branch of metrology is named after the industry it serves, although this classification is not entirely strict. For example, (practical) metrology is called "medical metrology" in medicine, "chemical metrology" in chemistry, and so on. This book is devoted mainly to measurements in chemistry; the need to separate chemical metrology into a distinct area stems from the fact that measurements in chemistry (chemical analysis) have significant peculiarities.

Chemical metrology is a branch of metrology dealing with measurements in chemistry, mainly in quantitative chemical analysis.

Like any exact science, metrology has its own fundamental principles. The following axioms are usually postulated as such principles.

Axiom 1. Measurement is impossible without a priori information.

This axiom refers to the situation before a measurement and says that we cannot obtain an estimate of the property of interest without knowing anything about it in advance. It follows that the need for measurement is caused by a deficit of quantitative information about the studied property of the object, and that measurement is aimed at reducing this deficit (clearly, if everything about the property is known, nothing needs to be measured).

Axiom 2. Measurement is nothing more than comparison.

This is a statement that the only experimental way to obtain information about any sizes is to compare them with one another. A consequence of this axiom is the need to introduce standards of physical quantities and a system for transferring their sizes to exemplary and working measuring instruments.

Axiom 3. The measurement result without rounding is random.

This axiom refers to the situation after the measurement and reflects the fact that a measurement result always depends on many factors, including random ones, whose exact accounting is impossible in principle. Hence, to describe measurement results in full, the apparatus of mathematical statistics must be used.


Review questions for Section 1:

1. Give a definition of the concept of "measurement" and list the conditions for measuring a physical quantity.

2. List the goals and tasks of theoretical and applied metrology.

3. What are the fundamental principles of metrology?


2. Physical quantities
2.1 The size of a physical quantity
One of the fundamental concepts in physics, chemistry and metrology is the concept of "physical quantity".

A physical quantity is a property that is qualitatively common to many physical objects (physical systems, their states and the processes occurring in them) but quantitatively individual for each object. Typical physical quantities are mass, time, temperature, etc. It is clear from this definition that any physical quantity can manifest itself to a greater or lesser extent, i.e. it has a quantitative characteristic.

One and the same property of a physical object can be expressed through different quantities. For example, the degree of heating of a body can be characterized both by its temperature and by the average speed of motion of its molecules. For convenience, and to ensure the uniformity of measurements, one characteristic is selected for each property; it is legalized by agreement, and thereafter only it is used.

In order to be able to establish differences in the quantitative content, in each specific object, of the property reflected by a physical quantity, the concept of the size of a physical quantity is introduced. In everyday usage, instead of "the size of the mass (length, amount of substance)" one usually says simply "the mass, length, amount of substance".

2.2 Measuring conversion


A measuring conversion is a transformation in which a one-to-one correspondence is established between the sizes of two quantities, preserving, for a certain set of sizes of the transformed quantity (called the conversion range), all the relations and functions defined for it. Thus, when temperature is measured within a certain interval (the conversion range) using a thermocouple (the transducer), it is converted into an emf.

The conversion is performed by a transducer.

A linear transformation is a measuring transformation in which, if the transformed quantity Q increases by ΔQ, the result of the transformation R increases by ΔR, and if Q increases by nΔQ, then R increases by nΔR (provided that all values lie within the conversion range).

To each size Q a positive real number q can be assigned that shows how many times the given size exceeds the size of the physical quantity [Q] taken as the unit, i.e. Q = q · [Q]. The number q is called the numerical value of the quantity Q, and its quantitative expression in the form of a certain number of the adopted units is the value of the physical quantity. Suppose the length of a table is 1.2 m (the value); then 1.2 is the numerical value. Note that both the size and the value of a physical quantity, unlike the numerical value, do not depend on the choice of units.
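A small sketch of this relation in Python: the same size expressed in different units yields different numerical values q, while the size itself is unchanged (the foot and inch conversion factors are the exact definitions cited earlier):

```python
# The same length (size) expressed in different units: the numerical
# value q depends on the unit, the size does not.
length_m = 1.2       # value: 1.2 m, numerical value q = 1.2

FOOT_IN_M = 0.3048   # 1 ft = 0.3048 m (exact definition)
INCH_IN_M = 0.0254   # 1 in = 0.0254 m (exact definition)

q_feet = length_m / FOOT_IN_M
q_inches = length_m / INCH_IN_M
print(f"{length_m} m = {q_feet:.4f} ft = {q_inches:.2f} in")
```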

A physical scale is a sequence of physical quantities of the same kind of various sizes, constructed in a definite way.
2.3 Basic and derived quantities. Dimension
Physical quantities are objectively interrelated. The relationships between physical quantities are generally expressed by equations of physical quantities. A group of quantities is singled out (their number in each field of science being determined by the difference between the number of independent equations and the number of physical quantities entering into them). These quantities are called basic quantities, and the corresponding units basic units. The question of which physical quantities and units to choose as basic cannot be decided theoretically; they are chosen for reasons of efficiency and expediency. In particular, quantities and units that can be reproduced with high accuracy are chosen as basic. All other quantities and their units are called derived; they are formed from the basic quantities and units by means of the equations of physical quantities.

The set of selected basic physical quantities is called a system of quantities; the set of units of the basic quantities is called a system of units of physical quantities.

The described principle of constructing systems of physical quantities and their units was proposed by Gauss in 1832.

In the course of the development of science and technology, several systems of physical quantities have appeared, differing in their basic units. At present the International System of Units (abbreviated SI) is generally accepted, although off-system units are still widely used for practical reasons, and in theoretical physics the so-called natural systems of physical quantities are used. The main advantages of using the single SI system are:

Versatility;

Unification of units of measurement;

Convenience of the practical use of the units, which in most cases lie near the middle of the range of actually measured values;

Coherence (in most basic equations, when SI units are used, the coefficients are equal to 1);

Simplicity of studying the SI system (in particular, force and mass are distinguished in it).

A formalized reflection of the qualitative difference between physical quantities is their dimension. The standard notation for dimension is dim. The dimensions of the basic physical quantities are written as capital Latin letters corresponding to the designations of the quantities: dim l = L (length); dim m = M (mass); dim t = T (time), etc. The dimensions of the remaining quantities are determined through the dimensions of the basic quantities according to the formula

dim Q = L^α · M^β · T^γ · …,

where L, M, T, … are the dimensions of the basic quantities, and α, β, γ, … are the dimension exponents: numbers (zero, integer or fractional) determined from the equations of physical quantities.
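A toy sketch of this dimension arithmetic in Python: dimensions multiply by adding exponents, so the dimension of a derived quantity follows from its defining equation. Force F = m·a is used here as an assumed example:

```python
# A dimension as a mapping from base dimensions L, M, T to their exponents.
def dim_mul(a, b):
    """Multiply two dimensions: the exponents add."""
    return {k: a.get(k, 0) + b.get(k, 0) for k in set(a) | set(b)}

LENGTH, MASS, TIME_INV = {"L": 1}, {"M": 1}, {"T": -1}

velocity = dim_mul(LENGTH, TIME_INV)        # dim v = L T^-1
acceleration = dim_mul(velocity, TIME_INV)  # dim a = L T^-2
force = dim_mul(MASS, acceleration)         # dim F = L M T^-2

print(force)  # {'L': 1, 'M': 1, 'T': -2}, up to dict ordering
```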

If all the dimension exponents are equal to zero, the quantity is called dimensionless. Dimensionless quantities are either relative (the ratio of two quantities of the same dimension) or logarithmic (the logarithm of a relative quantity). Thus, the relative humidity of air is a dimensionless relative quantity, and the optical density of a solution is a dimensionless logarithmic quantity.
Review questions for Section 2:


1. Give a definition of the concept of "physical quantity".

2. Basic and derived physical quantities; the main advantages of the SI system.

  3. Determination of the dimension of basic and derived physical quantities.

3. General questions of the theory of measurements

3.1. Measurement classification


Measurements can be classified in a variety of ways.

By the nature of the dependence of the measured quantity on time, measurements can be static (the measured quantity remains constant during the entire measurement time) and dynamic (the measured quantity changes over time).

Examples: static measurements include measuring the length or mass of a solid; dynamic measurements include measuring the temperature or pressure in a chemical reactor.

By the way the results are obtained, measurements are divided into direct, when the desired value of the measured quantity is found directly from the experimental data, and indirect, when the value of the quantity is found on the basis of a known relationship between this quantity and quantities subjected to direct measurements.

When several quantities of the same kind are measured simultaneously, the measurements are called aggregate. In this case the desired values are found by solving a system of equations obtained by direct measurements of various combinations of these quantities.

By the conditions that determine the accuracy of measurements, a distinction is made between measurements of the highest possible accuracy achievable at the current state of the art; control and verification measurements, carried out with measuring instruments and by procedures that guarantee the error of the result with a given probability; and technical measurements, in which the error of the result is determined by the errors of the measuring instruments.

By the way the results are expressed, measurements are divided into absolute, based on direct measurements of one or more physical quantities or on the use of the values of physical constants, and relative, in which the ratio of a quantity to a quantity of the same kind, playing the role of a unit or taken as the initial one, is measured. The results of relative measurements are expressed either in fractions (dimensionless values) or as percentages.

By the characteristic of measurement accuracy, one distinguishes equally accurate measurements, i.e. a series of measurements of a quantity made with measuring instruments of the same accuracy and under the same conditions (for example, taking several weighed portions of a substance on the same analytical balance with the same weights under the same conditions), and unequally accurate measurements, i.e. a series of measurements of a quantity performed with measuring instruments of different accuracy and/or under different conditions (for example, weighing portions of the same substance on balances of different sensitivity or at different temperatures).

By the number of measurements of the same quantity in a series, measurements are subdivided into single and multiple. Single measurements are performed once, for example reading a moment of time from a clock or the temperature of a solution under conditions of its constancy; in practice this is often sufficient. In multiple measurements of the same size of a physical quantity, the result is obtained on the basis of several successive measurements, i.e. from a number of single measurements, and the arithmetic mean of the individual results is usually taken as the result. By convention, a measurement is considered multiple if the number of individual measurements is greater than or equal to 4; the data of such a series can then be processed by the methods of mathematical statistics, as in the sketch below.
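A minimal sketch of processing such a series with Python's statistics module; the values are invented:

```python
import math
import statistics

series = [25.14, 25.11, 25.16, 25.12, 25.13]  # invented single measurements
assert len(series) >= 4  # conventionally treated as a multiple measurement

mean = statistics.mean(series)       # taken as the measurement result
s = statistics.stdev(series)         # sample standard deviation
s_mean = s / math.sqrt(len(series))  # standard deviation of the mean

print(f"result = {mean:.3f}, s = {s:.3f}, s_mean = {s_mean:.3f}")
```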
3.2 Principles, methods and techniques of measurements
The basis for carrying out any measurement is an interconnected triad: the principle, the method and the procedure of measurement.

The measurement principle is the set of physical phenomena underlying a measurement. Examples: the phenomenon of absorption of monochromatic radiation underlies spectrophotometric and atomic-absorption methods of measuring the concentration of a substance in solution; the effect of gravity underlies the weighing of the mass of a substance.

A measurement method is a technique or a set of techniques for comparing the measured physical quantity with its unit in accordance with the measurement principle implemented. The measurement method is determined by the design of the measuring instruments used. There are several basic measurement methods.

The definition-based measurement method consists in measuring a quantity in accordance with the definition of its unit and is used, as a rule, in reproducing the basic units. Such are, for example, the measurements performed when the unit of temperature (the kelvin) is reproduced according to its definition.

The method of comparison with a measure (comparison method) consists in comparing the measured quantity with the quantity reproduced by a measure. For example, comparing a mass with a known value underlies the measurement of mass on an equal-arm beam balance.

The differential measurement method consists in comparing the measured quantity with a homogeneous quantity of known value, the difference between the two (which is what is actually measured) being small in comparison with the quantities themselves. Examples: measurements carried out when verifying measures of length by comparison with a reference standard on a comparator; spectrophotometric determination of high and low concentrations of substances in an analyzed solution, where the measured quantity, the optical density, is the difference between the absolute optical densities of the analyzed and the standard (blank) solutions.

The zero (null) measurement method consists in bringing to zero the net effect of the measured quantity and of the measure on the comparison device. This method is implemented in all devices whose principle is based on measuring electrical resistance with a bridge brought to complete balance. For example, this method is used in the thermal-conductivity detector (katharometer) of a gas chromatograph.
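A sketch of the null idea for the classic Wheatstone bridge case: the adjustable arm is varied until the detector reads zero, and at balance the unknown resistance follows from the balance condition (the arm labelling below is an assumption, and all values are invented):

```python
# Wheatstone bridge at balance: no current through the detector.
# Assumed arm labelling: R1 / R2 = R3 / Rx, hence Rx = R2 * R3 / R1.
R1 = 1000.0  # ohm, ratio arm
R2 = 1000.0  # ohm, ratio arm
R3 = 472.5   # ohm, adjustable-arm setting that nulled the detector (invented)

R_x = R2 * R3 / R1
print(f"unknown resistance R_x = {R_x:.1f} ohm")
```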

In the contact measurement method, the sensitive element of the device is brought into contact with the object of measurement. Example: measuring temperature with a mercury thermometer.

In the non-contact measurement method, the sensitive element of the device is not brought into contact with the object of measurement. Example: measuring the temperature of a graphite cell with a pyrometer in atomic-absorption analysis.

A measurement procedure is a set of operations and rules whose fulfilment ensures results with a known error. Usually the measurement procedure is regulated by an appropriate normative-technical document setting out all the norms and rules according to which the measurements are made: requirements for the choice of measuring instruments, the procedure for preparing a measuring instrument for operation, requirements for the measurement conditions, the measurements themselves with an indication of their number and sequence, and the processing of the measurement results, including the calculation and introduction of corrections and the ways of expressing errors ("unified procedures"). As will be shown below, most methods of quantitative chemical analysis do not fit this definition, but the term "measurement procedure" is applied to them all the same.


3.3 Measuring instruments
Measuring instruments are technical devices intended for measurements, having standardized metrological characteristics and reproducing and/or storing a unit of a physical quantity whose size is assumed to remain unchanged (within the specified error) over a known time interval. Measuring instruments are distinguished by a number of criteria, as follows.

By purpose - metrological and working. Metrological measuring instruments are designed to reproduce the unit of a physical quantity and/or to store it or to transfer the size of the unit to working measuring instruments; with their help the uniformity of measurements in the country is ensured. These include standards, exemplary measuring instruments, calibration installations, comparison means (comparators, etc.) and standard samples.

Working measuring instruments are intended for measurements not related to transferring the size of a unit of a physical quantity to other measuring instruments. They measure real physical quantities and are the most numerous. These include the measuring instruments used in scientific research (pH meters, spectrometers, spectrographs), in monitoring various parameters of products and technological processes (sensors, counters), and so on.

By the level of standardization - standardized and non-standardized. Standardized measuring instruments are manufactured in accordance with the requirements of a state or industry standard; the technical characteristics of such instruments correspond to the characteristics of the same type of measuring instruments obtained on the basis of state tests. Measuring instruments entered in the State Register of Measuring Instruments are, as a rule, standardized. Examples of this type of means are the pipettes, volumetric flasks, weights and standard titers (fixanals) widely used in laboratory chemical practice.

Non-standardized measuring instruments are designed for a special measurement task. Such products are often unique and self-made. For the measurements carried out with them to be reliable, they must first be metrologically certified.

In relation to the measured physical quantity - main and auxiliary. Main measuring instruments measure the physical quantity whose value must be obtained within the framework of the measurement task. Auxiliary measuring instruments measure a physical quantity whose influence on the main measuring instrument or on the object of measurement must be taken into account in order to obtain measurement results of the required accuracy.

By design - measures, measuring devices, measuring installations, measuring systems and measuring complexes.

A measure, as a measuring instrument, is intended to reproduce and/or store a physical quantity of one or more specified sizes, whose values are expressed in established units and are known with the required accuracy. The Weston normal element is a measure of emf with a nominal value of 1 V; a quartz oscillator is a measure of the frequency of electrical oscillations; 6.02 × 10²³ is a measure of the number of any particles (atoms, ions, molecules) equal to one mole.

The measure acts as the carrier of the unit of a physical quantity and serves as the basis for measurement. When the size of the measured quantity is compared with it, the value of the quantity is obtained in the same units.

Measures are subdivided into single-valued measures, multi-valued measures, sets of measures and stores (banks) of measures. A measure that reproduces a physical quantity of one size is a single-valued measure (a weight of a certain mass, a capacitor of constant capacitance, a Weston normal element, a gauge). A measure that reproduces a physical quantity of different sizes is a multi-valued measure (a variable capacitor; cuvettes with inserts for spectrophotometric measurements). A set of measures of different sizes of the same physical quantity, needed for practical use both individually and in various combinations, is a set of measures (a set of weights, a set of gauges, etc.).

A measuring device is a measuring instrument designed to obtain values of the measured physical quantity within a specified range. Such a device contains a unit that converts the measured quantity into a measurement-information signal and displays it in an intelligible form. In many cases the indicating unit has a scale with a pointer or similar device, a chart with a pen, or a digital display, from which the values of the physical quantity can be read or recorded; if the instrument is coupled to a computer, the reading can be taken from the display or a printout.

By the way the measured value is indicated, measuring devices are divided into indicating and recording. The former only allow the values of the measured quantity to be read; the latter also record them. Examples of indicating instruments are a micrometer, an analog or digital voltmeter, and a clock. Readings can be recorded in analog or numerical form, and there are devices that can record several values of one or more quantities simultaneously.

By their action, measuring devices are divided into integrating and summing devices. Integrating measuring devices determine the value of the measured quantity by integrating it over another quantity (an electric energy meter, a counter of distance traveled). Summing measuring devices give readings functionally related to the sum of two or more quantities supplied through different measuring channels (a wattmeter measuring the total power of several electrical generators).

Measuring transducers are measuring instruments used to produce a measurement-information signal in a form convenient for transmission, further transformation, processing and storage, but not amenable to direct perception by an observer. They are structurally separate elements and usually have no independent significance for carrying out measurements; as a rule they are components of more complex measuring complexes and of systems of automatic monitoring, control and regulation.

Measuring systems are a set of functionally integrated measures, measuring devices, measuring transducers, computers and other technical means located at different points of a monitored space (environment, object, etc.) for the purpose of measuring one or more physical quantities inherent in that space. Depending on their purpose, they are divided into measuring information systems, measuring monitoring systems, measuring control systems, and so on. The first present the measurement information in the form required by the consumer. The second are designed for continuous monitoring of the parameters of a technological process, a phenomenon, or a moving object and its state. A measuring control system provides automatic control of a technological process, production, a moving object, etc.; such a system contains elements that compare the parameters carried by the measurement information with the normative ones, as well as feedback elements that make it possible to bring the controlled parameters of the process or moving object to their nominal values. Depending on the number of measuring channels, measuring systems can be one-, two-, three- or more-channel. If a system has automatic means for receiving and processing measurement information, it is called an automatic measuring system. A system that is reconfigured depending on the measurement task is called a flexible measuring system.

Measuring complexes are a functionally integrated set of measuring instruments and auxiliary devices intended to perform a specific measurement task as part of a measuring control system. Example: measuring complexes for assessing the quality of manufactured integrated circuits.

By the level of automation - non-automatic, automated and automatic measuring instruments. A non-automatic measuring instrument has no devices for the automatic performance of measurements and processing of their results (a tape measure, a theodolite, a pyrometer, indicator paper). An automated measuring instrument performs one or several measuring operations in automatic mode. An automatic measuring instrument automatically performs the measurements and all the operations connected with obtaining and processing the results, recording them, transmitting the data or generating a control signal.
3.4 Measurement conditions
The measurements are carried out under conditions under which all the values ​​of the influencing quantities are maintained within the limits of their nominal values. Such conditions are called normal... They are established in normative and technical documents for measuring instruments of a specific type or during their verification. In most measurements, the normal temperature value is normalized (in some cases it is 20 ° C, or 293 K, in others - 23 ° C, or 296 K). The basic error of the measuring instrument is usually calculated for the normal value, to which the results of many measurements performed under different conditions are given.

The range of values of an influencing quantity within which the change in the measurement result caused by it can be neglected, in accordance with the established accuracy standards, is called the normal range of values of the influencing quantity (normal range).

The range of values of an influencing quantity within which the additional error or the change in the readings of the measuring instrument is normalized is called the working range of values of the influencing quantity (working range).

Measurement conditions in which the measured and influencing quantities take on extreme values that the measuring instrument can still withstand without destruction or deterioration of its metrological characteristics are called extreme measurement conditions.
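A minimal sketch of how these three ranges might be told apart in software; the temperature limits below are purely illustrative and are not taken from any standard.

# Hypothetical limits for one influencing quantity (temperature, deg C).
NORMAL_RANGE = (18.0, 22.0)     # influence on the result can be neglected
WORKING_RANGE = (5.0, 40.0)     # additional error is normalized here
EXTREME_RANGE = (-20.0, 60.0)   # instrument survives, accuracy not guaranteed

def classify_conditions(t):
    if NORMAL_RANGE[0] <= t <= NORMAL_RANGE[1]:
        return "normal"
    if WORKING_RANGE[0] <= t <= WORKING_RANGE[1]:
        return "working"
    if EXTREME_RANGE[0] <= t <= EXTREME_RANGE[1]:
        return "extreme"
    return "outside all rated ranges"

print(classify_conditions(21.0))   # normal
print(classify_conditions(35.0))   # working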


3.5 Measurement errors


One of the main metrological characteristics of the measurement results is the error.

Measurement error is the deviation of the measurement result from the true value of the measured quantity. Error arises from the imperfection of the measurement process.

The specific causes and nature of the manifestation of errors are very diverse. Accordingly, they are classified according to many criteria.

By the way they are expressed, errors are divided into absolute and relative.

An absolute measurement error is a measurement error expressed in units of the measured quantity. A relative measurement error is the ratio of the absolute measurement error to the true value of the measured quantity.
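In code, the two ways of expressing an error look as follows; since the true value is unknowable in practice, an accepted reference value stands in for it in this sketch.

def absolute_error(measured, reference):
    # Expressed in the units of the measured quantity.
    return measured - reference

def relative_error_percent(measured, reference):
    # Dimensionless ratio, returned here as a percentage.
    return (measured - reference) / reference * 100.0

x_ref = 9.81    # accepted reference value standing in for the true value
x_meas = 9.87   # a single measured value, hypothetical
print(absolute_error(x_meas, x_ref))           # ~0.06 (same units)
print(relative_error_percent(x_meas, x_ref))   # ~0.61 %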

By the nature of their manifestation, errors are divided into systematic and random.

A systematic measurement error is the component of measurement error that remains constant or changes regularly in repeated measurements of the same physical quantity. Depending on the nature of the change, systematic errors are divided into constant errors, proportional errors, periodic errors, and errors varying according to a complex law.

Constant errors retain their value over a long time, in particular during the entire series of measurements; they are the most common. A good example of this kind of error is a constant, nonzero blank value.

Proportional errors change proportionally to the measured value.

Periodic errors are a periodic function of time or of the position of the pointer of the measuring device.

Errors varying according to a complex law result from the combined action of several systematic errors.
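The classification just given can be condensed into a toy error model: a constant term, a term proportional to the measured value, and a periodic term together produce an error varying "according to a complex law". All coefficients below are invented for illustration.

import math

def systematic_error(x, t):
    constant = 0.05                # constant (e.g., a nonzero blank value)
    proportional = 0.002 * x       # grows with the measured value x
    periodic = 0.01 * math.sin(t)  # periodic in time (or pointer position)
    # The sum varies "according to a complex law".
    return constant + proportional + periodic

print(systematic_error(x=10.0, t=0.0))   # 0.07
print(systematic_error(x=10.0, t=1.5))   # ~0.08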

Depending on the causes of their occurrence, systematic errors are subdivided into instrumental errors, errors of the measurement method, subjective errors, and errors due to non-compliance with the established measurement conditions.

Instrumental (hardware) measurement errors are due to the errors of the measuring instrument used. They arise from wear of parts or of the instrument as a whole, excessive friction in the instrument mechanism, inaccurate marking of the scale during calibration, a discrepancy between the actual and nominal values of a measure, etc. In recent years this type of error has also come to include the random error component inherent in the measuring instrument itself.

Measurement method errors (theoretical errors) are due to the imperfection of the adopted measurement method. They result from simplified conceptions of the phenomena and effects underlying the measurement.

Subjective measurement errors (personal errors, personal difference) are caused by the individual characteristics of the operator.

Measurement errors due to changes in measurement conditions arise from the unaccounted-for or insufficiently accounted-for effect of one or another influencing quantity (temperature, pressure, air humidity, magnetic field strength, vibration, etc.), from incorrect installation of the measuring instruments, and from other factors associated with the measurement conditions.

A random measurement error is the component of measurement error that changes randomly (in sign and value) in repeated measurements of the same quantity. Random errors are inevitable and unavoidable and are always present in measurement results. They cause scattering of the numerical values of the measured quantity (differences in the last significant digits) in repeated, sufficiently accurate measurements under constant conditions.
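This scatter is easy to simulate: repeated readings of the same quantity differ in the last digits, and the sample standard deviation characterizes the random component. The noise level chosen below is arbitrary.

import random
import statistics

random.seed(1)
true_value = 100.0   # unknowable in practice; assumed here for the simulation
readings = [true_value + random.gauss(0.0, 0.02) for _ in range(20)]

print(statistics.mean(readings))    # close to 100.0
print(statistics.stdev(readings))   # characterizes the random scatter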

By the conditions of measurement, errors are divided into static and dynamic. Static measurement errors correspond to the conditions of static measurements, dynamic errors to the conditions of dynamic measurements. Depending on the measurement conditions, basic and additional errors are also distinguished.

In addition, a gross measurement error is distinguished: an error significantly exceeding that expected under the given measurement conditions.


Review questions for section 3:

1. List the ways of classifying measurements.

2. List the types of measurement methods and give a brief description of each.

3. Classification of measuring instruments.

4. Classification of measures.

5. Classification of measuring devices.

6. Absolute and relative error.

7. Types of systematic errors.

4. Transfer of sizes of units of physical quantities
4.1 Standards of physical quantities
To ensure the uniformity of measurements, a necessary condition is the identity of the units in which all measuring instruments of a given physical quantity are calibrated. This is achieved by accurately reproducing and storing the established units of physical quantities and transferring their sizes to working measuring instruments by means of standards and exemplary measuring instruments.

A standard of a unit of a quantity is a measuring instrument designed to reproduce and store a unit of a quantity (or multiple or sub-multiple values of a unit) in order to transfer its size to other measuring instruments of the given quantity. Standards of units recognized by a decision of an authorized state body as reference standards on the territory of the Russian Federation are called national measurement standards. If a standard reproduces a unit of a physical quantity with the highest accuracy in the country, it is called primary. As a rule, national measurement standards are primary. Primary standards of the basic units reproduce those units in accordance with their definitions (conversely, how a unit of a physical quantity is defined is, to one degree or another, conditioned by the structure of the primary standard).
4.2 Transfer of sizes of units of physical quantities
The transfer is carried out by means of exemplary measuring instruments.

Exemplary measuring instruments are measures, measuring devices or measuring transducers intended for the verification and calibration of other measuring instruments. For an exemplary measuring instrument approved in accordance with the established procedure, a certificate is issued indicating its metrological parameters and its category according to the national verification scheme. Exemplary measuring instruments are stored and used by the bodies of the State Metrological Service, as well as by the bodies of departmental metrological services.

Fig. 1 - Scheme of transferring the size of a unit from the primary standard to exemplary and working measuring instruments
In its most general form, the metrological chain for transferring the sizes of units of physical quantities is shown in Fig. 1. The scheme presented in Fig. 1 has a strict hierarchy: the transfer of sizes proceeds from top to bottom - from the primary standard to the working standards, from the working standards to the exemplary measures and measuring instruments of the 1st category, and so on; exemplary measuring instruments at a lower level are verified against those one step higher. Working measures and measuring instruments are verified against exemplary ones of the appropriate accuracy. All exemplary measuring instruments are subject to mandatory verification within the time frames established by the rules of the Federal Agency for Technical Regulation and Metrology.

The metrological chain shown in Fig. 1 is used in full only for a few physical quantities. In other cases the number of steps in the hierarchy may be considerably smaller. This number, and the order of transferring the size for each specific physical quantity, are recorded in the verification schemes.


Review questions for section 4:

1. Define the concept of a "standard of a unit of a quantity".

2. The metrological chain for transferring the sizes of units of physical quantities.

5 Errors of measuring instruments
5.1 Metrological characteristics of measuring instruments
The metrological characteristics of measuring instruments are those of their technical characteristics that influence the results and errors of measurements. For each measuring instrument, the set of these characteristics is selected and normalized in such a way that the measurement error can be estimated from them.

The main metrological characteristics of measuring instruments are as follows:

- The static conversion characteristic (conversion function, calibration characteristic) is a dependence y = f(x) of the output signal y on the input signal x. This characteristic is specified (normalized) in the form of an equation, graph or table and is officially assigned to a given measuring instrument over its entire measurement range. The value S = f'(x) = dy/dx is called the sensitivity of the conversion characteristic. One often speaks of the sensitivity of a measuring instrument, a measurement technique, etc., meaning the sensitivity of the corresponding static conversion characteristic. A static conversion characteristic of the form y = Kx is called linear; in this case the sensitivity is K (a numerical sketch of estimating the sensitivity follows after this list).

- The division value (for dial instruments) is the change in the measured quantity that corresponds to a movement of the pointer by one scale division. For digital devices, the role of the division value is played by the unit value of the least significant digit of the instrument reading. When the sensitivity is constant at every point of the measurement range, the scale is called uniform.
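As promised above, the sensitivity at a point of a (possibly nonlinear) conversion characteristic can be estimated numerically as a difference quotient; the quadratic characteristic below is a made-up example, not any particular instrument's.

def conversion(x):
    # A hypothetical nonlinear static conversion characteristic y = f(x).
    return 2.0 * x + 0.1 * x ** 2

def sensitivity(f, x, h=1e-6):
    # Central-difference estimate of dy/dx at the point x.
    return (f(x + h) - f(x - h)) / (2.0 * h)

print(sensitivity(conversion, 1.0))   # ~2.2
print(sensitivity(conversion, 5.0))   # ~3.0: not constant, scale non-uniform

For a linear characteristic y = Kx the same estimate returns K at every point, which is exactly the case of a uniform scale.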

The error of a measuring instrument is the error of the results obtained using that measuring instrument. This is its most important characteristic. In accordance with the definitions given in Sec. 1.2, absolute and relative errors are distinguished; they can be written as follows.

The absolute error ∆ of a measure is the difference between its nominal value x_n and its actual value x_d:

∆ = x_n - x_d.

The absolute error ∆ of a measuring device is the difference between its reading x_r and the actual value of the measured quantity x_d:

∆ = x_r - x_d.

The relative error δ of a measuring instrument is the ratio of the absolute error ∆ to the actual value, usually expressed as a percentage:

δ = (∆ / x_d) · 100 %.

Since almost always δ << 1, one may take x_r ≈ x_d and write:

δ ≈ (∆ / x_r) · 100 %.
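The substitution x_r ≈ x_d is easy to check numerically: for small errors the two ways of computing δ differ negligibly. The values below are illustrative.

x_d = 100.00   # actual (conventional true) value
x_r = 100.25   # instrument reading
delta = x_r - x_d

exact = delta / x_d * 100.0    # relative error w.r.t. the actual value
approx = delta / x_r * 100.0   # relative error w.r.t. the reading

print(exact, approx)   # 0.2500 % vs ~0.2494 %: the difference is negligible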


The errors of measuring instruments, like measurement errors in general, are divided into static and dynamic (only static errors are discussed here) and into systematic and random. Unlike random errors, systematic errors are a function of the measured value and of time. In addition, when analyzing the errors of measuring instruments (the error components), one conventionally distinguishes proportional errors (dependent on the measured value) and constant errors (independent of the measured value).
5.2 Standardization of metrological characteristics of measuring instruments
Normalization is the establishment of limits for permissible deviations of the real metrological characteristics of measuring instruments from their nominal values. The norms are set by the appropriate standards. The real metrological characteristics of measuring instruments are determined during manufacture and during verification; if at least one of them is unsatisfactory, the measuring instrument is adjusted or withdrawn from service.

Note that both the metrological characteristics of measuring instruments and the conditions under which they are operated (conditions of use), such as the temperature or pressure of the ambient air, are normalized. A distinction is made between normal conditions of use (the range within which the influence of changing operating conditions on the measurement process and results can be neglected) and the working range, within which changes in operating conditions do affect the measurement results, but these influences are normalized.

The total error ∆Σ of a measuring instrument under normal conditions is called its basic error and is normalized by setting the permissible limit ∆d. Most often, the systematic component ∆s and the random component of the error are normalized separately.

5.3 Accuracy classes of measuring instruments


The accuracy class is a generalized characteristic of a measuring instrument, determined by the limits of its permissible basic and additional errors, as well as by other properties of the measuring instrument that affect the accuracy of measurements carried out with it. Accuracy classes are established for measuring instruments for which:

the systematic and random errors are not normalized separately;

the dynamic error is negligible.

The way the accuracy class of a measuring instrument is designated is determined by the way the limits of its permissible basic error are set. Usually the reduced or the relative error is used for this.
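For the common case in which the class is stated through the reduced error, the limit of the permissible absolute error follows directly from the class number and the normalizing value. In this sketch the normalizing value is taken to be the full-scale value; both numbers are illustrative.

def permissible_absolute_error(accuracy_class, normalizing_value):
    # Reduced-error convention: the class number is the permitted basic
    # error in percent of the normalizing value (full scale assumed here).
    return accuracy_class / 100.0 * normalizing_value

full_scale = 300.0   # e.g., a hypothetical 0-300 V voltmeter
print(permissible_absolute_error(1.0, full_scale))   # 3.0 V anywhere on scale
print(permissible_absolute_error(0.5, full_scale))   # 1.5 V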


5.4 Methods of verification of measuring instruments
Verification of measuring instruments is a set of operations performed by the bodies of the state metrological service (or other authorized bodies and organizations) in order to determine and confirm the compliance of a measuring instrument with the established technical requirements. There are several methods of verification, which differ for measures and for measuring devices.

The following methods are used to verify measures:

comparison with an exemplary measure more accurate than the one being verified, using a comparator;

measurement of the quantity reproduced by the measure being verified with measuring instruments of the appropriate category and class ("calibration of measures");

calibration, which consists in comparing one measure from a set (or one mark of the scale of a multi-valued measure) with a more accurate measure. The sizes of the other measures of the set being verified (the values of the reproduced quantity at the other scale marks) are then determined by comparing them in various combinations on comparison devices and processing the results obtained.

Measuring devices can be verified in two ways:

by measuring with them the quantity reproduced by exemplary measures of the appropriate category or accuracy class. Usually the values of the measured quantity are chosen to coincide with the corresponding marks of the instrument scale, and the basic error is taken as the largest difference between the measurement result and the size of the measure. A typical example: verification of a balance by weighing reference weights;

by measuring the same quantity with the device being verified and with an exemplary device; the error of the device being verified is determined as the difference between its readings and those of the exemplary device. Example: verifying a thermometer against a reference thermometer by measuring the temperature of the same object, such as water in a thermostat.

The most important point, both in verification and in constructing the chains for transferring the sizes of units of a physical quantity, is the choice of the ratio between the errors of the exemplary and the verified measuring instruments. This choice is made in accordance with the principle of neglecting small errors. Usually the error ratio is chosen between 1:3 and 1:5, but sometimes (taking into account the specific features of the verification procedure and the requirements placed on it) other ratios are used.
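A sketch of the ratio check just described: before a reference instrument is used for verification, one confirms that its error limit is a small fraction (commonly 1/3 to 1/5) of the error limit of the instrument under verification. The limits and the threshold below are placeholders.

def reference_is_adequate(ref_error_limit, verified_error_limit, ratio=1.0 / 3.0):
    # Principle of neglecting small errors: the reference instrument's
    # error limit must not exceed the chosen fraction of the limit of
    # the instrument being verified.
    return ref_error_limit <= ratio * verified_error_limit

print(reference_is_adequate(0.02, 0.10))   # True  (ratio 1:5)
print(reference_is_adequate(0.05, 0.10))   # False (only 1:2)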
Review questions for section 5:

1. List the main metrological characteristics of measuring instruments.

2. Absolute and relative error of measuring instruments.

3. Normalization of the metrological characteristics of measuring instruments.

4. Accuracy classes of measuring instruments.

5. List the main methods of verification of measures and measuring devices.

Bibliography

1. Dvorkin V.I. Metrology and Quality Assurance of Quantitative Chemical Analysis. Moscow: Khimiya, 2001. 263 p.

2. Law of the Russian Federation "On Ensuring the Uniformity of Measurements".

3. Burdun G.D., Markov B.N. Fundamentals of Metrology. Moscow: Izdatelstvo Standartov, 1985. 256 p.

4. RMG 29-99 "State System for Ensuring the Uniformity of Measurements. Metrology. Basic Terms and Definitions".

5. GOST 8.563-96 "GSOEI. Measurement Techniques".

6. GOST 8.061-80 "GSI. Verification Schemes. Content and Construction".



Like any other science, measurement theory (metrology) is built on a number of fundamental postulates that describe its initial axioms.

The first postulate of measurement theory is Postulate A: within the accepted model of the object of research, there exists a certain measured physical quantity and its true value.

If we assume that a part is a cylinder (the model is a cylinder), then it has a diameter that can be measured. If the part cannot be considered cylindrical, for example if its cross-section is an ellipse, it is meaningless to measure its diameter, since the measured value carries no useful information about the part; within the new model, the diameter does not exist. The measured quantity exists only within the framework of the adopted model, that is, it is meaningful only as long as the model is recognized as adequate to the object. Since, for different purposes of research, different models can be associated with a given object, Postulate A yields

Corollary A1: for a given physical quantity of the object of measurement there exists a multitude of measured quantities (and, accordingly, of their true values).

From the first postulate of measurement theory it follows that the measured property of the object of measurement must correspond to a certain parameter of its model. During the time required for the measurement, the model must allow this parameter to be considered unchanged; otherwise measurements cannot be carried out.

This fact is described by Postulate B: the true value of the measured quantity is constant.

Having selected a constant parameter of the model, one can proceed to measure the corresponding quantity. For a variable physical quantity it is necessary to identify or introduce some constant parameter and measure it. In the general case such a constant parameter is introduced by means of a functional. Examples of such constant parameters of time-varying signals, introduced by means of functionals, are the root-mean-square (rms) and mean-rectified values. This aspect is reflected in

Corollary B1: to measure a variable physical quantity, it is necessary to define a constant parameter of it - the measured quantity.
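As an illustration of Corollary B1, the sketch below reduces a time-varying signal to a constant parameter by means of the rms functional; the sinusoidal signal and the sample count are arbitrary.

import math

def rms(samples):
    # Root-mean-square: a functional mapping a varying signal
    # to a single constant measured quantity.
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# A hypothetical sinusoidal voltage, amplitude 1.0, sampled over one period.
n = 1000
signal = [math.sin(2.0 * math.pi * k / n) for k in range(n)]
print(rms(signal))   # ~0.707, i.e., amplitude / sqrt(2)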

When constructing a mathematical model of a measurement object, one inevitably has to idealize one or another of its properties.

A model can never describe all the properties of the object of measurement in full. It reflects, with a certain degree of approximation, those properties that are essential for solving the given measurement problem. The model is built before the measurement on the basis of a priori information about the object and taking into account the purpose of the measurement.

The measured quantity is defined as a parameter of the adopted model, and the value that would be obtained as the result of an absolutely accurate measurement is taken as the true value of this measured quantity. The inevitable idealization adopted when constructing the model of the measurement object determines an unavoidable discrepancy between the parameter of the model and the real property of the object, which is called the threshold discrepancy.

The fundamental nature of the concept of "threshold discrepancy" is established by Postulate C: there exists a discrepancy between the measured quantity and the investigated property of the object (the threshold discrepancy of the measured quantity).

The threshold discrepancy fundamentally limits the attainable measurement accuracy under the accepted definition of the measured physical quantity.

Changes and refinements of the measurement purpose, including those requiring an increase in measurement accuracy, lead to the need to change or refine the model of the measurement object and to redefine the concept of the measured quantity. The main reason for such a redefinition is that the threshold discrepancy of the previously adopted definition does not allow the measurement accuracy to be raised to the required level. The newly introduced measured parameter of the model can likewise be measured only with an error, which at best equals the error due to the threshold discrepancy. Since it is fundamentally impossible to build an absolutely adequate model of the measurement object, it is impossible to eliminate the threshold discrepancy between the measured physical quantity and the parameter of the model that describes it.

This implies an important Corollary C1: the true value of the measured quantity cannot be found.

A model can be built only if a priori information about the object of measurement is available. The more such information there is, the more adequate the model will be and the more accurate and correct the parameter describing the measured physical quantity. Therefore, increasing the a priori information decreases the threshold discrepancy.

This situation is reflected in Corollary C2: the achievable measurement accuracy is determined by the a priori information about the object of measurement.

It follows from this corollary that in the absence of a priori information measurement is fundamentally impossible. On the other hand, the maximum possible a priori information consists in a known estimate of the measured quantity whose accuracy equals the required one; in that case there is no need for measurement at all.
