
                                                 INTRODUCTION TO CALIBRATION

Calibration is defined as the process of comparing outcomes, i.e. experimental values, with a set of known standard values.


Why is calibration of instruments necessary?

Calibration is necessary because every instrument accumulates some error as time passes or through continuous use over a long period. If we do not calibrate an instrument at regular intervals, we may get errors during experiments, and likewise in industries where different types of instruments are used to check the dimensions of finished products. This produces false results in laboratories and can cause the production of defective final products in industry, which in turn leads to large losses and damages the industry's reputation.


Types of calibration

There are generally two types of
calibration systems: –




Internal calibration: –

Internal calibration is defined as the process in which the instrument is allowed to calibrate itself; no manual input is needed from the user. Different techniques can be used for this type of calibration, depending on the instrument's price range and make.


External calibration: –


External calibration is the type of calibration that is done manually by the user of the instrument. To calibrate the instrument externally, one should have a set of known standards, such as standard weights approved by the government.



Example of external calibration:



Let us take the example that we have to calibrate a micrometer. For the calibration of a micrometer, we have to get a set of known standards, in this particular case gauge blocks. With the micrometer we measure the thickness of different gauge blocks, the thicknesses of which are already known. After performing two or three trials, we calculate the precision and accuracy that were achieved by comparing the experimental outcomes with the standard values. If the error is small, the instrument can continue to be used; if the error is large, the instrument needs to be replaced.
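The check described above can be sketched in a few lines of code. This is only an illustration: the gauge-block readings and the 0.18% tolerance (taken from the micrometer's published accuracy in the report below) are assumptions, not part of any real calibration procedure.

```python
# Minimal sketch of an external calibration check: compare micrometer
# readings of known gauge blocks against their accepted thicknesses.

def percent_error(accepted, experimental):
    """Accuracy as percent error: ((accepted - experimental) / accepted) * 100."""
    return (accepted - experimental) / accepted * 100

def check_micrometer(blocks, max_error_pct=0.18):
    """blocks: list of (known_thickness, measured_average) pairs.
    Returns True only if every reading is within the tolerance."""
    return all(abs(percent_error(known, measured)) <= max_error_pct
               for known, measured in blocks)

# Hypothetical readings: (known gauge-block thickness, micrometer average)
readings = [(0.106, 0.106), (0.212, 0.2123), (0.318, 0.318)]
print(check_micrometer(readings))  # True: all readings within 0.18 %
```

If any single block reads outside the tolerance, the whole check fails and the instrument should be taken out of service.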



Below is a typical example of what a calibration chart looks like. As we can see in the table, three trials were taken to calibrate the micrometer.

Precision and accuracy are calculated, then the error, and at last it is determined whether the device is OK or not OK.
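One row of such a calibration chart can be computed from its three trial readings. The sketch below uses the figures from the chart; the 0.18% pass/fail limit is an assumption based on the micrometer's published accuracy.

```python
# Compute one row of a calibration chart from three trial readings.

def chart_row(standard, trials, limit_pct=0.18):
    """Return (average, % error, status) for one chart row.
    standard: the accepted gauge-block value; trials: measured readings."""
    average = sum(trials) / len(trials)
    # Accuracy as percent error against the accepted (standard) value
    error_pct = abs((standard - average) / standard) * 100
    status = "O" if error_pct < limit_pct else "X"   # assumed pass/fail limit
    return round(average, 3), round(error_pct, 2), status

print(chart_row(0.106, [0.106, 0.105, 0.107]))  # (0.106, 0.0, 'O')
print(chart_row(0.530, [0.540, 0.534, 0.527]))  # (0.534, 0.69, 'X')
```

The two example rows reproduce rows 2 and 6 of the chart: the first passes, while the second exceeds the limit and is marked not OK.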











Fanshawe Machine Ltd. Calibration Report                            Date: JAN 21, 2018
Starrett 0.00″ to 1.00″ Cool Micrometer    Serial#: 253
Published Accuracy: (+/- .001) or < 0.18%    Name: JR

Number  Standard  1st Trial  2nd Trial  3rd Trial  Average  Precision  Accuracy (% Error)  OK "O" or NOK "X"
   1     0.000      0.003      0.000      0.000     0.001      --             --                  O
   2     0.106      0.106      0.105      0.107     0.106     0.000          0.00%                O
   3     0.212      0.213      0.212      0.212     0.212    -0.001          0.16%                O
   4     0.318      0.317      0.318      0.319     0.318     0.000          0.00%                O
   5     0.424      0.425      0.423      0.425     0.424    -0.001          0.08%                O
   6     0.530      0.540      0.534      0.527     0.534    -0.007          0.69%                X
   7     0.636      0.639      0.638      0.635     0.637    -0.002          0.21%                X
   8     0.742      0.744      0.748      0.743     0.745    -0.004          0.40%                X
   9     0.848      0.849      0.851      0.853     0.851    -0.004          0.35%                X
  10     0.954      0.959      0.956      0.956     0.957    -0.003          0.31%                X
                                         Average of % Error:                 0.25%                X

Table 1.0: Example of a Calibration Report

The precision of a measurement is a measure of the reproducibility of a set of measurements. To determine how precise a set of values is, find the average of the data, then subtract each measurement from it; the resulting plus-or-minus deviation says how precise a measurement is.

Accuracy and Precision

Figure 1.0: Combinations of Accuracy and Precision. This classic diagram illustrates what combinations of accuracy and precision exist. The precise measurements exhibit tight grouping near some portion of the dartboard. The accurate measurements are near the center.

To determine whether a value is accurate, compare it to the accepted value. As these values can be anything, a concept called percent error has been developed. Accuracy is a measure of the degree of closeness of a measured or calculated value to its actual value. The percent error is the ratio of the error to the actual value, multiplied by 100.
To calculate the % error, find the difference (subtract) between the accepted value and the experimental value, then divide by the accepted value (and do not forget to multiply by 100):

Accuracy (% Error) = ((accepted - experimental) / accepted) * 100

The standard deviation is the square root of the mean of the squared deviations:

Standard Deviation = sqrt( (sum of squared deviations) / (n - 1) )

Note: Deviation = (average - actual)
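These formulas can be checked directly. The sketch below uses the three trial readings from row 6 of Table 1.0 as illustrative data; the sample standard deviation is computed with Python's standard library.

```python
import statistics

def deviations(measurements):
    """Deviation of each measurement from the average: (average - actual)."""
    avg = sum(measurements) / len(measurements)
    return [avg - m for m in measurements]

def percent_error(accepted, experimental):
    """Accuracy as % error: ((accepted - experimental) / accepted) * 100."""
    return (accepted - experimental) / accepted * 100

trials = [0.540, 0.534, 0.527]   # three trial readings (row 6 of Table 1.0)
print(round(percent_error(0.530, sum(trials) / 3), 2))  # -0.69, matching the chart
print(round(statistics.stdev(trials), 4))               # sample standard deviation
```

Note that the chart reports the magnitude (0.69%); the signed value shows whether the instrument reads high or low.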