INTRODUCTION TO CALIBRATION

Calibration is the process of comparing experimental outcomes against a set of known standard values.

Why is calibration of instruments necessary?

Calibration is necessary because every instrument accumulates error over time or through continuous use over a long period. If we do not calibrate our instruments at regular intervals, we may get errors during experiments, and also in industry, where different types of instruments are used to check the dimensions of finished products. This results in false readings in laboratories and can cause the production of defective final products, which leads to large losses and also damages the reputation of the company.

Types of calibration

There are generally two types of calibration systems:
1) Internal calibration
2) External calibration

Internal calibration: Internal calibration is the process in which an instrument is allowed to calibrate itself; no manual input is needed from the user. Different techniques can be used for this, depending on the instrument's price range and make.

External calibration: External calibration is done manually by the user of the instrument. To calibrate an instrument externally, one should have a set of known standards (for example, standard weights) that are approved by the government.
Example of external calibration: Suppose we have to calibrate a micrometer. For this we need a set of standards, which in this particular case are gauge blocks. With the micrometer, we measure the thickness of different gauge blocks whose thicknesses are already known. After performing two or three trials, we calculate the precision and accuracy achieved by comparing the experimental outcomes with the standard values. If the error is small, the instrument can continue to be used; if the error is large, the instrument needs to be replaced. Below is a typical example of what a calibration chart looks like. As the table shows, three trials were taken to calibrate the micrometer, and then precision and accuracy were calculated.
After that the error is calculated, and finally it is determined whether the device is OK or not OK.

Fanshawe Machine Ltd. Calibration Report
Date: JAN 21, 2018
Instrument: Starrett 0.00″ to 1.00″ Cool Micrometer, Serial #: 253
Published Accuracy: (+/- .001) or < 0.18%
Name: JR

Number | Standard | 1st Trial | 2nd Trial | 3rd Trial | Average | Precision | Accuracy (% Error) | OK "O" or NOK "X"
     1 |    0.000 |     0.003 |     0.000 |     0.000 |   0.001 |           |                    | O
     2 |    0.106 |     0.106 |     0.105 |     0.107 |   0.106 |     0.000 |              0.00% | O
     3 |    0.212 |     0.213 |     0.212 |     0.212 |   0.212 |    -0.001 |              0.16% | O
     4 |    0.318 |     0.317 |     0.318 |     0.319 |   0.318 |     0.000 |              0.00% | O
     5 |    0.424 |     0.425 |     0.423 |     0.425 |   0.424 |    -0.001 |              0.08% | O
     6 |    0.530 |     0.540 |     0.534 |     0.527 |   0.534 |    -0.007 |              0.69% | X
     7 |    0.636 |     0.639 |     0.638 |     0.635 |   0.637 |    -0.002 |              0.21% | X
     8 |    0.742 |     0.744 |     0.748 |     0.743 |   0.745 |    -0.004 |              0.40% | X
     9 |    0.848 |     0.849 |     0.851 |     0.853 |   0.851 |    -0.004 |              0.35% | X
    10 |    0.954 |     0.959 |     0.956 |     0.956 |   0.957 |    -0.003 |              0.31% | X
Average of % Error: 0.25% | X

Table 1.0: Example of a Calibration Report

The precision of a measurement is a measure of the reproducibility of a set of measurements. The degree of precision, or "reproducibility", is calculated by taking the difference (subtracting) between the accepted value and the experimental value, then dividing by the accepted value.
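The per-row check recorded in Table 1.0 can be sketched in Python. This is a minimal illustration, not code from any calibration tool or standard: the function name, the tolerance arguments, and the pass/fail rule (flag X when the deviation exceeds the published ±0.001 or the % error reaches 0.18%) are assumptions inferred from the table.

```python
# Minimal sketch of the calibration check in Table 1.0.
# check_calibration and its tolerance parameters are illustrative assumptions,
# not part of any real calibration tool or standard.

def check_calibration(standard, trials, tol=0.001, max_pct_error=0.18):
    """Average the trial readings and judge them against the standard value."""
    average = sum(trials) / len(trials)
    deviation = standard - average              # compare the "Precision" column
    # % error is undefined for a 0.000 standard (row 1), so report 0.0 there
    pct_error = abs(deviation) / standard * 100 if standard else 0.0
    flag = "O" if abs(deviation) <= tol and pct_error < max_pct_error else "X"
    return round(average, 3), round(deviation, 3), round(pct_error, 2), flag

# Row 6 of Table 1.0: standard 0.530, trials 0.540, 0.534, 0.527
print(check_calibration(0.530, [0.540, 0.534, 0.527]))  # → (0.534, -0.004, 0.69, 'X')
```

Run against the table's rows, this rule reproduces the % error and the O/X verdict columns; the published accuracy line gives the two thresholds.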
To determine whether a value is precise, find the average of your data, then subtract each measurement from it.

Precision = (accepted – experimental) / accepted

A plus-or-minus value indicates how precise a measurement is.

Accuracy and Precision

Figure 1.0: Combinations of Accuracy and Precision

This classic diagram illustrates the possible combinations of accuracy and precision. The precise measurements both exhibit tight grouping near some portion of the dartboard, while the accurate measurements are near the center. To determine whether a value is accurate, compare it to the accepted value. Because these values can be anything, a concept called percent error has been developed. Accuracy is a measure of the degree of closeness of a measured or calculated value to its actual value, and the percent error is the ratio of the error to the actual value, multiplied by 100.
To calculate % Error, find the difference (subtract) between the accepted value and the experimental value, then divide by the accepted value (don't forget to then multiply that by 100).

Accuracy or % Error = ((accepted – experimental) / accepted) * 100

Standard Deviation = (deviations* for all measurements added together) / number of measurements

Note*: Deviation = (average – actual)
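The formulas above can be collected into a short Python sketch. Two points to note: the function names are illustrative assumptions, and the deviation term is taken as an absolute value here, since summing the signed deviations (average – actual) would always cancel to roughly zero.

```python
# Illustrative implementations of the precision, % error, and mean-deviation
# formulas given above. Function names are assumptions, not a standard API.

def precision(accepted, experimental):
    # Precision = (accepted – experimental) / accepted
    return (accepted - experimental) / accepted

def percent_error(accepted, experimental):
    # Accuracy or % Error = ((accepted – experimental) / accepted) * 100
    return (accepted - experimental) / accepted * 100

def mean_deviation(measurements):
    # Deviation = (average – actual); absolute values are used here, since
    # the signed deviations would sum to (nearly) zero by construction.
    average = sum(measurements) / len(measurements)
    return sum(abs(average - m) for m in measurements) / len(measurements)

print(percent_error(100, 98))                 # → 2.0
print(mean_deviation([1.0, 2.0, 3.0, 2.0]))   # → 0.5
```

For example, a reading of 98 against an accepted value of 100 gives a 2.0% error, matching the hand calculation ((100 − 98) / 100 × 100).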