Testing torque is pretty easy. You apply a weight that pulls the arm of the wrench at a specific distance from the centre of the axis. And then you read the scale manually to see what it says.
“What’s on the scale if I apply 1 kilogram?”
“What’s on the scale if I apply 2 kilograms?”
“What’s on the scale if I apply 3 kilograms?”
“What’s on the scale if I apply 4 kilograms?”
“What’s on the scale if I apply 5 kilograms?”
After a while, you get a table of numbers that you can use to establish a) how much the value on the scale deviates from the actual load, b) whether the tool is better or worse at certain parts of the range, and c) whether there has been an obvious change since the previous periodic calibration.
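That deviation table is just arithmetic on the expected torque (mass × local gravity × lever arm). Here's a minimal sketch — the lever arm, local gravity value, and scale readings are all made up for illustration, not from any real calibration record:

```python
# Hypothetical torque-wrench calibration check. G_LOCAL, ARM_M and the
# scale readings below are illustrative assumptions, not real data.

G_LOCAL = 9.8185   # assumed local gravitational acceleration, m/s^2
ARM_M = 0.5        # distance from the axis to where the weight pulls, metres

def expected_torque_nm(mass_kg: float) -> float:
    """Torque = force x lever arm, with force = mass x local gravity."""
    return mass_kg * G_LOCAL * ARM_M

# (applied mass in kg, value read off the wrench's scale in N*m) - made up
readings = [(1, 4.95), (2, 9.87), (3, 14.60), (4, 19.71), (5, 24.40)]

for mass, shown in readings:
    expected = expected_torque_nm(mass)
    deviation_pct = 100 * (shown - expected) / expected
    print(f"{mass} kg: expected {expected:.2f} N*m, "
          f"scale shows {shown:.2f} N*m ({deviation_pct:+.1f}%)")
```

Run over the five test points, this gives exactly the kind of table described above: expected versus indicated torque, and the deviation at each part of the range.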
It’s possible that the tool is fit-for-purpose for the actual use case even though it’s overall pretty crappy, but that’s beyond the scope of this explanation.
For that testing location to be fit for purpose, you need a) a pretty sturdy rack for the test itself, because you need to be reasonably certain that the setup adds as few unwanted extra force directions as possible (those would make the test useless), b) a verified digital level, c) a verified set of weights, d) knowledge of your local gravity (because that shit changes a hint on the decimals even within the same city), and e) a controlled climate (you want to be able to reproduce the same – within reason – circumstances again and again and again).
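Point d) is easy to underestimate, so here's a tiny sketch of why it matters. The two gravity values are made-up numbers of a realistic magnitude for two sites in the same region; the point is only that the same mass pulls with a measurably different force:

```python
# Why local gravity matters: the same 5 kg weight exerts a slightly
# different force at two different (assumed, made-up) local g values.
g_site_a = 9.8158  # m/s^2, assumed value for site A
g_site_b = 9.8195  # m/s^2, assumed value for site B
mass_kg = 5.0

force_a = mass_kg * g_site_a  # force in newtons at site A
force_b = mass_kg * g_site_b  # force in newtons at site B

print(f"{force_a:.4f} N vs {force_b:.4f} N "
      f"(difference {abs(force_b - force_a) * 1000:.1f} mN)")
```

A few millinewtons is nothing for a wrench rated ±1 N·m, but it starts to eat into the budget when you're chasing the last decimal.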
The weights are pretty essential in the whole thing, so you send them to an external institute annually or biennially or so. They, in essence, put them on a scale one by one to find out whether their weight is within an acceptable margin; for some users, it may be more than enough that their 1000 g weight is ±1 g. For others, the requirement may be ±0.01 g.
Their scale is *also* in a controlled climate, used only for the purpose of verifying the weight of… weights. Its reliability is verified with a *reference weight* every now and then (say, monthly?), and sometimes THAT is sent to another test institute for cooperative verification of both institutes. Occasionally, they borrow a national reference weight, or perhaps an international reference weight, so that they can compare against what other countries, on an international *treaty* level, have agreed to be a certain weight.
So that’s how it works. You test everything with reference loads. And occasionally, you let someone else verify the reference loads, effectively borrowing the credibility of THEIR reference for your own calibrations. They, in turn, borrow the credibility of someone else’s reference load.
Remember how I said that a weight is rated? E.g. 1000 g ± 0.1 g?
What that says, basically, is that since the weight is not guaranteed to be better than one ten-thousandth of its full weight, you can never offer a better rating on a calibration done with that weight than 0.01% of the tool’s full-scale reading.
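The "you can't rate better than your reference" rule is just the ratio of the weight's tolerance to its nominal value, using the 1000 g ± 0.1 g rating from the text:

```python
# The reference weight's rating, as given in the text above.
weight_g = 1000.0           # nominal mass, grams
weight_tolerance_g = 0.1    # guaranteed margin, grams

# One ten-thousandth of the nominal value: 0.0001, i.e. 0.01%.
relative_uncertainty = weight_tolerance_g / weight_g

print(f"reference uncertainty: {relative_uncertainty:.2%} of the load")

# So a tool calibrated against this weight can't honestly be rated any
# tighter than 0.01% of full scale, whatever its own spec sheet claims.
```

The reference's relative uncertainty is a floor on the calibration's relative uncertainty: the tool's rating inherits it.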
In reality, you also need to factor in the reliability of the digital scale, the reliability of the instrument that was used to establish the local gravity and so on and on and on. But that is kind of out of scope for the explanation.
But the point I was trying to make is that all of the references have an established reliability, inherited from the initial reference once the reference steps are taken into account.
If you can trace a weight to how its value was established, and within what error tolerance, you pretty much just have to decide whether your reference has a good enough tolerance for its purpose.