Hi, Garth:
Garth posted:
"It's easy for math enthusiasts to get carried away with precision if they don't have a feel for what it means in the physical world. Although a few functions will make unacceptable error levels accumulate, most calculators have far, far more precision than most real-life situations have any use for."
I beg to dissent. Your arguments apply only to entering measurements of physical magnitudes as inputs to some computing process, and even then there are plenty of exceptions where high-precision measurements are both possible and required, especially in nuclear physics, astronomy, etc.
But your arguments do not apply at all (and are in fact misleading) to the computational processes applied to those inputs, where each and every digit counts, and counts a lot, if the resulting outputs are to be meaningful and relevant. There, when subjecting your physical measurements to some complex algorithm, you can't afford to limit the intermediate accuracy to that of the physical inputs, lest your results be pure garbage. On the contrary, you need much higher precision, which usually grows with the size of the problem, regardless of the initial accuracy of the physical inputs.
For instance, many real-world applications in architectural and electrical engineering require solving large systems of linear equations, so in professional life you'll frequently find yourself working with large matrices, which more often than not are numerically ill-conditioned. In these cases you'll need as much precision to process them as you can get, even if your inputs are measured to just one decimal, that is, if you want your results to be accurate to at least one decimal, like the inputs.
Perhaps this will require internally using 10 digits for medium matrices, or 20 digits for large ones. Your results will still be accurate to just one decimal, like the inputs, but unless you use that much higher internal precision throughout the whole solving process, you'll get no usable results at all. That's why high accuracy is needed, and that's why your arguments are "shortsighted", so to speak.
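A rough rule of thumb makes this concrete: you lose about log10 of the matrix's condition number in significant digits when solving the system, so the internal precision must exceed the desired output precision by at least that much. Here's a minimal Python sketch of that rule, using exact rational arithmetic so the condition numbers themselves carry no rounding error; the Hilbert matrix is just a textbook stand-in for an ill-conditioned engineering matrix, and the helper names (hilbert, inverse, cond_inf) are mine, not any library's:

```python
import math
from fractions import Fraction

def hilbert(n):
    """Exact n-by-n Hilbert matrix, a classic ill-conditioned example."""
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def inverse(A):
    """Exact matrix inverse via Gauss-Jordan elimination on Fractions."""
    n = len(A)
    M = [row[:] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]
        for i in range(n):
            if i != k and M[i][k]:
                f = M[i][k]
                M[i] = [v - f * w for v, w in zip(M[i], M[k])]
    return [row[n:] for row in M]

def cond_inf(A):
    """Infinity-norm condition number ||A|| * ||A^-1||, computed exactly."""
    norm = lambda M: max(sum(abs(v) for v in row) for row in M)
    return norm(A) * norm(inverse(A))

for n in range(2, 7):
    c = cond_inf(hilbert(n))
    # digits lost in a solve is roughly log10(condition number)
    print(n, float(c), "~ digits lost:", round(math.log10(float(c))))
```

Even at n = 6 the condition number is already in the millions, so a 10-digit machine has only a few trustworthy digits left, however accurate the inputs were.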
If in doubt, you may want to consider this example I've set up, where the solution of some engineering problem requires solving the following small 7x7 system of linear equations, whose coefficients are the result of some measurement, say Volts, with just one decimal of precision:
1.3 x1 + 7.2 x2 + 5.7 x3 + 9.4 x4 + 9.0 x5 + 9.2 x6 + 3.5 x7 = 45.3
4.0 x1 + 9.3 x2 + 9.0 x3 + 9.9 x4 + 0.1 x5 + 9.5 x6 + 6.6 x7 = 48.4
4.8 x1 + 9.1 x2 + 7.1 x3 + 4.8 x4 + 9.3 x5 + 3.2 x6 + 6.7 x7 = 45.0
0.7 x1 + 9.3 x2 + 2.9 x3 + 0.2 x4 + 2.4 x5 + 2.4 x6 + 0.7 x7 = 18.6
4.1 x1 + 8.4 x2 + 4.4 x3 + 4.0 x4 + 8.2 x5 + 2.7 x6 + 4.9 x7 = 36.7
0.3 x1 + 7.2 x2 + 0.6 x3 + 3.3 x4 + 9.7 x5 + 3.4 x6 + 0.4 x7 = 24.9
4.3 x1 + 8.2 x2 + 6.6 x3 + 4.3 x4 + 8.3 x5 + 2.9 x6 + 6.1 x7 = 40.7
which has the quite obvious, unique solution:
x1 = x2 = x3 = x4 = x5 = x6 = x7 = 1.0 (Volts)
Now, get your preferred HP calc or computer software and try to solve it using limited accuracy: say just one decimal, then four decimals, then eight decimals. See what results you get, how they compare with the actual, unique solution, and how they compare with one another as the working precision increases.
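For those without a suitable calculator at hand, the experiment can be simulated in a few lines of Python: the sketch below does Gaussian elimination with partial pivoting but rounds every intermediate result to d significant digits, mimicking a d-digit machine. The helpers round_sig and solve_chopped are my own illustrative names, and this is only one way to simulate limited precision, not the arithmetic of any particular calculator:

```python
from math import floor, log10

def round_sig(x, d):
    """Round x to d significant decimal digits (simulates a d-digit machine)."""
    if x == 0.0:
        return 0.0
    return round(x, d - 1 - floor(log10(abs(x))))

def solve_chopped(A, b, d):
    """Gaussian elimination with partial pivoting, rounding every
    intermediate result to d significant digits."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # partial pivoting
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = round_sig(M[i][k] / M[k][k], d)
            for j in range(k, n + 1):
                M[i][j] = round_sig(M[i][j] - round_sig(m * M[k][j], d), d)
    x = [0.0] * n  # back-substitution, also in chopped arithmetic
    for i in range(n - 1, -1, -1):
        s = M[i][n]
        for j in range(i + 1, n):
            s = round_sig(s - round_sig(M[i][j] * x[j], d), d)
        x[i] = round_sig(s / M[i][i], d)
    return x

A = [[1.3, 7.2, 5.7, 9.4, 9.0, 9.2, 3.5],
     [4.0, 9.3, 9.0, 9.9, 0.1, 9.5, 6.6],
     [4.8, 9.1, 7.1, 4.8, 9.3, 3.2, 6.7],
     [0.7, 9.3, 2.9, 0.2, 2.4, 2.4, 0.7],
     [4.1, 8.4, 4.4, 4.0, 8.2, 2.7, 4.9],
     [0.3, 7.2, 0.6, 3.3, 9.7, 3.4, 0.4],
     [4.3, 8.2, 6.6, 4.3, 8.3, 2.9, 6.1]]
b = [45.3, 48.4, 45.0, 18.6, 36.7, 24.9, 40.7]

for d in (4, 8, 16):
    x = solve_chopped(A, b, d)
    print(d, "digits:", [round(v, 3) for v in x])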
Paraphrasing your own opening statement, I'd say that it's easy for physical world 'enthusiasts' to underestimate precision if they don't have a feel for what it means in the computational world.
Best regards from V.