▼
Posts: 172
Threads: 13
Joined: Jul 2005
Most calculators support 8..12 decimal digits of precision. What is the highest precision supported by a desktop or handheld calculator? Some computers support quad precision (128 bit) floating point, which provides about 34 decimal digits for the mantissa.
What's the record for the lowest precision in a desktop or handheld calculator?
▼
Posts: 614
Threads: 66
Joined: Jul 2006
The Sinclair Scientific must come close to being the lowest precision. It displayed only in scientific notation (5-digit mantissa, 2-digit exponent) and was very inaccurate in many of its calculations.
But it was RPN (sort of) and did look good. I had one many years ago - put it together from a kit. Now that I think of it, I believe the one I put together was the programmable version.
Bill
▼
Posts: 727
Threads: 43
Joined: Jul 2005
The Sinclair Scientific actually had a 6-digit mantissa; it just didn't display that 6th digit, but it was there. Of course, since the transcendental functions weren't very accurate, it didn't matter much!
It was still better than a slide rule. I got one in 1976, the ready-to-use version, for about $30, IIRC.
Posts: 536
Threads: 56
Joined: Jul 2005
How about Babbage's difference engine? I think it had 31 decimal digits.
:-)
▼
Posts: 1,830
Threads: 113
Joined: Aug 2005
Babbage produced a handheld? Sensational!
8)
▼
Posts: 2,309
Threads: 116
Joined: Jun 2005
No, it was a desktop. Sorry.
▼
Posts: 1,830
Threads: 113
Joined: Aug 2005
I saw a difference engine model at IBM's Palisades hotel in New York. It was the size of a small desk, so "desktop" could be accurate, given a well-constructed desk. 8)
What I want to know is, did it do HP-IL?
Edited: 20 Dec 2005, 3:06 p.m.
▼
Posts: 2,309
Threads: 116
Joined: Jun 2005
No, it didn't have a general purpose interface. Just a dedicated printer, not entirely unlike the 82143A.
▼
Posts: 1,830
Threads: 113
Joined: Aug 2005
That sheds new light on HP's decision to not include a general purpose I/O interface in the 42S! If a printer alone was good enough for Babbage, why then, who at HP could argue with that?
One of my favorite science fiction stories of all time has Babbage perfecting the Analytical Engine in 1812 or so. By 1870, Britain rules the world, having sustained the South in the American Civil War, so that American power is divided and Yankee influence is held in check. France isn't out of the running for world power either, and both powers are engaged in a high-stakes technology race to see who can develop the fastest steam-powered mechanical computers. And an aged, down-and-out Lady Ada nearly destroys the French supercomputer with an accidental secret weapon that ...
The title is "The Difference Engine" and William Gibson is one co-author. I think they mention calculators in there somewhere. 8)
Posts: 1,755
Threads: 112
Joined: Jan 2005
Hi, John:
John asked:
"What is the highest precision supported by a desktop or handheld calculator?"
There are a number of SHARP models that do support double precision, i.e., up to 20 decimal digits for all computations, including transcendental functions and variable definition (DEFDBL) in programs.
For instance, both my SHARP PC-1475 and my SHARP PC-E500 do support this precision.
"What's the record for the lowest precision in a desktop or handheld calculator?"
In my own experience, Sinclair scientific models did have abysmal precision, comparable to slide rules if not worse.
Best regards from V.
Posts: 1,322
Threads: 115
Joined: Jul 2005
This will answer part of your question. It's a way of looking at it anyway. http://forensics.calcinfo.com/
This is an example of a bad IC. They blew pi by 0.00433. http://www.msdsite.com/photopost/showphoto.php?photo=290&cat=534
Some of those old ones were inaccurate, but remember that compared to a 5-inch slide rule, they were pretty good.
▼
Posts: 887
Threads: 9
Joined: Jul 2007
If they hadn't left out a "1" digit, they would have gotten it right. (Then the last digit, which was incorrect, would have dropped off the end.)
It's easy for math enthusiasts to get carried away with precision if they don't have a feel for what it means in the physical world. Although a few functions will make unacceptable error levels accumulate, most calculators have far, far more precision than most real-life situations have any use for. Audio quality much better than that of the finest cassettes can be had with only three digits to the samples. (999 is close to 1023 which is the maximum that 10 bits can represent.) A lot of digital signal processing is even done in 16-bit fixed-point arithmetic. There is no limit to what can be done with numbers, but their relevance to what they measure, control, etc. is another matter. It is difficult or impossible to measure or control pressure, weight, fluid flow, temperature, light intensity, position/displacement, field strength, etc. to precisions anywhere near what common calculators handle. When I need A/D or D/A converters for experiments or development on the workbench, 8-bit is normally enough with the appropriate scaling. Although I'm glad to have some guard digits, I usually use my HP-41 in ENG3 display mode.
Edited: 20 Dec 2005, 6:44 p.m.
▼
Posts: 1,755
Threads: 112
Joined: Jan 2005
Hi, Garth:
Garth posted:
"It's easy for math enthusiasts to get carried away with precision if they don't have a feel for what it means in the physical world. Although a few functions will make unacceptable error levels accumulate, most calculators have far, far more precision than most real-life situations have any use for."
I beg to dissent. Your arguments apply only to entering measurements of physical magnitudes as inputs to some computing process, and even then there are plenty of exceptions where high-precision measurements are both possible and required, especially in nuclear physics, astronomy, etc.
But your arguments do not apply at all (and are in fact misleading) to the computational processes applied to said inputs, where each and every digit counts, and counts a lot, if the resulting outputs are to be meaningful and relevant. There, when subjecting your physical measurements to some complex algorithm, you can't afford to limit the intermediate accuracy to that of the physical inputs, or your results will be pure garbage. On the contrary, you need much higher precision, which usually increases with the size of the problem, regardless of the initial accuracy of the physical inputs.
For instance, many architectural and electrical engineering real-world applications require solving large systems of linear equations, so in real professional life you'll frequently find yourself working with large matrices, which tend to be numerically ill-conditioned more often than not. In these cases you'll need as much precision to process them as you can get, even if your inputs are measured to just one decimal, that is, if you want your results to be accurate to at least one decimal, like the inputs.
Perhaps this will require internally using 10 digits for medium matrices, or 20 digits for large matrices. Your results will still be accurate to one decimal, like the inputs, but unless you use that much higher internal accuracy throughout the whole solving process, you'll get no usable results at all. That's why high accuracy is needed, and that's why your arguments are "shortsighted", so to speak.
If in doubt, you may want to consider this example I've set up, where the solution of some engineering problem requires solving the following small 7x7 system of linear equations, whose coefficients are the result of some measurement, say volts, with just one decimal of precision:
1.3 x1 + 7.2 x2 + 5.7 x3 + 9.4 x4 + 9.0 x5 + 9.2 x6 + 3.5 x7 = 45.3
4.0 x1 + 9.3 x2 + 9.0 x3 + 9.9 x4 + 0.1 x5 + 9.5 x6 + 6.6 x7 = 48.4
4.8 x1 + 9.1 x2 + 7.1 x3 + 4.8 x4 + 9.3 x5 + 3.2 x6 + 6.7 x7 = 45.0
0.7 x1 + 9.3 x2 + 2.9 x3 + 0.2 x4 + 2.4 x5 + 2.4 x6 + 0.7 x7 = 18.6
4.1 x1 + 8.4 x2 + 4.4 x3 + 4.0 x4 + 8.2 x5 + 2.7 x6 + 4.9 x7 = 36.7
0.3 x1 + 7.2 x2 + 0.6 x3 + 3.3 x4 + 9.7 x5 + 3.4 x6 + 0.4 x7 = 24.9
4.3 x1 + 8.2 x2 + 6.6 x3 + 4.3 x4 + 8.3 x5 + 2.9 x6 + 6.1 x7 = 40.7
which has the quite obvious, unique solution:
x1 = x2 = x3 = x4 = x5 = x6 = x7 = 1.0 (Volts)
Now, get your preferred HP calc or computer software and try to solve it using limited accuracy, say just one decimal, then four decimals, then eight decimals. See what results you get, how they compare with the actual, unique solution, and how they compare among themselves as the limited accuracy increases.
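If you'd rather let a machine do the drudgery, here is a rough C sketch of the experiment (my own quick hack, not anything from a real calculator): it solves the system above by Gaussian elimination with partial pivoting. Change the typedef from double (about 16 significant digits) to float (about 7) and compare how far the computed unknowns drift from the exact all-ones solution in each case:

#include <stdio.h>

/* Solve the 7x7 system above by Gaussian elimination with partial
   pivoting and print the computed unknowns, which should all be 1.
   Switch the typedef to float to see what limited precision does. */

typedef double real;            /* try: typedef float real; */
#define N 7

int main(void)
{
    real a[N][N] = {
        {1.3, 7.2, 5.7, 9.4, 9.0, 9.2, 3.5},
        {4.0, 9.3, 9.0, 9.9, 0.1, 9.5, 6.6},
        {4.8, 9.1, 7.1, 4.8, 9.3, 3.2, 6.7},
        {0.7, 9.3, 2.9, 0.2, 2.4, 2.4, 0.7},
        {4.1, 8.4, 4.4, 4.0, 8.2, 2.7, 4.9},
        {0.3, 7.2, 0.6, 3.3, 9.7, 3.4, 0.4},
        {4.3, 8.2, 6.6, 4.3, 8.3, 2.9, 6.1}
    };
    real b[N] = {45.3, 48.4, 45.0, 18.6, 36.7, 24.9, 40.7};
    real x[N];
    int i, j, k;

    for (k = 0; k < N; k++) {                  /* forward elimination */
        int p = k;                             /* partial pivoting    */
        for (i = k + 1; i < N; i++)
            if ((a[i][k] > 0 ? a[i][k] : -a[i][k]) >
                (a[p][k] > 0 ? a[p][k] : -a[p][k]))
                p = i;
        for (j = 0; j < N; j++) { real t = a[k][j]; a[k][j] = a[p][j]; a[p][j] = t; }
        { real t = b[k]; b[k] = b[p]; b[p] = t; }
        for (i = k + 1; i < N; i++) {
            real m = a[i][k] / a[k][k];
            for (j = k; j < N; j++) a[i][j] -= m * a[k][j];
            b[i] -= m * b[k];
        }
    }
    for (i = N - 1; i >= 0; i--) {             /* back substitution   */
        real s = b[i];
        for (j = i + 1; j < N; j++) s -= a[i][j] * x[j];
        x[i] = s / a[i][i];
    }
    for (i = 0; i < N; i++)
        printf("x%d = %.12f\n", i + 1, (double)x[i]);
    return 0;
}

Compile it once with each typedef and compare the two sets of x's with each other and with 1.0.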
Paraphrasing your own opening statement, I'd say that it's easy for physical world 'enthusiasts' to underestimate precision if they don't have a feel for what it means in the computational world.
Best regards from V.
▼
Posts: 2,448
Threads: 90
Joined: Jul 2005
In my field, Finite Element Analysis, this issue of low *input* precision but high *computational* precision is an outstanding example. In fact, FEA is essentially just a gargantuan matrix problem... and low computational precision can lead to absolutely wrong results. Of course, poor choices in the boundary conditions and boundary geometry also cause spectacular failures.
Any recursive or even step-wise function or procedure can lead to unacceptable loss of precision. Even a "simple" weight-moment computation can lead you astray if you do not pay attention to magnitudes vis-a-vis precision.
So low input precision is one thing--computational precision is another.
The example of the slide rule needs to be properly examined. There was a lot of good engineering carried out on slide rules. This does not mean that the men and women using the slide rules were happy with the low precision! They were astute and often had to find ways to preserve precision--to deal with the limitations of the computational device.
When digital computing became a reality for academics and corporate scientists--on mainframes in the 1960's--there was a lot of excitement: "wow, I can finally tackle that problem--I can get acceptable precision--let's see where it goes!"
Now that we all have computers on our desktops, it is easy to forget, or overlook, the amazing excitement and boost to imagination that the digital computer provided for our predecessors.
Posts: 887
Threads: 9
Joined: Jul 2007
I did say, "a few functions will make unacceptable error levels accumulate." An example I had in mind at the time is that for a bank making amortization calculations, ten digits is nowhere near enough, even if the interest rate is expressed with far fewer digits.
Quote: Your arguments apply only to entering measurements of physical magnitudes as inputs.
I did say "outputs" too. I know what you mean; but omitting the outputs-- whether controlling a milling machine's position, a temperature, a motor's torque, or selecting circuit values, etc.-- makes math an end in itself, which it is not. That's part of my point.
2K-point FFTs work fine on 7-bit input data on an 8-bit computer using only 16-bit (less than 5 digits) non-floating-point intermediate and final results. I've often wondered why the HP-71's FFT function went to the overkill of using only full precision when few of us did even 8K-point FFTs (and had to wait 45 minutes for it) and most people who used the function at all had nowhere near enough memory to do that, and the samples usually came from an 8-bit converter.
Quote: many architectural and electrical engineering real-world applications require solving large systems of linear equations so in real professional life you'll frequently find yourself working with large matrices which tend to be numerically ill-conditioned more often than not.
I think that's like saying, "Many people prefer..." when you mean thousands out of the 300,000,000 in the U.S.. Sure there's a significant, valid number, but it's a very small part of the overall set. As an electronics circuit designer, I use matrices so seldom that I have to get the books out every time. But the hardware part of our high-end aircraft communications designs could have been designed entirely with a slide rule if that's all I had to work with. I think I could count on the fingers of one knee the number of times I've needed all the precision the HP-71 offers.
Dave Bernazzani below had a good point about using scaled integers. Contrary to popular belief, there really is life outside the realm of floating-point, and a lot of very useful computing that includes things like trig & log functions. I use scaled-integer quite a bit on the workbench-- mostly 16-bit, often with 32-bit intermediate results. High-precision floating-point is not what I'm against. What I am against is giving the computer the huge overhead of handling everything this way when it's only needed for a small percentage of computing jobs. Although calculators because of their typical usage have a greater need of floating point and high precision, the fact that most of them force everything to be handled this way makes them take another hit in speed, an area they're weak in to begin with.
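As a concrete illustration of the scaled-integer idea, here is roughly what such a calculation looks like in C. The names and numbers are made up for the example, but the pattern of a 16-bit quantity, a 32-bit intermediate product, and a shift back down is the usual one:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int16_t adc = 517;          /* raw 10-bit A/D reading, 0..1023         */
    int16_t gain_q8 = 325;      /* gain of 325/256, about 1.27, in Q8 form */

    /* widen to 32 bits so the intermediate product can't overflow */
    int32_t wide = (int32_t)adc * gain_q8;
    int16_t scaled = (int16_t)((wide + 128) >> 8);   /* round, drop the Q8 */

    printf("scaled-integer result: %d\n", scaled);
    printf("floating-point check : %.3f\n", 517.0 * 325.0 / 256.0);
    return 0;
}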
Edited: 22 Dec 2005, 2:58 p.m.
Posts: 64
Threads: 10
Joined: Aug 2007
High precision in a handheld calculator is not a requirement in the field of architecture and structural engineering for buildings. Computers are used to run FEA software and to perform frame analysis. These things are not done with a calculator. For the overwhelming percentage of calculations, 3 digits after the decimal are more than enough. In the 10 years that I have been a structural engineer, I have never once needed to do a matrix, and none of the many other structural engineers that I have worked with have done one either. It is a common failing of new engineers fresh out of college that they get caught up in carrying way too many decimal places.
I am only debating what you have said about architecture. I do not doubt that other fields do require a high degree of computational precision. Architecture, and the related field of structural engineering, do not require this in a handheld calculator.
Posts: 163
Threads: 7
Joined: Jul 2007
Hello Valentin.
I'm sorry to say, this is not true at all.
The system you proposed (a variant of your 'Albillo Matrices'?) is simply badly conditioned. It will lose about 13 digits of precision in the process of computing a solution. This means that, unless your inputs carry 14 digits of precision, you can't expect a meaningful result. Since they have been measured to 2 digits only, your problem cannot be solved.
If you perturb your matrix elements by 1e-10 (a negligible error, right? and as far as our Volt measurements go, the same original problem), your result will become 100% different.
It is useless to perform calculations with higher precision than the precision of your input, period. If you do, you implicitly enhance the precision of your input.
The accuracy of the result of a numerical calculation depends on three things:
- the precision of the input
- the condition of the problem (condition is a measure of how badly the output changes as the result of small input changes)
- the stability of the algorithm (a stable algorithm keeps the errors in the same relative magnitude)
If your input precision is not high enough to overcome the loss of precision due to condition and stability, you get meaningless answers.
Best Regards,
Werner Huysegoms
Posts: 1,041
Threads: 15
Joined: Jan 2005
Well, I agree with Garth to the extent that I often see results with more significant digits than can be justified by the accuracy of the inputs.
But I also agree with Valentin that often more digits are required during the computational process for the results to have any validity.
Consider, for example, calculators that find the standard deviation by accumulating "running totals" of n, sum(x), and sum(x^2), then find the standard deviation by formulas like squareroot((sum(x^2) - sum(x)^2/n)/n) for the population and squareroot((sum(x^2) - sum(x)^2/n)/(n-1)) for the sample. It's all too easy to cause a round-off error in the sum(x^2) value in particular. Other common formulas are squareroot((n*sum(x^2) - sum(x)^2)/n^2) and squareroot((n*sum(x^2) - sum(x)^2)/(n*(n-1))), making a round-off error in n*sum(x^2) even more likely.
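A quick C sketch (a toy example of mine, not any particular calculator's firmware) shows how badly the running-totals formula can behave when the readings sit on a large offset, and how the two-pass method that subtracts the mean first recovers the right answer:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* three readings on a large offset; population sd is about 0.0816 */
    double x[3] = {100000000.1, 100000000.2, 100000000.3};
    int n = 3, i;
    double sum = 0.0, sumsq = 0.0, mean, ss = 0.0;

    for (i = 0; i < n; i++) {            /* running totals, as with Sigma+ */
        sum   += x[i];
        sumsq += x[i] * x[i];
    }
    /* one-pass formula: squareroot((n*sum(x^2) - sum(x)^2)/n^2) */
    printf("one-pass: %g\n", sqrt((n * sumsq - sum * sum) / (double)(n * n)));

    mean = sum / n;                      /* two-pass: subtract the mean first */
    for (i = 0; i < n; i++)
        ss += (x[i] - mean) * (x[i] - mean);
    printf("two-pass: %g\n", sqrt(ss / n));
    return 0;
}

On a machine with IEEE doubles the one-pass result is nonsense (the quantity under the square root may even come out negative), while the two-pass version is fine.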
Regards, James
Posts: 1,477
Threads: 71
Joined: Jan 2005
There were several fixed-decimal-point, 6-digit precision calculators. Some of the more popular models were the Commodore MM6, Novus Mathbox, and Corvus Checkmate (it only had + and -, but with continuous memory!). I don't know of any calculators with 5 or fewer digits, though.
My candidate for the worst-accuracy scientific is the Monroe 99. Its precision is 8 digits, but its accuracy is far less. It returns 6.58003 for ASIN(ACOS(ATAN(TAN(COS(SIN(9)))))), but worse, it "thinks" 8^8 = 16777200, probably because it has a value of 2.7182800 for e.
▼
Posts: 901
Threads: 113
Joined: Jun 2007
My nominations for the least capable calculators are the NSC Model 600 and the Novus Mathbox 650; those devices not only carried few digits but also handled only integers. I discussed that feature in more detail in Article No. 437, "Primitive RPN Calculators". At times the editors of the Journal of the Oughtred Society have proposed a calculating contest between slide rules and calculators. I responded that the only calculators I knew of which would not totally overwhelm the slide rules were those two devices.
▼
Posts: 901
Threads: 113
Joined: Jun 2007
The Miida 606 has a six digit display and limits input values to six digits. It does yield answers of more than six digits in some cases:
If you enter 987659 + 987659 = you get 197531, the six most significant digits of the result. If you then press C (clear) you get 8, the least significant digit. Only the 8 is used in subsequent calculations.
If you enter 98765 x 98765 = you get 975452, the six most significant digits of the result. If you then press C you get 5225, the four least significant digits. If you enter a problem whose result exceeds ten digits, then some of the more significant digits will be correct but some of the less significant digits may be incorrect.
No such luck with divides.
Posts: 11
Threads: 0
Joined: Jan 1970
As an aside, there is an important difference between precision and accuracy. In the world of desktop computers, the standard single and double precision IEEE floating point representation has made life easier and is very common, but at a cost of accuracy. I suspect most calculators use BCD or some other decimal internal representation, meaning that numbers such as 4, 0.4, 40, etc. can all be represented exactly. The standard 8-byte IEEE floating point representation on a computer (approx. 15 digits of precision) does not lend itself to such perfect representation of all numbers, since each value must be approximated by a sum of binary fractions.
With IEEE floats, every number is converted to a normalized value plus an exponent. For the mantissa there is a bit to represent 1/2, one for 1/4, one for 1/8, and so on; the more bits you have, the more accurately you can represent any given number with this approximation. But it's tricky. Integers such as 4, 40 or 400 come out exactly, but a decimal fraction like 0.4 cannot be represented exactly in binary floating point. The best you can do is something like 0.40000000000000002 (or some variation thereof); you can get _very_ close, but a finite series of binary fractions never quite gets there. For this reason you may experience some loss of precision due to the representation itself, and some chain calculations involving IEEE floats may produce unexpected results. (I haven't looked at the 16-byte floats to see how they are represented; maybe they are better, or maybe they just give more precision using the same scheme.)
Calculators are generally better in this respect, as they seem to have a more suitable internal representation of the numbers. IEEE, however, is extremely convenient, since most standard math libraries work with it and it is often optimized in hardware for fast results. There are some PC-based calculators and math packages that go out of their way to avoid IEEE floats, using either fixed-point math or BCD math (the GNU bc utility is one such program). Those take longer to code and require user-built trig libraries, but they are more accurate.
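For anyone who wants to see this directly, a couple of printf lines (just a quick sketch) show what a machine with IEEE doubles actually stores:

#include <stdio.h>

int main(void)
{
    /* integers are exact; the decimal fraction 0.4 is only approximated */
    printf("4.0 is stored as %.20f\n", 4.0);
    printf("0.4 is stored as %.20f\n", 0.4);
    return 0;
}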
▼
Posts: 727
Threads: 43
Joined: Jul 2005
In IEEE-754 double precision floating point, which is what most computers use nowadays, the mantissa has 52 bits -- 53 if you count the implied "1" bit at the start. This means you have a relative representational error of about 1e-16.
In the BCD representation used on all the Saturn-based HPs, the mantissa is 12 decimal digits, which leads to a relative representational error of 5e-12.
Both formats use 8 bytes, but the binary format has a relative error about 50,000 times smaller than the BCD format (though with an exponent of smaller range: -308..308 instead of -499..499).
From those numbers, I don't think you can call BCD "better"... It just happens to be able to represent certain numbers exactly that binary formats can't, but for numerical computation, that is a useless feature. If you want to keep round-off error in large matrix computations at a minimum, using BCD is just a waste of CPU time (and memory).
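Those two figures are easy to check with a few lines of C (a rough sketch; DBL_EPSILON is the spacing of doubles just above 1.0, so half of it is the worst-case relative rounding error):

#include <stdio.h>
#include <float.h>

int main(void)
{
    double binary_err = DBL_EPSILON / 2.0;   /* about 1.1e-16 for a 53-bit mantissa   */
    double bcd_err    = 5e-12;               /* half a unit in the 12th decimal digit */

    printf("IEEE double : %g\n", binary_err);
    printf("12-digit BCD: %g\n", bcd_err);
    printf("ratio       : %g\n", bcd_err / binary_err);   /* roughly the 50,000 above */
    return 0;
}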
▼
Posts: 11
Threads: 0
Joined: Jan 1970
The problem of IEEE manifests itself in some common ways. First, because some numbers can't be represented exactly, there is a comparison issue. That is, if you take a value such as 1.2 and divide it by 3 and try to compare that to 0.4, you won't get a match. In one case you get:
0.40000000000000002
In the other:
0.39999999999999997
Exact native compares fail. This is the main reason why in medical embedded software engineering we almost always avoid IEEE floating point in favor of fixed point with a scale factor.
The second problem is with simple chain calculations that should end up at a value close to a non-representable number but land a shade too high because of the representation issue. In this case, your chain calculation should (for example) give a value just less than 4 but instead gets represented by a value a shade over 4. Using FLOOR/CEIL type functions (often found on better computational devices) will then yield the wrong result.
Maybe calling BCD "better" was the wrong term. Certainly there are BCD representations that are better than the ones used on the HP calculators (again, bc uses arbitrary-precision BCD for highly accurate results). Maybe I should have said that both have limitations that should be understood before use.
In order to minimize the above problems, one has to play tricks with the numbers to put them on an equal footing (i.e. find ways to compare them within a bracketed range, or convert both of them to a format that can be compared within a certain limit of precision, etc.). Again, for most computations it doesn't matter, and with a bit of care you can minimize problems... but all programmers should be extremely cautious when using IEEE floats.
Here is a weak code-snippet for those interested:
#include <stdio.h>

int main(void)
{
    double f1, f2;

    f1 = 0.4;            /* nearest double to 0.4                          */
    f2 = 1.2 / 3.0;      /* mathematically 0.4, but rounds to a different double */

    printf("f1 = %.17f\n", f1);
    printf("f2 = %.17f\n", f2);

    if (f1 == f2)
    {
        printf("The values match.\n");
    }
    else
    {
        printf("The values do not match.\n");
    }
    return 0;
}
▼
Posts: 20
Threads: 0
Joined: Jan 1970
This is really a problem with floating point in general (and fixed + scale doesn't always help.)
You should never compare floating point values for equality, IEEE or not, unless you are explicitly looking for these problems.
To safely do the test a =? b where a and b are floats, use something like
abs ( abs(a) - abs(b) ) < bound
using whatever value is acceptable as the bound, 1.0e-10 being a starting point.
▼
Posts: 3,283
Threads: 104
Joined: Jul 2005
htom,
Quote:
To safely do the test a =? b where a and b are floats, use something like
abs ( abs(a) - abs(b) ) < bound
That's a little too much of abs(), because it renders -1 equal to +1.
abs( a - b ) < bound
does the trick better.
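For instance, wrapped up as a little helper (just a sketch, nothing standard):

#include <stdio.h>
#include <math.h>

static int nearly_equal(double a, double b, double bound)
{
    return fabs(a - b) < bound;
}

int main(void)
{
    /* the 1.2 / 3 example from above passes once a tolerance is allowed */
    printf("exact ==     : %d\n", 1.2 / 3.0 == 0.4);
    printf("nearly_equal : %d\n", nearly_equal(1.2 / 3.0, 0.4, 1.0e-10));
    return 0;
}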
Marcus
▼
Posts: 20
Threads: 0
Joined: Jan 1970
This is what comes of posting before thinking twice. You're correct.