Posts: 980
Threads: 239
Joined: Aug 2006
Hello all.
Forgive me if this question is a bit presumptuous, but in light of the discrepancies in numerical results from HP's recent rounds of scientific calculators, beginning with the 33s and 35s, please help me understand some things.
1. I'm certain Casio, Sharp and TI models are prone to the same problems of algorithmic and numerical error.
2. How did Spice, Voyager or Classics users (engineers, technicians, medical professionals and other scientists) cope with those glitches, especially considering that these lineages carried only 10-digit precision?
Just wondering
Edited: 24 Aug 2013, 7:56 p.m.
Posts: 887
Threads: 9
Joined: Jul 2007
Remember that in the days of slide rules, we only had 3-4 significant digits, and these were used to design the Empire State Building, the F-16, and get to the moon and back. Certain operations require a lot more, especially where error can accumulate over thousands (or millions) of iterations (something we did not have a propensity for doing on slide rules! :D ), but 3-4 digits is enough for much engineering work. I find 16-bit (about five digits), sometimes with double-precision (32-bit) intermediate results, is enough for my workbench work, although once in a while I would like a little more. Even a very significant portion of digital signal processing has been done in 16- and 24-bit scaled-integer math. I discuss these things in my article on fixed-point/scaled-integer efficiency and 16-bit lookup tables at http://wilsonminesco.com/16bitMathTables/index.html . The hex tables linked there, BTW, were generated with my HP-71, which ran for weeks (in BASIC) to produce them all. The programs are listed there.
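For anyone who hasn't worked in scaled-integer math: here is a minimal sketch of a Q15 fixed-point multiply in Python. The helper names and the Q15 scaling choice are mine, not from the article; a real DSP routine would also handle rounding and saturation.

```python
def to_q15(x: float) -> int:
    """Encode a value in [-1, 1) as a signed 16-bit integer with 15 fraction bits."""
    return int(round(x * 32768))

def q15_mul(a: int, b: int) -> int:
    """Multiply two Q15 values: 16x16 -> 32-bit product, shifted back to Q15."""
    return (a * b) >> 15

# 0.5 * 0.25 = 0.125, done entirely in integer arithmetic
p = q15_mul(to_q15(0.5), to_q15(0.25))
print(p, p / 32768)  # 4096 0.125
```

With about five decimal digits of resolution, this is exactly the "16-bit is enough" regime described above.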
Posts: 1,665
Threads: 142
Joined: Jan 2009
Surveying requires more than 3-4 digits of accuracy, as I'm sure DB can attest. This is why, in the days of slide rules, we needed books of log and trig tables. When I bought my HP 35, I was able to put away my CRC tables.
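For anyone who never used the books: a log table turns multiplication into addition. A small sketch of the 4-place procedure, with rounding standing in for the table lookup (the example values are my own):

```python
import math

# multiply 2.5 * 3.6 using 4-place common logs, as with a table book
la = round(math.log10(2.5), 4)   # look up log 2.5 -> 0.3979
lb = round(math.log10(3.6), 4)   # look up log 3.6 -> 0.5563
product = 10 ** (la + lb)        # antilog of the sum
print(product)                   # ~8.999, vs. the exact 9.0
```

The answer is good to about four figures, which is the point: the tables bought you one more digit than the slide rule, at the cost of much more labor.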
Posts: 15
Threads: 2
Joined: Aug 2012
Quote:
Remember that in the days of slide rules, we only had 3-4 significant digits, and these were used to design the Empire State Building, the F-16, and get to the moon and back.
Is this actually true? Log tables and mainframes also existed, and I have to wonder whether the lunar or F-16 programs, for example, really used no mainframe computing or log tables at all.
Posts: 248
Threads: 5
Joined: Feb 2008
Apollo missions made quite significant use of IBM mainframes:
IBM100 - The Apollo Missions.
Posts: 887
Threads: 9
Joined: Jul 2007
It's not to say that slide rules were used exclusively, but they got a lot of use in the low-level calculations. The Empire State Building was finished in 1931, though, before computers.
Posts: 776
Threads: 25
Joined: Jun 2007
As a practicing scientist (radio astronomer), I seldom needed more than a few digits of accuracy/precision. I never worried that my HPs were inadequate.
Except for NASA geodesy, where we use quadruple-precision FORTRAN to make calculations accurate (or at least precise!) to perhaps 16 digits. We are concerned about sub-millimeter precision over the entire Earth (some 6,370 km in radius). For instance, the plate-tectonic spreading motion across the Atlantic Ocean is known to an accuracy of about 0.01 mm/year. These calculations involve substantially large programs, not particularly amenable to calculators!
Posts: 66
Threads: 2
Joined: Aug 2007
Depends. The circumference of the Milky Way Galaxy is about 1.75e24 meters (120,000,000 light-year diameter), so slight errors may put you in the wrong parking place on the opposite edge unless you're careful in your cross-galaxy navigation. Remember to compensate for motions while you travel.
Earth or Solar System bound, you can probably get by with five decimal digits.
The problem really comes along when you compare two numbers and make decisions on the noise after those five digits. That's when you need the high accuracy calculations.
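One standard way to avoid deciding on that noise is to compare with an explicit tolerance instead of testing for exact equality. A Python sketch (the five-digit tolerance is just for illustration):

```python
import math

a, b = 1.000004, 1.0
print(a == b)                             # False: the noise beyond five digits differs
print(math.isclose(a, b, rel_tol=1e-5))   # True: equal to five significant digits
```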
Finance counts by the penny, and with budgets in the billions, fourteen or fifteen decimal digits (assuming the bankers are smart enough to use integer pennies) can suffice. So far.
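The integer-pennies point is easy to demonstrate: binary floating point cannot represent most cent values exactly, while integer cents are always exact. A toy sketch:

```python
# dollars as binary floats: a tiny nonzero residue survives
residue = 1.10 - 1.00 - 0.10
print(residue)           # a tiny nonzero number, not 0.0

# the same amounts as integer cents: exactly zero
cents = 110 - 100 - 10
print(cents)             # 0
```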
=== edit for error check. perhaps km for m error?
120,000,000 ly diameter / 2 = 60,000,000 ly radius
60,000,000 ly * pi = 188,495,559.215 ly circumference /free42/
188,495,559.215 ly * 365.25 = 68,848,003,003.4 light-days
68,848,003,003.4 light-days * 24 * 60 * 60 = 5.9484674595e15 seconds
5.9484674595e15 seconds * 299,792,458 m/s = 1.78330568102e24 meters
oops. wait. Diameter, not radius, so double that result:
3.56661136203e24 meters.
Solar System diameter ~9e12 meters
Finding the parking space is still a problem. You might miss the Solar System!
Edited: 25 Aug 2013, 1:11 a.m. after one or more responses were posted
Posts: 168
Threads: 10
Joined: Jul 2007
Unfortunately, with respect to the Milky Way, you're too high by a factor of 10^3. :)
I have experienced problems with comparing numbers like you mentioned, and part of the problem is floating-point arithmetic. I posted about this on MoHPC years ago and was roundly criticized for wanting 100.00 - 99.99 - 0.01 to actually equal EXACTLY zero instead of the floating-point result of 5.1156995306556E-15.
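For what it's worth, Python's decimal module reproduces the behaviour you wanted, since like the HPs it works in base 10 rather than base 2:

```python
from decimal import Decimal

# binary floating point: a tiny nonzero residue survives
print(100.00 - 99.99 - 0.01)                                   # ~5.1e-15, not zero

# decimal arithmetic: exactly zero, as on a BCD calculator
print(Decimal("100.00") - Decimal("99.99") - Decimal("0.01"))  # 0.00
```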
Edited: 24 Aug 2013, 11:47 p.m.
Posts: 887
Threads: 9
Joined: Jul 2007
(deleted after your edit)
Edited: 24 Aug 2013, 11:53 p.m.
Posts: 3,229
Threads: 42
Joined: Jul 2006
Our GPS-based navigation requires way more than five digits. In ECEF coordinates, meter accuracy has about six digits. We work to centimeter or millimeter levels, so nine digits without any guards.
Want to work out a distance between two points? Squaring doubles the number of digits required: eighteen now.
- Pauli
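The magnitude problem above is easy to see in single precision (6378137 m is the WGS-84 equatorial radius; numpy is used here only to get explicit 32-bit floats):

```python
import numpy as np

R = np.float32(6378137.0)              # an ECEF-scale coordinate, in metres
# one single-precision ulp at this magnitude is 0.5 m, so a millimetre vanishes:
print(R + np.float32(0.001) == R)      # True

# double precision still resolves it (one ulp here is ~1e-9 m):
print(6378137.0 + 0.001 == 6378137.0)  # False
```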
Posts: 1,477
Threads: 71
Joined: Jan 2005
Quote:
Finance counts by the penny, and with budgets in the billions, fourteen or fifteen decimal digits (assuming the bankers are smart enough to use integer pennys) can suffice. So far.
In real life, I write financial calculation programs, and at times it does bother me that the 12c, especially, has only 10 digits. This is often too few for a lot of the work I do, but it's close enough to help in checking the accuracy of my calculations.
Posts: 170
Threads: 7
Joined: Apr 2009
Quote:
Remember to compensate for motions while you travel.
I always find it safest to keep the restroom locked while travelling above light speed :)
Posts: 73
Threads: 2
Joined: Sep 2011
Matt and All,
I would imagine that "back in the day" people were more skilled at rescaling problems or otherwise shifting points of view to get the most out of the available precision.
We were looking at a problem doing quadratic curve fits with data that were "closely packed," so to speak. We tried the equations for the fit on a small set of points like (15.386, 19.004), (15.388, 19.001) and (15.390, 19.006). Doing this in single-precision (32-bit) IEEE floating point gave meaningless results, to the point that the parabola opened downward instead of upward! Double-precision IEEE float was better, but not good enough. We had to go to about 80 bits to get close.
But by shifting the points to (0.006, 0.004), (0.008, 0.001), (0.010, 0.006), we were able to get spot-on results with 32-bit float.
The problem is that the significant part of the curve is tiny compared to the magnitude of the numbers. Doing all the sums of powers needed for the fit [sum(x), sum(x*x), sum(x*y), sum(x*x*y), sum(y*y), ... sum(x*x*x*x)] just amplified the problem, making the overall magnitude bigger and the significant part tinier. By shifting the curve to where the math is "easier," doing the math, then shifting back the result, we were able to get the right answer with the available processor (an industrial controller).
The worse problem was the customer's engineers comparing our 80-bit math result to MS Excel (IEEE double) and not accepting it because it didn't agree with their "correct" result. We were able to prove that their "reference" result was incorrect and to validate our method, both graphically and with various scaled examples. (I hope we cured a little innumeracy, for all parties concerned, in the process!)
Dale
Posts: 980
Threads: 239
Joined: Aug 2006
I see your point. As I remember it, the Spice series manuals pointed to this same propensity in the statistics functions and suggested scaling values as well. I am not certain if it's related, but the nature of the function being integrated on the 34C and 15C affected how accurate the integral or root would be. Thus, the manuals suggested modifying the initial guess values when solving, or rewriting the integrand and adjusting the limits of integration, for 15C, 42S and 34C calculations.
And yes, your "interesting portion" reminds me of the section on integration, which speaks of functions that have spikes but are more relevant (hence, interesting) around other parts of the graph.
Edited: 24 Aug 2013, 11:17 p.m.
Posts: 735
Threads: 34
Joined: May 2007
Posts: 980
Threads: 239
Joined: Aug 2006
Quite an interesting and illuminating read! Thanks.
Posts: 4,587
Threads: 105
Joined: Jul 2005
The accuracy of any scientific or technical result is determined by its least accurate component and the functions used. Please take a look at the physical constants and their accuracies as known today (see e.g. pp. 149ff in the WP 34S manual, and footnote 76 in particular). You will get an idea of where more than some 5 significant digits are justified.
OTOH, in mathematical problems governed by repetitive or iterative calculations (e.g. some matrix calculations) you may need as much precision as you can get.
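A tiny illustration of how repetition eats digits, assuming plain accumulation in a loop (math.fsum, which tracks exact partial sums, is the stdlib fix):

```python
import math

vals = [0.1] * 10_000_000

naive = 0.0
for v in vals:              # ten million roundings accumulate
    naive += v

print(naive)                # misses 1,000,000 in the trailing digits
print(math.fsum(vals))      # recovers the correctly rounded total
```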
Problem solving still requires brains in front of the computer / calculator. Sorry - such is life.
d;)
Posts: 97
Threads: 9
Joined: Nov 2011
Hello Walter,
is there a newer version of the manual? I've found the footnote numbered 65 on page 134...?
Greetings
peacecalc
Posts: 4,587
Threads: 105
Joined: Jul 2005
Thanks for asking. I was quoting the printed edition published in February. It is mentioned on http://sourceforge.net/projects/wp34s/ and you can find it for sale here.
d:)
Posts: 1,089
Threads: 32
Joined: Dec 2005
My unelaborated guess would be to calculate the propagation of error (German: Gauss Fehlerfortpflanzung) within your formulas. Maybe you need a more precise calculator to do this.
Objections?
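For a product f = a*b, first-order Gaussian error propagation gives sigma_f^2 = (b*sigma_a)^2 + (a*sigma_b)^2. A sketch with made-up measurement values:

```python
import math

a, sigma_a = 4.0, 0.02    # hypothetical measurements with 1-sigma uncertainties
b, sigma_b = 25.0, 0.1

f = a * b
# the partial derivatives of f = a*b are b and a, so:
sigma_f = math.hypot(b * sigma_a, a * sigma_b)
print(f, sigma_f)         # 100.0 with an uncertainty of ~0.64
print(sigma_f / f)        # relative uncertainty, ~0.64 %
```

This also shows why a few extra guard digits in the calculator help: the propagated uncertainty itself only needs 1-2 significant digits, but the intermediate squares and square roots should not add noise of their own.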
Posts: 4,587
Threads: 105
Joined: Jul 2005
Quote:
My unelaborated guess would be to calculate the propagation of error (German: Gauss Fehlerfortpflanzung) within your formulas.
Exactly the topic of footnote 76 :)
Posts: 1,089
Threads: 32
Joined: Dec 2005
Apparently a useful manual ;).
Posts: 97
Threads: 1
Joined: Jun 2013
Matt
During my years of engineering practice, uncertainty was the critical factor for the required degree/level of accuracy. Anyone remember?
MEASURE with a micrometer,
MARK with a keel,
CUT with an axe!
SlideRule
