Accuracy and Precision Specifications for Built-In Functions



#13

Greetings all, I am wondering where, or whether, it is possible to find accuracy and precision data for HP calculators' internal functions, compared against some gold standard.

For example, when I calculate sin(1) = 8.41470984808E-1 on my HP42S in radians mode, how do I know that every single one of those 12 digits is correct? And correct compared to what?

This is why I ask. As largely an academic exercise, I have been trying to extend the rational function approximations of the cumulative normal distribution found in Abramowitz and Stegun (in turn based on approximations of the error function given in Cecil Hastings' Approximations for Digital Computers) in order to lower the absolute error by a couple more decimal places. I have used a Levenberg-Marquardt curve-fitting routine I wrote for Maple some years back to extend Hastings' approximations by a few more terms in the polynomial denominator, using as the "gold standard" in this case Maple's built-in erf() routine sampled at a few hundred points of the curve. I run the estimates with 16 or more significant digits, but round the resulting coefficients to 12 digits for use in my HP calculators. With a little transformation of the independent variable, I have a rational function like the Hastings ones that computes the cumulative normal probability distribution for positive arguments with an absolute error of at worst about 2.7e-9, and usually quite a bit better.
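For concreteness, a minimal sketch of that kind of fit in Python rather than Maple might look like the following, assuming SciPy's Levenberg-Marquardt driver and mpmath as the high-precision reference; the Hastings-style form and the starting values here are illustrative placeholders, not my actual coefficients:

    # Sketch only: fit a Hastings-style form for the upper-tail normal Q(z)
    # against high-precision reference samples, using Levenberg-Marquardt.
    import numpy as np
    import mpmath
    from scipy.optimize import least_squares

    mpmath.mp.dps = 30
    zs = np.linspace(0.0, 6.0, 500)
    # "Gold standard" samples: Q(z) = erfc(z/sqrt(2))/2
    ref = np.array([float(mpmath.erfc(float(z) / mpmath.sqrt(2)) / 2) for z in zs])

    def model(params, z):
        # Q(z) ~= phi(z) * (c1*t + c2*t^2 + ... + cn*t^n), with t = 1/(1 + p*z)
        p, c = params[0], params[1:]
        phi = np.exp(-z * z / 2.0) / np.sqrt(2.0 * np.pi)
        t = 1.0 / (1.0 + p * z)
        return phi * t * np.polyval(c[::-1], t)

    def residuals(params):
        return model(params, zs) - ref

    start = np.concatenate(([0.2], np.full(6, 0.3)))   # rough initial guess
    fit = least_squares(residuals, start, method='lm')
    print("max |error| over the grid:", np.abs(residuals(fit.x)).max())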

In my HP48GX emulators, I want to see how this measures up against that calculator's related built-in function--basically, I want to see how MyFunction(z) compares with what it purports to approximate, << -> z << 0 1 z NEG UTPN >> >>, when z >= 0. But for the comparison to make sense, I need to know how much faith I can put in the accuracy and precision of the HP48G's internal routine for UTPN. Is it accurate? Compared to what? When 12 digits are returned to me, are they all "correct" according to some gold standard, or are only the first 9 or 10 valid and the rest "noise"?

I don't know whether HP makes such technical information routinely available to users and programmers, but if it does, is it out there to be found?

Les


#14

The BCD20 library, used in Free42 Decimal, has implementations of trig and logarithmic functions that are based on a combination of argument reduction and Taylor series. Reducing arguments cleverly allows the series to converge rapidly, so only a few terms are needed.


This approach should be easy to adapt for your purposes; simply use arbitrary-precision math (like the Unix bc program) to implement the algorithms.
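For instance, a crude sketch with Python's decimal module in place of bc (the range reduction here is deliberately naive) shows the idea for the sine:

    # Sketch: arbitrary-precision sine via simple reduction plus Taylor series.
    from decimal import Decimal, getcontext

    getcontext().prec = 40          # working precision, with some guard digits

    # 2*pi carried to more digits than the working precision
    TWO_PI = Decimal('6.28318530717958647692528676655900576839433879875021164194989')

    def dsin(x):
        x = Decimal(x) % TWO_PI     # naive reduction; fine for modest |x|
        term = total = x            # first Taylor term is x itself
        n = 1
        while abs(term) > Decimal(10) ** -getcontext().prec:
            term *= -x * x / ((n + 1) * (n + 2))   # next term of the series
            total += term
            n += 2
        return total

    print(dsin('1'))   # compare with sin(1) = 0.84147098480789650665...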

- Thomas


#15

Quote:
The BCD20 library, used in Free42 Decimal, has implementations of trig and logarithmic functions that are based on a combination of argument reduction and Taylor series. Reducing arguments cleverly allows the series to converge rapidly, so only a few terms are needed.

Thomas --

This, to me, has always seemed a "common sense" approach to the calculation of sine and cosine. For an angle in degrees, MOD(angle, 360) should be performed first. Using MOD together with the trigonometric identities relating sine and cosine, those functions can then be computed for any angle from a radian-valued argument whose magnitude is no greater than pi/4 (~= 0.785398). This may entail negation, and/or taking the sine when the cosine was requested (or vice versa).

These Taylor series will converge quickly, thanks to the factorial term in the denominator and also because the absolute value of the input argument x is less than unity, so that x^n -> 0 as n -> infinity.
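In double precision the bookkeeping looks something like the sketch below (a real implementation would carry pi/2 to extra digits so the subtraction in the reduction loses nothing):

    import math

    def reduced_sin(x):
        # Fold any angle (radians) into [-pi/4, pi/4], then pick sin or cos
        # (with a possible sign change) according to the quadrant.
        k = round(x / (math.pi / 2))      # nearest multiple of pi/2
        r = x - k * (math.pi / 2)         # |r| <= pi/4, up to rounding
        k %= 4
        if k == 0:
            return math.sin(r)
        elif k == 1:
            return math.cos(r)
        elif k == 2:
            return -math.sin(r)
        else:
            return -math.cos(r)

    # reduced_sin(100.0) closely matches math.sin(100.0)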

Of course, tangent and the arc functions are a bit different.

An HP Journal article from June 1977 ("Personal Calculator Algorithms II: Trigonometric Functions") from the MoHPC CD/DVD set shows how they used to be done.

Another HP Journal article is "Algorithms and Accuracy in the HP-35", which references a 1959 paper introducing CORDIC. (All three articles about the HP-35 from the June 1972 HP Journal are on the MoHPC CD/DVD set, as well as on HP's website.)

http://www.hpl.hp.com/hpjournal/72jun/jun72a2.htm

The procedures described seem somewhat complicated. I don't know if other methods are now utilized.

Regards,

-- KS

Edited: 12 June 2006, 10:49 p.m.


#16

Hello, it is quite unusual to calculate trigonometric functions with series nowadays. I thought everyone had shifted to CORDIC?

Are series any faster or more precise than CORDIC?

CORDIC works very well in BCD, and (IMHO) needs much less math wizardry. So, why did you choose series? I would also be interested in your range-reduction algorithm; I believe that difficult precision problems usually surface at this very step.


#17

Quote:
Hello, it is quite unusual to calculate trigonometric functions with series nowadays. I thought everyone had shifted to CORDIC?

Are series any faster or more precise than CORDIC?

CORDIC works very well in BCD, and (IMHO) needs much less math wizardry. So, why did you choose series? I would also be interested in your range-reduction algorithm; I believe that difficult precision problems usually surface at this very step.

As far as I know, the reason CORDIC algorithms became common in calculators is that they don't require a lot of full-precision constants, while Taylor series implementations typically do. These days, on the other hand, code size is hardly ever an issue any more.
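To illustrate the point about constants, a toy rotation-mode CORDIC in binary floating point (nothing like actual calculator microcode) needs only a small table of arctangents plus the accumulated gain:

    import math

    # Toy rotation-mode CORDIC for sin/cos of an angle already in [-pi/2, pi/2].
    N = 50
    ATAN = [math.atan(2.0 ** -i) for i in range(N)]   # the only stored table
    K = 1.0
    for i in range(N):
        K /= math.sqrt(1.0 + 4.0 ** -i)               # accumulated gain

    def cordic_sin_cos(theta):
        x, y, z = 1.0, 0.0, theta
        for i in range(N):
            d = 1.0 if z >= 0.0 else -1.0
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * ATAN[i]
        return K * y, K * x     # (sin(theta), cos(theta))

    # cordic_sin_cos(0.5) matches (math.sin(0.5), math.cos(0.5)) to roughly
    # double precision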

I never actually "chose" to use Taylor series; when I decided to create a decimal version of Free42, I didn't want to write any floating-point code myself, so I simply used what was available -- and the best choice (in terms of functionality and licensing) just happened to be BCD20.

The reductions used in BCD20 go much further than merely assuring that |x| < 1. For example, the sine function reduces x to lie between 0 and pi/32; with this reduction, the terms of the Taylor series shrink extremely quickly. Similar tricks can be used to speed up the natural log, which can otherwise be a dog.
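To illustrate the general flavor of the log trick (in plain double precision -- this is not BCD20's actual code): pull out the binary exponent so the series only ever sees a mantissa near 1, and then use the fast-converging atanh form of the logarithm:

    import math

    LN2 = 0.6931471805599453              # ln(2), a stored constant

    def my_ln(x):
        # Assumes x > 0. ln(x) = e*ln(2) + ln(m), with x = m * 2**e, 0.5 <= m < 1,
        # and ln(m) = 2*atanh(u) where u = (m-1)/(m+1), so |u| <= 1/3.
        m, e = math.frexp(x)
        u = (m - 1.0) / (m + 1.0)
        term, total, k = u, 0.0, 0
        while abs(term) > 1e-18:
            total += term / (2 * k + 1)    # atanh series: u + u^3/3 + u^5/5 + ...
            term *= u * u
            k += 1
        return e * LN2 + 2.0 * total

    # my_ln(123.456) agrees with math.log(123.456) to about 15 digits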

If you're interested in the details, look at the bcdmath.cc file in the BCD20 package, or get the Free42 source package and look for free42/common/bcdmath.cc.

- Thomas

Edited: 13 June 2006, 6:59 a.m.


#18

Thomas, is the BCD20 library code freely available for inspection, or is it proprietary? I am not much of a programmer, but I can decipher C code well enough to take a guess at what is going on.

On a different tangent, I should note that my Levenberg-Marquardt estimates are least-squares fits, so there is a lot more "wobble" in the absolute error curve when the argument is small, but the estimate becomes quite precise in absolute terms as the argument gets bigger. This differs from the classic Cecil Hastings approximations, whose fitting criterion is the minimization of the maximum absolute error over the domain in question. This explains why all of the Hastings error curves have their peaks and valleys so nicely lined up.
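A quick way to see the difference is to make an ordinary least-squares polynomial fit to erf and look at the error curve: the peaks come out unequal in height, whereas a minimax fit of the same degree would force them all to the same level. A throwaway sketch (nothing to do with my actual coefficients):

    import numpy as np
    from math import erf

    x = np.linspace(0.0, 2.0, 401)
    y = np.array([erf(t) for t in x])
    coeffs = np.polyfit(x, y, 7)            # ordinary (L2) fit, degree 7
    err = np.polyval(coeffs, x) - y
    print("max |error|:", np.abs(err).max())
    print("error samples:", err[::100])     # the "wobble" is visible here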

I really wish that Hastings' classic 1955 text were more lucid. It was compiled from internally circulated handouts and filmstrips supporting related lectures--sort of old-school proto-PowerPoint. And like many PowerPoint handouts, they are pretty inscrutable for someone who wasn't at the original presentation ;)

Les


#19

Quote:
Thomas, is the BCD20 library code freely available for inspection, or is it proprietary? I am not much of a programmer, but I can decipher C code well enough to take a guess at what is going on.

Disregard this question. You provided a link to the voidware site in an earlier post.

Thanks!

#20

Les Wright said:

As largely an academic exercise, I have been trying to extend the rational function approximations of the cumulative normal distribution found in Abramowitz and Stegun (in turn based on approximations of the error function given in Cecil Hastings' Approximations for Digital Computers) in order to lower the absolute error by a couple more decimal places. I have used a Levenberg-Marquardt curve-fitting routine I wrote for Maple some years back to extend Hastings' approximations by a few more terms in the polynomial denominator, using as the "gold standard" in this case Maple's built-in erf() routine sampled at a few hundred points of the curve. I run the estimates with 16 or more significant digits, but round the resulting coefficients to 12 digits for use in my HP calculators. With a little transformation of the independent variable, I have a rational function like the Hastings ones that computes the cumulative normal probability distribution for positive arguments with an absolute error of at worst about 2.7e-9, and usually quite a bit better.

In my HP48GX emulators, I want to see how this measures up against that calculator's related built-in function--basically, I want to see how MyFunction(z) compares with what it purports to approximate, << -> z << 0 1 z NEG UTPN >> >>, when z >= 0. But for the comparison to make sense, I need to know how much faith I can put in the accuracy and precision of the HP48G's internal routine for UTPN. Is it accurate? Compared to what? When 12 digits are returned to me, are they all "correct" according to some gold standard, or are only the first 9 or 10 valid and the rest "noise"?

In the first paragraph I've quoted, you said you used Maple for your gold standard. Then in the second paragraph, you ask for "...some gold standard." What's wrong with using Maple for the "gold standard" in your current activities?

I checked the accuracy of the HP48G UTPN function (0 1 x on the stack, then press UTPN) for arguments from {0 1 -6} to {0 1 47}, that is, for every integer value of x from -6 to 47, and compared the results with the values given by Mathematica. The HP48G returned the correct value in every case except x=27, where the HP48G got 7.38948100688E-161, but the correct (12-digit) value is 7.38948100689E-161. The correct 16-digit value is 7.389481006885018E-161, and you can see that the HP failed to round up, giving an error of just over 0.5 ULP. This suggests to me that the HP may be able to give results accurate to about 1 ULP.
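For anyone who wants to reproduce this kind of spot check without Mathematica, the free mpmath Python library will do, since UTPN(0,1,x) is just the upper-tail probability Q(x) = erfc(x/sqrt(2))/2; for example:

    import mpmath

    mpmath.mp.dps = 25                # comfortably more than the HP's 12 digits
    x = 27
    q = mpmath.erfc(x / mpmath.sqrt(2)) / 2
    print(mpmath.nstr(q, 16))         # 16-digit reference value
    print(mpmath.nstr(q, 12))         # what a correctly rounded 12-digit result should be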

I think what I would do to test MyFunction(z) is to run MyFunction on the HP, compare against the result returned by the HP's built-in UTPN, and only if there was a discrepancy compare with Maple or Mathematica. If MyFunction returns the same result as UTPN, that doesn't guarantee that they are both correct, but I suspect that most of the time they would be. Since the built-in UTPN is so good, this would make a rapid first round of testing.


#21

Quote:
I checked the accuracy of the HP48G UTPN function (0 1 x on the stack, then press UTPN) for arguments from {0 1 -6} to {0 1 47}, that is, for every integer value of x from -6 to 47, and compared the results with the values given by Mathematica. The HP48G returned the correct value in every case except x=27, where the HP48G got 7.38948100688E-161, but the correct (12-digit) value is 7.38948100689E-161. The correct 16-digit value is 7.389481006885018E-161, and you can see that the HP failed to round up, giving an error of just over 0.5 ULP. This suggests to me that the HP may be able to give results accurate to about 1 ULP.

Probably, but an alternative explanation would be that the 16-digit value 7.389481006885018E-161 isn't correct. Has anyone checked these values by another method?

In any case, I'm glad to read that the 48 series UTPN function is that close; it should suffice for my purposes.

Thanks for checking.

Regards,
James


#22

I said:

I checked the accuracy of the HP48G UTPN function (0 1 x on the stack, then press UTPN) for arguments from {0 1 -6} to {0 1 47}, that is, for every integer value of x from -6 to 47, and compared the results with the values given by Mathematica. The HP48G returned the correct value in every case except x=27, where the HP48G got 7.38948100688E-161, but the correct (12-digit) value is 7.38948100689E-161. The correct 16-digit value is 7.389481006885018E-161, and you can see that the HP failed to round up, giving an error of just over 0.5 ULP. This suggests to me that the HP may be able to give results accurate to about 1 ULP.

Then James Prange said:

Probably, but an alternative explanation would be that the 16-digit value 7.389481006885018E-161 isn't correct. Has anyone checked these values by another method?

The usual reference books of tables typically don't have values to 16 digits, especially for higher functions. They may have a few values for small, simple arguments to many digits for reference, but not for all arguments elsewhere in the tables.

And what will you do if you need a really large number of digits, say as many as 100 or even 1,000? Nowadays, the answer to this question is to use one of the so-called mathematical assistant programs such as Mathematica, Maple, Derive, Gauss, Maxima, etc. These programs typically can compute almost any mathematical function you can imagine to hundreds, or thousands, of decimal places.

If you need some function to 500 decimal places, how can you verify its correctness? There are no printed books of tables with that many digits. The only real option is to compare the results of two (or more) of these mathematical assistant programs. Or, I suppose, you could use an arbitrary-precision package (which will probably also be one of these programs) and write an expression yourself for some function you care about, with many hundreds of terms. Then how will you be confident that your result is correct? Probably by comparing it with the result from Mathematica or Maple, etc.
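As an example of the kind of intercomparison I mean, here is UTPN(0,1,27) computed two independent ways entirely within the mpmath Python library -- once through erfc and once through Laplace's continued fraction for the Mills ratio -- at 60 digits, so the two routes can be checked against each other:

    import mpmath

    mpmath.mp.dps = 60
    z = mpmath.mpf(27)

    q_erfc = mpmath.erfc(z / mpmath.sqrt(2)) / 2

    def q_cf(z, depth=80):
        # Laplace: Q(z) = phi(z) / (z + 1/(z + 2/(z + 3/(z + ...)))), for z > 0
        cf = mpmath.mpf(0)
        for k in range(depth, 0, -1):
            cf = k / (z + cf)
        phi = mpmath.exp(-z * z / 2) / mpmath.sqrt(2 * mpmath.pi)
        return phi / (z + cf)

    print(mpmath.nstr(q_erfc, 50))
    print(mpmath.nstr(q_cf(z), 50))
    print("relative difference:", mpmath.nstr(abs(q_erfc - q_cf(z)) / q_erfc, 5))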

I have done these intercomparisons many times, and I have come to trust Mathematica, especially if all I need is, say, 16 digits.

But, since you ask, I computed UTPN(0,1,27) to 1000 digits in both Mathematica and Maple, and the results agree exactly.

I don't bother with Abramowitz & Stegun, or Jahnke & Emde, for numerical values any more; Mathematica is more convenient and accurate. For algorithms or formulas, though, you can't beat Abramowitz & Stegun.


#23

Thank you for this thoughtful reply. Implicit in my original post was the question "What is the gold standard?" I feel I have some sort of answer.

As a proud Canadian who almost went to the University of Waterloo to study engineering (I ended up pursuing something very different at a different school), I love Maple and still use a decade-old version I got cheaply as a student. Even this well-outdated version continues to amaze me with its power and sheer fun value. I understand that the newest versions of Maple and Mathematica are infinitely more impressive.

Les


#24

The "gold standard" should be mathematics and consistancy, not some other (possibly not golden-valued at the value checked) implementation. For example, sin(x) (for 0 <= x <= pi/2) should be increasing for all x (well, actually, it should be non-decreasing, I suppose.)

These identities should hold, preferably without an error even in the least significant digit, for example:

arcsin(sin(x))= x

arccos(cos(x))= x

sin(x)^2+cos(x)^2 = 1

sin(x)/cos(x) = tan(x)

Most math functions have a number of such identities and characteristics. Once you get close to providing good values, strive to make the math correct as well as the values.
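A script that hammers the routine with random arguments and records the worst violation of each identity is a cheap way to apply this; for example (with the standard library functions standing in for the routine under test):

    import math, random

    random.seed(1)
    worst_pyth = worst_ratio = 0.0
    for _ in range(100000):
        x = random.uniform(0.1, 1.4)          # stay away from the tangent's pole
        s, c, t = math.sin(x), math.cos(x), math.tan(x)
        worst_pyth = max(worst_pyth, abs(s * s + c * c - 1.0))
        worst_ratio = max(worst_ratio, abs(s / c - t) / t)
    print("worst |sin^2 + cos^2 - 1|     :", worst_pyth)
    print("worst relative |sin/cos - tan|:", worst_ratio)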

(Horror story -- I spent days trying to debug a transcendental routine that would not provide correct values past about the 23rd bit; the problem turned out to be in the standard I was trying to match!)

Edited: 15 June 2006, 12:03 p.m.

