Hart, 3300 - Printable Version

+- HP Forums (https://archived.hpcalc.org/museumforum)
+-- Forum: HP Museum Forums (https://archived.hpcalc.org/museumforum/forum-1.html)
+--- Forum: Old HP Forum Archives (https://archived.hpcalc.org/museumforum/forum-2.html)
+--- Thread: Hart, 3300 (/thread-256069.html)
Hart, 3300 - Thomas Klemm - 11-17-2013

Just stumbled upon this blog (Direct Digital Synthesis (DDS) on the GA144) with a reference to:

Quote:
Never heard of that before, but it seems to be a kind of Taylor approximation of the sine function:

Quote:
Here's a program for the HP-42S:

00 { 31-Byte Prgm }

Make sure to have the following values in the registers:

00 1.57079

This table compares the values of this function with the exact result:

0 0.00000 0.00000 0.000e+00

The last column gives the relative error.
Then I wondered how that would compare to a true Taylor approximation (of the same order 7):

0 0.00000 0.00000 0.000e+00

It didn't surprise me to notice that it's more accurate for small values but loses at 90.
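For concreteness, here is a small Python sketch (my own, not the HP-42S program from the post) of the degree-7 Taylor comparison, evaluated in the normalized variable z = x/90 so that sin(x°) = sin(πz/2); the coefficients are simply ±(π/2)^(2k+1)/(2k+1)!:

```python
import math

def sin_taylor7(x_deg):
    """Degree-7 Taylor polynomial for sin(x), x in degrees,
    evaluated in the normalized variable z = x/90."""
    z = x_deg / 90.0
    z2 = z * z
    # Taylor coefficients of sin(pi*z/2) around z = 0
    a = math.pi / 2          #  1.5707963...
    b = -a**3 / 6            # -0.6459640...
    c = a**5 / 120           #  0.0796926...
    d = -a**7 / 5040         # -0.0046817...
    # Horner form of a*z + b*z^3 + c*z^5 + d*z^7
    return z * (a + z2 * (b + z2 * (c + z2 * d)))

for x in (0, 30, 60, 90):
    approx = sin_taylor7(x)
    exact = math.sin(math.radians(x))
    print(x, approx, exact, abs(approx - exact))
```

As the post observes, the truncated Taylor series is essentially exact for small angles but its error grows toward x = 90, where the omitted (π/2)⁹/9! term dominates.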
Kind regards
BTW: It appears to me that sin and cos got mixed up in the referenced article. For those interested, here is the Python program I used to create the tables:

#!/usr/bin/python

Re: Hart, 3300 - Dieter - 11-17-2013

At least in statistics the Hart approximations are well known; e.g. the one for the normal CDF is quite common.
Of course a dedicated polynomial can give better results than a Taylor series. Even the Hart approximation can be improved. I set up a near-minimax polynomial which led to the following result:

sin(x) ~ a·z + b·z³ + c·z⁵ + d·z⁷

For angles between 0 and 90 degrees the max. relative error is ~1,1·10⁻⁶. The maxima of the relative error occur near 0, 37, 67 and 85 degrees. x = 90 even returns exactly 1.
Addendum: if you prefer thinking of errors in terms of significant digits, you may use the following set of coefficients instead:

a = 1,5707949

In this case the error is less than one unit in the 6th significant digit.

Dieter
Edited: 17 Nov 2013, 12:45 p.m.
Re: Hart, 3300 - Thomas Klemm - 11-17-2013

Quote:
It appears Hart is still better for values > 30°:

x Hart Dieter sin(x) |h-s| ? |d-s|

The difference to the exact value was multiplied by a somewhat arbitrary value of 2²⁴:

def ulp(x):

Could the small differences in the coefficients be related to the fact that only 18-bit arithmetic was used in the original problem?
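The helper hinted at by `def ulp(x):` is not preserved in the archive; a plausible stand-in, assuming the arbitrary scale factor is indeed 2²⁴, might look like this:

```python
import math

def err_units(approx, exact, bits=24):
    """Express |approx - exact| in units of 2**-bits (here 2**-24),
    a rough way to talk about 'last bits' of error on a small
    fixed-precision machine."""
    return abs(approx - exact) * 2**bits

# e.g. scale the error of the degree-7 Taylor value at 90 degrees
taylor90 = sum((-1)**k * (math.pi / 2)**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(4))
print(err_units(taylor90, 1.0))
```

Multiplying by 2²⁴ turns a tiny absolute difference into a readable count of binary "units in the last place" at that scale.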
Cheers

Re: Hart, 3300 - Thomas Klemm - 11-17-2013

Quote:

x Hart Dieter sin(x) |h-s| ? |d-s|

Still not always better than the original.
Cheers Edited: 17 Nov 2013, 2:11 p.m.
Re: Hart, 3300 - Dieter - 11-17-2013

Quote:

Of course the proposed approximation is not everywhere better than the one by Hart. As mentioned, the goal was a certain maximum error in terms of significant digits. And that's the Hart approximation's weak point. Your table does not show this since it only shows up at small results below 0,1, i.e. x < 5,7 degrees. Here the Hart approximation can be off by 3 to 4 units in the 6th place while my version stays below 1 unit.
Note: "7,9D7" means 7,9 units in the 7th significant digit.

x sin(x) Dieter Hart err_D err_H

So, as promised, the error of the proposed approximation stays below 1 unit in the 6th significant digit, for any value in the given domain, be it 50, 5, 0,5 or 0,00005 degrees. The Hart approximation will not handle this case with the same accuracy. As always, you have to decide what you want to optimize: the largest absolute error (see below), the largest relative error (cf. my first approximation with a max. rel. error of 1,1·10⁻⁶), or something else.
If you want an approximation with a minimal absolute error in [0, 90], that's possible as well. Simply use the following coefficients:

a = 1,5707903

This keeps the absolute error below 7·10⁻⁷. However, this improved accuracy is mainly due to the use of two more digits in the coefficients. The Hart approximation is still as good as it gets with five decimals instead of seven. ;-)

Dieter
Edited: 17 Nov 2013, 3:48 p.m.
Re: Hart, 3300 - Thomas Klemm - 11-18-2013

I finally had a look at the book mentioned, from which I copied the index entry for the trigonometric functions:
* Error criterion: absolute

These are the coefficients:
With these more accurate values I got:
x Hart sin(x) |H - s|

If I understand you correctly, that should match the coefficients of your last post:
Quote:

Can you explain why your result doesn't match Hart's solution? Though I didn't check all values, it appears that the absolute error is well below 6·10⁻⁷.
Cheers
#!/usr/bin/python

Re: Hart, 3300 - Thomas Klemm - 11-18-2013

That's what I got with your coefficients:
x Dieter sin(x) |D - s|
def dieter(x):

Re: Hart, 3300 - Dieter - 11-18-2013

Quote:

That's simple: the approximations I posted are designed to return exactly 1 for the sine of 90 degrees. That's why the version that minimizes the largest absolute error has slightly larger error maxima. In this case the max. error is 6,8·10⁻⁷ as opposed to 5,9·10⁻⁷ if this restriction is dropped.

Quote:

It's 5,8915·10⁻⁷ to be exact. ;-) Take the base-10 log of this and you get the quoted 6,23-digit accuracy. As soon as the restriction to return exactly 1 for x = 90 degrees is dropped, the best approximation in terms of absolute error indeed has the coefficients you posted. Actually I get the same coefficients (ok, in two cases with slight variations in the last digit). However, in many cases I prefer approximations with a known max. relative error, so that even arguments close to zero return accurate results. That's what you get with the other two sets of coefficients I posted.

By the way: there is a better way to approximate the sine function. Simply reduce the interval to 45 degrees max. With the same type of polynomial this should result in an absolute error between 10⁻⁸ and 10⁻⁹. Values between 45 and 90 degrees are then handled with a simple transformation (which includes a square root).

Edit: in terms of absolute error something like 1,5·10⁻⁹ should be possible, or a bit more if the relative error is minimized.

Dieter
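The 45-degree range reduction Dieter describes can be sketched as follows; the `core` function stands in for any polynomial approximation valid on [0, 45] (here `math.sin` is used only as a placeholder), and the square-root transformation uses sin(x) = cos(90 − x) = √(1 − sin²(90 − x)):

```python
import math

def sin_deg(x, core=lambda t: math.sin(math.radians(t))):
    """Sine of x degrees for x in [0, 90], using a core
    approximation that only needs to be valid on [0, 45]."""
    if x <= 45:
        return core(x)
    # For x in (45, 90]: sin(x) = cos(90 - x) = sqrt(1 - sin(90 - x)^2),
    # and 90 - x falls back into [0, 45).
    s = core(90.0 - x)
    return math.sqrt(1.0 - s * s)

print(sin_deg(30), sin_deg(60))
```

Halving the interval shrinks the polynomial's error dramatically (the argument of the odd powers is at most half as large), at the cost of one square root for the upper half of the range.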
Edited: 18 Nov 2013, 7:58 p.m.
Re: Hart, 3300 - Dieter - 11-18-2013

Quote:

Right. As already mentioned in the previous post: please note that the approximation is exact at x = 90. That's why the largest absolute error here is 6,8·10⁻⁷. With unlimited precision it should be something like 6,7535·10⁻⁷.
Dieter
Re: Hart, 3300 - Thomas Klemm - 11-18-2013

Just had a look at the Wikipedia article "Minimax approximation algorithm". Did you use the "Remez algorithm"? Or "Chebyshev polynomials of the first kind"? It appears that these algorithms assume unlimited precision. Is it okay to just use the values of the coefficients to the precision provided by the machine? Or could the result possibly be improved for, say, an 18-bit machine?
Chuck Moore probably just cut off the values and was happy with the result: Quote:
Cheers, and thanks for taking the time to answer my questions

Re: Hart, 3300 - Dieter - 11-18-2013

I actually use an old copy of Excel and a, well, "manual implementation" of an algorithm that essentially matches the Remez method. Others may use Mathematica or Maple, which offer such algorithms out of the box. However, the manual way allows you to design an approximation that is tailored exactly to your needs.

Limited precision in the coefficients may lead to a more or less "under-optimized" result, i.e. the error does not oscillate nicely between an upper and lower bound, as it would in an exact minimax solution. Instead the error maxima may be somewhat larger here and somewhat smaller there. If the number of digits in the coefficients is limited, some tweaking of the final digits may improve the result (i.e. the error is distributed more uniformly). I assume this has also been done with the five-digit coefficients: here and there they are off by one digit compared to the exact result rounded to five decimals, but this way the error is more uniform and the max. error becomes slightly smaller, i.e. closer to the value possible with an unlimited-precision set of coefficients. They even add up to 1, so the original limited-precision approximation is exact at 90 degrees. ;-)

Dieter
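A crude version of this manual checking can be sketched in a few lines: scan the relative error of a candidate coefficient set over a grid and record the worst case. For a true minimax set the error maxima should have (almost) equal height; the Taylor coefficients below are used only as a placeholder, not as anyone's optimized set:

```python
import math

def max_rel_error(coeffs, n=2000):
    """Worst relative error of the odd degree-7 polynomial in z = x/90
    over (0, 90] degrees, scanned on an n-point grid."""
    a, b, c, d = coeffs
    worst = 0.0
    for i in range(1, n + 1):
        x = 90.0 * i / n
        z = x / 90.0
        p = z * (a + z*z*(b + z*z*(c + z*z*d)))
        s = math.sin(math.radians(x))
        worst = max(worst, abs(p - s) / s)
    return worst

# placeholder: plain Taylor coefficients of sin(pi*z/2)
taylor = (math.pi/2, -(math.pi/2)**3/6, (math.pi/2)**5/120, -(math.pi/2)**7/5040)
print(max_rel_error(taylor))
```

For the Taylor set the scan reports a worst relative error around 1,6·10⁻⁴, concentrated at x = 90, which is exactly the lopsided (non-equioscillating) behavior a minimax fit removes.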
Edited: 18 Nov 2013, 8:36 p.m.
Re: Hart, 3300 - Thomas Klemm - 11-18-2013

Quote:

Am I missing something?
>>> s = [
Quote:
Quote:

True:

>>> reduce(lambda x, y: x+y, map(lambda x: round(x, 5), s))

While:

>>> reduce(lambda x, y: x+y, s)
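The effect can be reproduced with made-up numbers (the actual coefficient list `s` from the post is not preserved in the archive): the values below are chosen so that their 5-decimal roundings sum to exactly 1 in decimal arithmetic, while the full-precision values do not.

```python
# Hypothetical coefficients, NOT the actual set from the post; chosen so
# that the 5-decimal roundings sum to exactly 1 in decimal arithmetic.
s = [1.570790312, -0.645961134, 0.079687543, -0.004518234]

rounded = [round(x, 5) for x in s]   # [1.57079, -0.64596, 0.07969, -0.00452]
print(sum(rounded))                  # 1 up to binary rounding noise
print(sum(s))                        # visibly different from 1
```

Since 1.57079 etc. are not exactly representable in binary, `sum(rounded)` may print something like 0.9999999999999999 rather than a clean 1.0, which is the "binary floating point is evil" effect Dieter jokes about below.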
Re: Hart, 3300 - Dieter - 11-19-2013

Quote:

I noticed this slight variation in the last (fifth) decimal when I compared the original coefficients with the ones I posted for an approximation with minimal absolute error. Since the five-digit approximation returned exactly 1 for 90 degrees, I assumed that this was intentional and did the same in my solution:

my solution rounded 5D Hart 5D

Here is what this slight change will do:
The black graph is the error with the exact approximation and no error at x=90,
The green graph, finally, is the best approximation in terms of absolute error without the restriction that the error vanishes at x=90. In other words: that's the full-precision Hart approximation.

Quote:

Whoah... a seventeen-digit sum of four values with just ten decimals each. Binary floating point arithmetic is evil indeed. ;-)

Dieter
Edited: 19 Nov 2013, 7:08 a.m.