Hart, 3300



#9

Just stumbled upon this blog (Direct Digital Synthesis (DDS) on the GA144) with a reference to

Quote:
Computer Approximations by John F Hart, Wiley 1968; function 3300

Never heard of that before, but it seems to be a kind of Taylor approximation of the sine function:

Quote:
This approximation is
  • x(a + bx² + cx⁴ + dx⁶)
which is good to 17 bits between 0 and 90°.

Here's a program for the HP-42S:

00 { 31-Byte Prgm }
01 LBL "HART"
02 90
03 /
04 ENTER
05 X^2
06 ENTER
07 ENTER
08 RCL* 03
09 RCL+ 02
10 *
11 RCL+ 01
12 *
13 RCL+ 00
14 *
15 END
Make sure to have the following values in the registers:
00  1.57079
01 -0.64589
02  0.07943
03 -0.00433
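
In other words, with z = x/90 the program evaluates the polynomial in Horner form, sin(x) ≈ z·(a + z²·(b + z²·(c + z²·d))), taking a, b, c, d from registers 00 to 03.
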
This table compares the values of this function with the exact result:
 0  0.00000  0.00000    0.000e+00
 5  0.08716  0.08716   -3.889e-06
10  0.17365  0.17365   -3.488e-06
15  0.25882  0.25882   -2.874e-06
20  0.34202  0.34202   -2.122e-06
25  0.42262  0.42262   -1.326e-06
30  0.50000  0.50000   -5.853e-07
35  0.57358  0.57358    3.816e-09
40  0.64279  0.64279    3.667e-07
45  0.70711  0.70711    4.641e-07
50  0.76604  0.76604    3.054e-07
55  0.81915  0.81915   -4.213e-08
60  0.86603  0.86603   -4.531e-07
65  0.90631  0.90631   -7.571e-07
70  0.93969  0.93969   -7.760e-07
75  0.96593  0.96593   -3.963e-07
80  0.98481  0.98481    3.037e-07
85  0.99620  0.99619    8.426e-07
90  1.00000  1.00000    0.000e+00
The last column gives the relative error.

Then I wondered how that would compare to a true Taylor approximation (of the same order 7):

 0  0.00000  0.00000    0.000e+00
 5  0.08716  0.08716   -9.235e-15
10  0.17365  0.17365   -2.384e-12
15  0.25882  0.25882   -6.147e-11
20  0.34202  0.34202   -6.193e-10
25  0.42262  0.42262   -3.732e-09
30  0.50000  0.50000   -1.626e-08
35  0.57358  0.57358   -5.671e-08
40  0.64279  0.64279   -1.681e-07
45  0.70711  0.70711   -4.407e-07
50  0.76604  0.76604   -1.049e-06
55  0.81915  0.81915   -2.309e-06
60  0.86602  0.86603   -4.771e-06
65  0.90630  0.90631   -9.354e-06
70  0.93968  0.93969   -1.754e-05
75  0.96590  0.96593   -3.170e-05
80  0.98475  0.98481   -5.545e-05
85  0.99610  0.99619   -9.439e-05
90  0.99984  1.00000   -1.569e-04
It didn't surprise me that it's more accurate for small values but loses accuracy near 90°.

Kind regards

Thomas

BTW: It appears to me that sin and cos got mixed up in the referenced article.


For those interested in the Python program I used to create the tables:
#!/usr/bin/python

from math import sin, radians

def hart(x):
    # Hart 3300: odd polynomial in z = x/90, evaluated in Horner form
    x /= 90.
    z = x*x
    s = [1.57079, -.64589, 0.07943, -.00433]
    return x*(s[0] + z*(s[1] + z*(s[2] + z*s[3])))

def taylor(x):
    # 7th-order Taylor series of sin(x), x in radians
    z = x*x
    s = [1., -1./6, 1./120, -1./5040]
    return x*(s[0] + z*(s[1] + z*(s[2] + z*s[3])))

for x in range(0, 91, 5):
    y, z = hart(x), sin(radians(x))
    # y, z = taylor(radians(x)), sin(radians(x))
    t = (y - z)/z if z != 0 else y
    print "%2d %8.5f %8.5f %12.3e" % (x, y, z, t)


#10

At least in statistics the Hart approximations are well known; the one for the Normal CDF, for example, is quite common.

Of course a dedicated polynomial can give better results than a Taylor series. Even the Hart approximation can be improved. I set up a near-minimax polynomial which led to the following result:

  sin(x) ≈ a·z + b·z³ + c·z⁵ + d·z⁷
 
where
z = x/90
and
 
  a =  1.5707946
  b = -0.6459157
  c =  0.0794718
  d = -0.0043507

  x   approx.     sin(x)
-----------------------------
  0   0.0000000   0.0000000
  5   0.0871557   0.0871557
 10   0.1736480   0.1736482
 15   0.2588190   0.2588190
 20   0.3420202   0.3420201
 25   0.4226185   0.4226183
 30   0.5000005   0.5000000
 35   0.5735771   0.5735764
 40   0.6427883   0.6427876
 45   0.7071073   0.7071068
 50   0.7660447   0.7660444
 55   0.8191519   0.8191520
 60   0.8660248   0.8660254
 65   0.9063068   0.9063078
 70   0.9396917   0.9396926
 75   0.9659254   0.9659258
 80   0.9848082   0.9848078
 85   0.9961958   0.9961947
 90   1.0000000   1.0000000

For angles between 0 and 90 degrees the max. relative error is ~1.1×10⁻⁶. The maxima of the relative error occur near 0, 37, 67 and 85 degrees. x = 90 even returns exactly 1.

Addendum: if you prefer thinking of errors in terms of significant digits, you may use the following set of coefficients instead:

  a =  1.5707949
  b = -0.6459128
  c =  0.0794626
  d = -0.0043447
In this case the error is less than one unit in the 6th significant digit.

Dieter




#11

Quote:
Even the Hart approximation can be improved.

It appears Hart is still better for values > 30°:

 x   Hart        Dieter      sin(x)       |h-s| ? |d-s|
--------------------------------------------------------
 0   0.0000000   0.0000000   0.0000000  :  0 =  0
 5   0.0871554   0.0871557   0.0871557  :  6 >  1
10   0.1736476   0.1736480   0.1736482  : 10 >  2
15   0.2588183   0.2588190   0.2588190  : 12 >  2
20   0.3420194   0.3420202   0.3420201  : 12 >  1
25   0.4226177   0.4226185   0.4226183  :  9 >  4
30   0.4999997   0.5000005   0.5000000  :  5 <  8
35   0.5735764   0.5735771   0.5735764  :  0 < 10
40   0.6427878   0.6427883   0.6427876  :  4 < 11
45   0.7071071   0.7071073   0.7071068  :  6 <  9
50   0.7660447   0.7660447   0.7660444  :  4 =  4
55   0.8191520   0.8191519   0.8191520  :  1 <  3
60   0.8660250   0.8660248   0.8660254  :  7 < 11
65   0.9063071   0.9063068   0.9063078  : 12 < 16
70   0.9396919   0.9396917   0.9396926  : 12 < 15
75   0.9659254   0.9659254   0.9659258  :  6 <  7
80   0.9848081   0.9848082   0.9848078  :  5 <  8
85   0.9961955   0.9961958   0.9961947  : 14 < 18
90   1.0000000   1.0000000   1.0000000  :  0 =  0
The difference from the exact value was multiplied by a somewhat arbitrary factor of 2²⁴:
def ulp(x):
    return round(abs(x) * 2**24)

Could the small differences in the coefficients be related to the fact that only 18-bit arithmetic was used in the original problem?

Cheers

Thomas

#12

Quote:
Addendum: if you prefer thinking of errors in terms of significant digits, you may use the following set of coefficients instead:

  a =  1.5707949
  b = -0.6459128
  c =  0.0794626
  d = -0.0043447

In this case the error is less than one unit in the 6th significant digit.


 x   Hart        Dieter      sin(x)       |h-s| ? |d-s|
--------------------------------------------------------
 0   0.0000000   0.0000000   0.0000000  :  0 =  0
 5   0.0871554   0.0871557   0.0871557  :  6 >  1
10   0.1736476   0.1736481   0.1736482  : 10 >  2
15   0.2588183   0.2588190   0.2588190  : 12 >  0
20   0.3420194   0.3420203   0.3420201  : 12 >  2
25   0.4226177   0.4226186   0.4226183  :  9 >  6
30   0.4999997   0.5000006   0.5000000  :  5 < 10
35   0.5735764   0.5735773   0.5735764  :  0 < 14
40   0.6427878   0.6427885   0.6427876  :  4 < 16
45   0.7071071   0.7071076   0.7071068  :  6 < 14
50   0.7660447   0.7660450   0.7660444  :  4 <  9
55   0.8191520   0.8191521   0.8191520  :  1 =  1
60   0.8660250   0.8660250   0.8660254  :  7 <  8
65   0.9063071   0.9063069   0.9063078  : 12 < 14
70   0.9396919   0.9396917   0.9396926  : 12 < 15
75   0.9659254   0.9659253   0.9659258  :  6 <  8
80   0.9848081   0.9848080   0.9848078  :  5 =  5
85   0.9961955   0.9961956   0.9961947  : 14 < 15
90   1.0000000   1.0000000   1.0000000  :  0 =  0

Still not always better than the original.

Cheers

Thomas



#13

Quote:
Still not always better than the original.

Of course the proposed approximation is not everywhere better than the one by Hart. As mentioned, the goal was a certain maximum error in terms of significant digits. And that's the Hart approximation's weak point. Your table does not show this since it only shows up at small results below 0.1, i.e. x < 5.7 degrees. Here the Hart approximation can be off by 3 - 4 units in the 6th place while my version stays below 1 unit.

Note: "7.9D7" means 7.9 units in the 7th significant digit.

  x        sin(x)         Dieter         Hart           err_D    err_H
-----------------------------------------------------------------------
  0.00005  8.7266463E-7   8.7266383E-7   8.7266111E-7   -7.9D7   -3.5D6
  0.005    8.7266462E-5   8.7266383E-5   8.7266111E-5   -7.9D7   -3.5D6
  0.05     8.7266452E-4   8.7266372E-4   8.7266100E-4   -7.9D7   -3.5D6
  0.5      8.7265355E-3   8.7265276E-3   8.7265004E-3   -7.9D7   -3.5D6
  1        1.7452406E-2   1.7452391E-2   1.7452336E-2   -1.6D7   -7.0D7
  2        3.4899497E-2   3.4899466E-2   3.4899357E-2   -3.1D7   -1.4D6
  3        5.2335956E-2   5.2335911E-2   5.2335748E-2   -4.6D7   -2.1D6
  4        6.9756474E-2   6.9756415E-2   6.9756199E-2   -5.9D7   -2.7D6
  5        8.7155743E-2   8.7155672E-2   8.7155404E-2   -7.1D7   -3.4D6
  6        1.0452846E-1   1.0452838E-1   1.0452806E-1   -8.0D8   -4.0D7
So, as promised, the error of the proposed approximation stays below 1 unit in the 6th significant digit - for any value in the given domain, be it 50, 5, 0.5 or 0.00005 degrees. The Hart approximation will not handle this case with the same accuracy. As always, you have to decide what you want to optimize: the largest absolute error (see below), the largest relative error (cf. my first approximation with a max. rel. error of 1.1×10⁻⁶), or something else.
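
For anyone who wants to reproduce this comparison in Python, here is a minimal sketch in the style of the programs above. It assumes the table uses the addendum coefficients from post #10; the d_units helper is merely a convenience introduced here to mimic the D-notation:

#!/usr/bin/python

from math import sin, radians, log10, floor

def horner(x, s):
    # x*(s0 + s1*z + s2*z^2 + s3*z^3) with z = (x/90)^2
    x /= 90.
    z = x*x
    return x*(s[0] + z*(s[1] + z*(s[2] + z*s[3])))

hart_5d = [1.57079, -.64589, 0.07943, -.00433]           # Hart, rounded to 5 decimals
dieter7 = [1.5707949, -.6459128, 0.0794626, -.0043447]   # addendum set from post #10

def d_units(y, z, n=7):
    # signed error in units of the n-th significant digit of z;
    # -7.9 with n=7 corresponds to "-7.9D7", and -35.2 equals -3.5D6
    first = int(floor(log10(abs(z))))
    return (y - z) / 10**(first - n + 1)

for x in [0.00005, 0.005, 0.05, 0.5, 1, 2, 3, 4, 5, 6]:
    z = sin(radians(x))
    print "%8g %14.7e %8.1f %8.1f" % (
        x, z, d_units(horner(x, dieter7), z), d_units(horner(x, hart_5d), z))
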

If you want an approximation with a minimal absolute error in [0, 90] that's possible as well. Simply use the following coefficients:

  a =  1.5707903
  b = -0.6458861
  c =  0.0794185
  d = -0.0043227
This keeps the absolute error below 7×10⁻⁷. However, this improved accuracy is mainly due to the use of two more digits in the coefficients. The Hart approximation is still as good as it gets with five decimals instead of seven. ;-)

Dieter




#14

I finally had a look at the mentioned book and copied from it the index entry for the trigonometric functions (error criterion: absolute) as well as the coefficients of function 3300. [The scans of the book pages are not reproduced here; the coefficients appear in the Python program below.] With these more accurate values I got:

 x   Hart        sin(x)       |H - s|
--------------------------------------
 0   0.0000000   0.0000000    0.000e+00
 5   0.0871555   0.0871557    2.832e-07
10   0.1736477   0.1736482    4.972e-07
15   0.2588185   0.2588190    5.881e-07
20   0.3420196   0.3420201    5.303e-07
25   0.4226179   0.4226183    3.337e-07
30   0.5000000   0.5000000    4.471e-08
35   0.5735767   0.5735764    2.623e-07
40   0.6427881   0.6427876    4.997e-07
45   0.7071074   0.7071068    5.891e-07
50   0.7660449   0.7660444    4.864e-07
55   0.8191522   0.8191520    2.048e-07
60   0.8660252   0.8660254    1.717e-07
65   0.9063073   0.9063078    4.930e-07
70   0.9396920   0.9396926    5.799e-07
75   0.9659255   0.9659258    3.071e-07
80   0.9848080   0.9848078    2.502e-07
85   0.9961953   0.9961947    5.838e-07
90   0.9999994   1.0000000    5.891e-07

If I understand you correctly, that should match the coefficients of your last post:

Quote:
If you want an approximation with a minimal absolute error in [0, 90] that's possible as well. Simply use the following coefficients:
  a =  1.5707903
  b = -0.6458861
  c =  0.0794185
  d = -0.0043227
This keeps the absolute error below 7×10⁻⁷.

Can you explain why your result doesn't match Hart's solution? Though I didn't check all values, it appears that the absolute error is well below 6×10⁻⁷.

Cheers

Thomas


#!/usr/bin/python

from math import sin, radians

def hart(x):
    # Hart 3300 with the full ten-digit coefficients from the book
    x /= 90.
    z = x*x
    s = [
        +.1570791011e1,
        -.6458928493e0,
        +.794343442e-1,
        -.4333095e-2,
    ]
    return x*(s[0] + z*(s[1] + z*(s[2] + z*s[3])))

for x in range(0, 91, 5):
    y, z = hart(x), sin(radians(x))
    t = abs(y - z)
    print "%2d %10.7f %10.7f %12.3e" % (x, y, z, t)


#15

Quote:
Can you explain why your result doesn't match Hart's solution?

That's simple: the approximations I posted are designed to return exactly 1 for the sine of 90 degrees. That's why the version that minimizes the largest absolute error has slightly larger error maxima. In this case the max. error is 6.8×10⁻⁷ as opposed to 5.9×10⁻⁷ if this restriction is dropped.
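
(Since z = 1 at x = 90 degrees, this constraint simply means the coefficients must satisfy a + b + c + d = 1 exactly; all three of the seven-digit sets posted above do.)
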
Quote:
Though I didn't check all values, it appears that the absolute error is well below 6×10⁻⁷.

It's 5.8915×10⁻⁷ to be exact. ;-) Take the base-10 log of this and you get the quoted 6.23-digit accuracy.

As soon as the restriction to return exactly 1 for x = 90 degrees is dropped, the best approximation in terms of absolute error indeed has the coefficients you posted. Actually I get the same coefficients (ok, in two cases with slight variations in the last digit).

However, in many cases I prefer approximations with a known max. relative error, so that even arguments close to zero return accurate results. That's what you get with the other two sets of coefficients I posted.

By the way: there is a better way to approximate the sine function. Simply reduce the interval to 45 degrees max. With the same type of polynomial this should result in an absolute error between 10⁻⁸ and 10⁻⁹. Values between 45 and 90 degrees are then handled with a simple transformation (which includes a square root).

Edit: in terms of absolute error something like 1.5×10⁻⁹ should be possible, or a bit more if the relative error is minimized.
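
The transformation is not spelled out here; one plausible reading (an assumption, not necessarily the intended one) uses sin(x) = cos(90° - x) = √(1 - sin²(90° - x)) on the upper half of the quadrant:

from math import sqrt

def sin_reduced(x, poly):
    # poly(y) is assumed to approximate sin(y) for y in [0, 45] degrees only.
    # For 45 < x <= 90 use sin(x) = cos(90 - x) = sqrt(1 - sin(90 - x)^2);
    # the cosine is non-negative there, so the square root is safe.
    if x > 45.:
        s = poly(90. - x)
        return sqrt(1. - s*s)
    return poly(x)
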

Dieter




#16

Just had a look at the Wikipedia article "Minimax approximation algorithm". Did you use the "Remez algorithm"? Or "Chebyshev polynomials of the first kind"?

It appears that these algorithms assume unlimited precision. Is it okay to just use the values of the coefficients to the precision provided by the machine? Or could the result possibly be improved for, say, an 18-bit machine?
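
One classic shortcut to a near-minimax fit, by the way, is plain interpolation at Chebyshev nodes; no full Remez exchange needed. A minimal sketch (an illustration only, not necessarily the method used for the coefficients above), fitting sin(90°·z) ≈ z·(a + b·w + c·w² + d·w³) with w = z²:

#!/usr/bin/python

from math import sin, radians, cos, pi, sqrt
import numpy as np

# Chebyshev nodes of [0, 1] for the variable w = z^2
nodes = [(cos((2*k + 1)*pi/8) + 1)/2 for k in range(4)]

# interpolate g(w) = sin(90*sqrt(w))/sqrt(w), so that z*g(z^2) ~ sin(90 z)
A = np.array([[w**j for j in range(4)] for w in nodes])   # 4x4 Vandermonde matrix
g = np.array([sin(radians(90*sqrt(w)))/sqrt(w) for w in nodes])

a, b, c, d = np.linalg.solve(A, g)
print "a = %.7f  b = %.7f  c = %.7f  d = %.7f" % (a, b, c, d)
# the coefficients come out close to the Hart/Dieter values quoted above
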

Chuck Moore probably just cut off the values and was happy with the result:

Quote:
This provides 5-digit accuracy.

Cheers and thanks for taking the time to answer my questions

Thomas


#17

I actually use an old copy of Excel and a, well, "manual implementation" of an algorithm that essentially matches the Remez method. Others may use Mathematica or Maple, which offer such algorithms out of the box. However, the manual way allows you to design an approximation tailored exactly to your needs.

Limited precision in the coefficients may lead to a more or less "under-optimized" result, i.e. the error does not oscillate nicely between an upper and lower bound as it would in an exact minimax solution. Instead the error maxima may be somewhat larger here and somewhat smaller there.

If the number of digits in the coefficients is limited, some tweaking of the final digits may improve the result (i.e. the error is distributed more uniformly). I assume this has also been done with the five-digit coefficients: here and there they are off by one digit compared to the exact result rounded to five decimals, but this way the error is more uniform and the max. error becomes slightly smaller, i.e. closer to the value possible with an unlimited-precision set of coefficients. They even add up to 1, so the original limited-precision approximation is exact at 90 degrees. ;-)
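
A brute-force illustration of this last-digit tweaking (a sketch only, not the Excel procedure described above; the 5D base values are the exact solution rounded to five decimals, as tabulated in post #19 below):

#!/usr/bin/python

from math import sin, radians
from itertools import product

def maxerr(s):
    # largest absolute error of the polynomial on [0, 90], 0.1-degree grid
    worst = 0.
    for i in range(901):
        x = i / 10.
        w = x / 90.
        z = w * w
        y = w * (s[0] + z*(s[1] + z*(s[2] + z*s[3])))
        worst = max(worst, abs(y - sin(radians(x))))
    return worst

base = [1.57079, -0.64589, 0.07942, -0.00432]   # exact solution rounded to 5D
best = min((tuple(c + d*1e-5 for c, d in zip(base, delta))
            for delta in product((-1, 0, 1), repeat=4)),
           key=maxerr)
print best, maxerr(best)
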

Dieter




#18

Quote:
I assume this has also been done with the five-digit coefficients: here and there they are off by one digit compared to the exact result rounded to five decimals

Am I missing something?

>>> s = [
... +.1570791011e1,
... -.6458928493e0,
... +.794343442e-1,
... -.4333095e-2
... ]
>>> for x in s:
... print "%8.5f" % x
...
1.57079
-0.64589
0.07943
-0.00433

Quote:
Make sure to have the following values in the registers:
00  1.57079
01 -0.64589
02  0.07943
03 -0.00433


Quote:
They even add up to 1

True:
>>> reduce(lambda x, y: x+y, map(lambda x: round(x, 5), s))
1.0
While:
>>> reduce(lambda x, y: x+y, s)
0.99999941090000011
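
The reduce calls perform the same left-to-right additions as the built-in sum, which makes the same point more compactly:
>>> sum(round(x, 5) for x in s)
1.0
>>> sum(s)
0.99999941090000011
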


Cheers

Thomas


#19

Quote:
Am I missing something?

I noticed this slight variation in the last (fifth) decimal when I compared the original coefficients with the ones I posted for an approximation with minimal absolute error. Since the five-digit approximation returned exactly 1 for 90 degrees, I assumed that this was intentional and did so with my solution as well:
     my solution    rounded 5D    Hart 5D
-------------------------------------------
a =  1.5707903      1.57079       1.57079
b = -0.6458861     -0.64589      -0.64589
c =  0.0794185      0.07942       0.07943
d = -0.0043227     -0.00432      -0.00433
Here is what this slight change will do (the error plot is not reproduced here):

  • The black graph is the error of the exact approximation, with no error at x = 90.
  • The blue one is the same after rounding the coefficients to five decimals.
  • The much better red line is the error after a slight tweak in the last digit of c and d (...3 instead of ...2).
  • The green graph, finally, is the best approximation in terms of absolute error without the restriction that the error vanishes at x = 90. In other words: that's the full-precision Hart approximation.

Quote:
>>> reduce(lambda x, y: x+y, s)
0.99999941090000011

Whoah... a seventeen-digit sum of four values with just ten decimals each. Binary floating point arithmetic is evil indeed. ;-)
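
(For anyone puzzled: none of the four decimal coefficients is exactly representable as a binary float, so each one carries a tiny representation error; the seventeen-digit repr merely exposes their accumulated difference from the exact decimal sum 0.9999994109.)
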

Dieter



#20

That's what I got with your coefficients:

 x   Dieter      sin(x)       |D - s|
--------------------------------------
 0   0.0000000   0.0000000    0.000e+00
 5   0.0871554   0.0871557    3.216e-07
10   0.1736476   0.1736482    5.672e-07
15   0.2588184   0.2588190    6.774e-07
20   0.3420195   0.3420201    6.225e-07
25   0.4226179   0.4226183    4.115e-07
30   0.4999999   0.5000000    9.218e-08
35   0.5735767   0.5735764    2.558e-07
40   0.6427881   0.6427876    5.370e-07
45   0.7071074   0.7071068    6.633e-07
50   0.7660450   0.7660444    5.800e-07
55   0.8191523   0.8191520    2.911e-07
60   0.8660253   0.8660254    1.239e-07
65   0.9063073   0.9063078    5.118e-07
70   0.9396919   0.9396926    6.772e-07
75   0.9659254   0.9659258    4.601e-07
80   0.9848079   0.9848078    1.239e-07
85   0.9961954   0.9961947    6.597e-07
90   1.0000000   1.0000000    0.000e+00


def dieter(x):
    # constrained minimal-absolute-error coefficients from post #13
    x /= 90.
    z = x*x
    s = [
        1.5707903,
        -0.6458861,
        0.0794185,
        -0.0043227,
    ]
    return x*(s[0] + z*(s[1] + z*(s[2] + z*s[3])))

#21

Quote:
That's what I got with your coefficients:

Right. As already mentioned in the previous post: please note that the approximation is exact at x = 90. That's why the largest absolute error here is 6.8×10⁻⁷. With unlimited precision it should be something like 6.7535×10⁻⁷.
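
For reference, that figure is easy to check with a quick scan (a sketch using the dieter() coefficients from the previous post, on a 0.01-degree grid):

from math import sin, radians

def dieter(x):
    # coefficients from post #13: minimal absolute error, exact at x = 90
    x /= 90.
    z = x*x
    s = [1.5707903, -0.6458861, 0.0794185, -0.0043227]
    return x*(s[0] + z*(s[1] + z*(s[2] + z*s[3])))

print max(abs(dieter(x / 100.) - sin(radians(x / 100.))) for x in range(9001))
# prints roughly 6.8e-07, in line with the quoted maximum
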

Dieter

