HP Forums

Before CORDIC

In the book "A Pocket Guide to Hewlett-Packard Computers" (printed by HP 7/70), a section gives some details on the 'Program Library', including how the scientific functions were calculated.
The target machines are the 2114, 2115 and 2116, all three 16-bit machines, with a library that represents floating-point values in binary form (23-bit mantissa and 7-bit exponent, plus two sign bits).

The algorithm for log, for example, is wickedly precise and doesn't use CORDIC techniques (invented by J. Volder in 1969, I believe?).

Here it is, just for fun:
x = f * 2^i
f = mantissa (f = 0, or 0.5 < f <= 1)
i = exponent (power of 2)

log(x) = log(2) * (i + z * (c1 + c2 / (c3 - z^2)) - 1/2)
z = (f - 1/sqrt(2)) / (f + 1/sqrt(2))
c1 = 1.2920070987
c2 = 2.6398577035
c3 = 1.6567626301

The floating-point representation is quite unusual (for me); for example, 12 has i = 4 and f = 0.75.

Precision is about 10^-9 over all reals.
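For anyone who wants to play with it, here is a quick sketch of the formula in Python (obviously not the original assembly); math.frexp does the f / 2^i split and math.log provides the reference value:

Code:
import math

# Constants from the HP library description quoted above.
C1 = 1.2920070987
C2 = 2.6398577035
C3 = 1.6567626301
INV_SQRT2 = 1.0 / math.sqrt(2.0)

def hp_log(x):
    """Natural log via the 2114/2115/2116 library formula quoted above."""
    if x <= 0.0:
        raise ValueError("log requires x > 0")
    # math.frexp gives f in [0.5, 1) rather than (0.5, 1]; the two conventions
    # differ only for exact powers of two, and the formula works for both.
    f, i = math.frexp(x)
    z = (f - INV_SQRT2) / (f + INV_SQRT2)
    return math.log(2.0) * (i + z * (C1 + C2 / (C3 - z * z)) - 0.5)

# Quick check against the built-in log.
for x in (12.0, 0.001, 2.718281828, 1.0e6):
    print(x, hp_log(x), hp_log(x) - math.log(x))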

Exponential uses a more complicated fraction, sqrt uses a first guess and two Newton loops, the trigonometric functions use a Chebyshev expansion, and tanh (!!) yet another fraction. Weird.
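The book doesn't spell out the square-root starting guess, but the "first guess and two Newton loops" recipe is easy to sketch; the linear seed below is my own assumption, not HP's:

Code:
import math

def newton_sqrt(x):
    """Square root via a crude first guess plus two Newton steps."""
    if x < 0.0:
        raise ValueError("sqrt requires x >= 0")
    if x == 0.0:
        return 0.0
    f, i = math.frexp(x)          # x = f * 2**i with 0.5 <= f < 1
    if i % 2:                     # force an even exponent so 2**(i//2) is exact
        f, i = f * 2.0, i - 1     # now 0.5 <= f < 2
    y = 0.47 * f + 0.5            # assumed linear first guess, not HP's
    for _ in range(2):            # two Newton loops: y <- (y + f/y) / 2
        y = 0.5 * (y + f / y)
    return math.ldexp(y, i // 2)

print(newton_sqrt(12.0), math.sqrt(12.0))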


Quote:
The algorithm for log, for example, is wickedly precise and doesn't use CORDIC techniques (invented by J. Volder in 1969, I believe?).

J. E. Volder, "The CORDIC Trigonometric Computing Technique", IRE Transactions on Electronic Computers, EC-8(3), 1959.

Some processors use Chebyshev polynomials or rational approximations (warped or otherwise) to generate transcendentals. While CORDIC is great for limited hardware, it has a fundamental serialization limit: each iteration depends on the result of the previous one, so little of the work can be parallelized.
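To make the serialization point concrete, here is a minimal CORDIC sketch (rotation mode, circular coordinates): every iteration's shift-and-add depends on the sign of the angle residual left by the previous one, so the loop is inherently sequential.

Code:
import math

N = 32
ANGLES = [math.atan(2.0 ** -k) for k in range(N)]   # table of atan(2^-k)
GAIN = 1.0
for k in range(N):
    GAIN /= math.sqrt(1.0 + 2.0 ** (-2 * k))        # pre-scaled CORDIC gain

def cordic_sin_cos(theta):
    """sin/cos for theta in [-pi/2, pi/2] using shift-and-add iterations."""
    x, y, z = GAIN, 0.0, theta
    for k in range(N):
        d = 1.0 if z >= 0.0 else -1.0               # decision depends on previous z
        x, y = x - d * y * 2.0 ** -k, y + d * x * 2.0 ** -k
        z -= d * ANGLES[k]
    return y, x                                     # (sin, cos)

print(cordic_sin_cos(0.5), (math.sin(0.5), math.cos(0.5)))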

Power series approximations are useful because they only require multiply and add operations (often streamlined through Horner's method and perhaps Estrin's method). But some functions don't fit polynomials well, and rational approximations work better - if division is not painful on your architecture.
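For illustration, here are the two evaluation schemes side by side on an arbitrary cubic (the coefficients are just placeholders): Horner needs one multiply-add per coefficient but is a serial chain, while Estrin computes independent pairs that pipelined hardware can overlap.

Code:
def horner(coeffs, x):
    """Evaluate sum(coeffs[k] * x**k) as a serial chain of multiply-adds."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def estrin_cubic(c, x):
    """Estrin's scheme for c0 + c1*x + c2*x^2 + c3*x^3: independent pairs."""
    x2 = x * x
    return (c[0] + c[1] * x) + (c[2] + c[3] * x) * x2

coeffs = [1.0, 0.5, 0.25, 0.125]     # placeholder coefficients
print(horner(coeffs, 0.3), estrin_cubic(coeffs, 0.3))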

Depending on the architecture, this log approximation you've mentioned could be fast or slow - it would help if the CPU offered hardware multiply and divide and a square-root primitive. [See J.-M. Muller's book or Israel Koren's book - square root, reciprocal square root and division are all about the same complexity.]
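A sketch of that bracketed point: given a hardware multiplier, division and reciprocal square root both reduce to a few multiply-only Newton iterations, which is why their cost ends up comparable to square root's. The seed constants below are standard textbook choices, not taken from any HP library, and both routines assume a positive argument.

Code:
import math

def recip(d, steps=4):
    """Approximate 1/d (d > 0) with y <- y*(2 - d*y); no divide needed."""
    f, i = math.frexp(d)                  # d = f * 2**i, 0.5 <= f < 1
    y = 48.0 / 17.0 - (32.0 / 17.0) * f   # classic linear seed for 1/f
    for _ in range(steps):
        y = y * (2.0 - f * y)
    return math.ldexp(y, -i)

def rsqrt(d, steps=4):
    """Approximate 1/sqrt(d) (d > 0) with y <- y*(1.5 - 0.5*d*y*y)."""
    f, i = math.frexp(d)
    if i % 2:                             # force an even exponent
        f, i = f * 2.0, i - 1             # now 0.5 <= f < 2
    y = 1.56 - 0.47 * f                   # rough linear seed for 1/sqrt(f)
    for _ in range(steps):
        y = y * (1.5 - 0.5 * f * y * y)
    return math.ldexp(y, -i // 2)

print(recip(7.0), 1.0 / 7.0)
print(rsqrt(7.0), 1.0 / math.sqrt(7.0))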

The art of using these approximations in conjunction with range reduction algorithms is to avoid error (that is, to guarantee monotonicity) across range boundaries.
If there's a slight error in the result's last bit or digit - which may otherwise be inconsequential - weird things can happen: a numerical derivative (delta F(x) / delta x, where delta x is very small) may come out with the wrong sign if the two x points straddle a range boundary of the range reduction algorithm.

Most modern FP CPUs seem to run the actual approximation for a transcendental over a VERY small range (which reduces the number of coefficients, etc.). The goal now seems to be to do the approximation over a very small, known, predictable range with well-characterized error bounds, and then to use a good range reduction algorithm and answer-rescaling algorithm to compensate.
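A hedged sketch of that pattern for exp(): reduce the argument to a small residual around zero, run a short polynomial over just that narrow range, then rescale by a power of two. The degree-5 Taylor coefficients stand in for the minimax fit a real library would use.

Code:
import math

LN2 = math.log(2.0)
# Degree-5 Taylor coefficients for exp(r); a production library would use a
# minimax/Chebyshev fit over the reduced range instead.
COEFFS = [1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24, 1.0 / 120]

def exp_reduced(x):
    """exp(x) = 2**k * exp(r) with r = x - k*ln(2), |r| <= ln(2)/2."""
    k = round(x / LN2)                 # pick k so the residual is small
    r = x - k * LN2
    poly = 0.0
    for c in reversed(COEFFS):         # Horner evaluation over the tiny range
        poly = poly * r + c
    return math.ldexp(poly, k)         # rescale the answer by 2**k

print(exp_reduced(10.0), math.exp(10.0))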

Jean-Michel Muller's "Elementary Functions: Algorithms and Implementation" is currently in print and is a good survey of modern technique. Cody & Waite have the classic "Software Manual for the Elementary Functions", but it is out of print (a great reference, with FORTRAN examples).
IIRC, Israel Koren's book mostly stays away from transcendentals but deals with floating-point math, square-root primitives, and gate-level hardware tricks - freaky things like signed-digit and residue number systems, along with radix-N SRT division (remember the Intel divide bug??).

Bill Wiese

San Jose CA

Actually the 2114/2115/2116 seem to be simple binary machines running at about 700 kHz: no FPU.

The library states floating-point division (in software) at 1.3 ms typical (that's 0.000769 Mflops!).

Thanks for the references.

I thought CORDIC wasn't so early.

Hi "GE"...

GE wrote:

Quote:
Actually the 2114/2115/2116 seem to be simple binary machines running at about 700 kHz: no FPU.

The library states floating-point division (in software) at 1.3 ms typical (that's 0.000769 Mflops!).

Thanks for the references. I thought CORDIC wasn't so early.


Wow, those were the slow old days!!

Yes, IIRC, CORDIC was used to simplify navigational computations on early special-function (i.e., not generally programmable) computers - the kind of thing that showed up in early-'60s US military hardware. ALUs were expensive and maybe even used vacuum tubes, so 1-bit serial processing was useful.

Bill

I've seen some old military fire control computers that used analog computing technology; these replaced older systems that used cams and gears. Many of these systems solved complex problems. Some early digital computers used analog computers as peripherals for getting quick, limited-precision results.