exponents in 14-nybble BCD floating-point representations


I need to come out of lurk mode and ask you all a couple of questions about the 56-bit BCD values that many HP calculators work with. I considered emailing Eric, Hugh, Luiz or Valentin but on reflection I decided that the answers might be of interest to the wider community at MoHPC. If you find the esoterica of calculator implementation as stimulating as drying paint you'd best skip to the next topic now. ;-)

First the assumption. If I've got this wrong then all is lost and someone should shame me into silence.

The 14-nybble FP encoding reserves nybbles 0-2 for the exponent.

+ve exponents : 0 <= e <= 99 are encoded directly as BCD-000 through BCD-099.

-ve exponents : -99 <= e <= -1 are encoded using a 3-digit, 10's complement form which uses BCD-901 through BCD-999.
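For concreteness, here is a small sketch of that encoding (my own illustration, not HP's microcode; it treats the 3-nybble exponent field as a plain integer rather than actual BCD nybbles):

```python
# Sketch of the 3-digit, 10's-complement exponent field described above.
# Assumption: the field is modelled as an integer 0..999 standing in for
# three BCD nybbles.

def encode_exp(e):
    """Encode a signed exponent -99 <= e <= 99 as a 3-digit 10's-complement field."""
    assert -99 <= e <= 99
    return e % 1000          # 42 -> 042, -1 -> 999, -99 -> 901

def decode_exp(field):
    """Recover the signed exponent from the 3-digit field."""
    return field - 1000 if field >= 500 else field

print(encode_exp(42))    # 42
print(encode_exp(-1))    # 999
print(encode_exp(-99))   # 901
print(decode_exp(901))   # -99
```

The decode step is what a display routine would have to do each time a number is rendered; everything in between can stay in the complement form.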

Now the questions:

1. Why did the designers not go the whole hog and fully commit to a 10's complement exponent offering values -500 <= e < 500?

2. Given that they decided to expose a signed, 2-digit exponent to the user, why complement it at all?

I've been pondering question 2 for several weeks now. My guess is that pre-complementing the negative exponents pays back its processing cost later on. That would seem to favour multiplication over division: in theory, division requires the exponent to be complemented a second time, ensuring that the cost is never recovered.

Is there a theory (or an analysis) which shows that within the target application domain(s) of the pre-1990's calculators, multiplication is performed more frequently?


PS: since I don't post too often I'll remind you that I can barely spell mathematics. ;-)

PPS: to the users of my 16C simulation, yes this does mean that the 14-nybble BCD FP implementation is moving forward. I have two quite different implementations that I'm playing with: a brute force one and another that is so "clever" that debugging it is becoming a pain. I'm leaning towards the brute force one. It's much more fun to watch while stepping through it with the debugger.


2. Given that they decided to expose a signed, 2-digit exponent to the user, why complement it at all?

As you suggested yourself, it saves CPU cycles, and it also reduces code size. The advantage of the "complement" representation over the "signed" representation is that you can perform addition and subtraction without having to code separate cases for positive and negative values. When performing a floating-point multiplication, you simply add the exponents; when performing a division, you subtract them; finally, you perform a range check, and you're done. When using "signed" representations, the logic becomes much more complicated.
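To illustrate the point, here is a minimal sketch (again assuming the 3-digit field is held as a plain integer; the real machines of course work nybble by nybble): with the complement form, the exponent arithmetic for both multiply and divide is a single add or subtract modulo 1000, with no positive/negative case split, followed by one range check.

```python
# Why the complement form needs no sign cases: exponents of products and
# quotients reduce to modular add/subtract plus a range check.
# Assumption: integer 0..999 stands in for the 3-nybble BCD field.

def mul_exp(ea, eb):
    """Exponent field of a product: add the fields mod 1000."""
    return (ea + eb) % 1000

def div_exp(ea, eb):
    """Exponent field of a quotient: subtract the fields mod 1000."""
    return (ea - eb) % 1000

def in_range(field):
    """Range check: a valid result decodes to -99..99."""
    e = field - 1000 if field >= 500 else field
    return -99 <= e <= 99

# 10^-3 * 10^5 = 10^2 : field 997 + field 005 -> field 002
print(mul_exp(997, 5))   # 2
# 10^2 / 10^-3 = 10^5 : field 002 - field 997 -> field 005
print(div_exp(2, 997))   # 5
```

With a sign-and-magnitude exponent, each of these operations would instead branch on the two sign bits and sometimes complement a magnitude, which is exactly the extra code the complement form avoids.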

The drawback of using the "complement" notation is that the conversion to "signed" has to be performed every time the number is rendered on the display, but for a programmable calculator, that's less important than the time saved on EVERY floating-point instruction.

- Thomas

Edited: 12 Apr 2006, 11:12 a.m.


Hi, Cameron; good to read your posts again (BTW, is it just my impression, or have you been 'out' for a while?)

Thank you for considering me as a possible 'consultant', and likewise for deciding to share the subject with everyone d8^D. The more people reasoning about it, the better.

BCD coding is something I take great care with when explaining to my students. I do not go deep into all the possibilities, because it is a representation 'close to humans', and the system may provide built-in support for handling the BCD representation and related operations efficiently. Otherwise, all the BCD handling routines have to be written from scratch to generate the expected results.

The following considerations are based on my own observations, and may be neither accurate nor correct if the detailed internals are not the way I envision them.

I remember finding many references to BCD coding as usually aimed at small, portable systems, but I am not sure the same applies to current devices. Based on what I know, when your system is portable and handles floating-point operations extensively, and BCD handling is embedded in the system, the final code is smaller and more efficient. I think this ties in with Thomas Okken's considerations.

The HP41's Nut processor is 56-bit based; since the HP41 is based on its predecessors, and the Voyagers in turn are based on it, taking the HP41 as part of this subject may lead us to common ground. There are some particularities about the HP41 internals that came to my attention after reading some material about MCODE. The way nibbles are interpreted and handled may also reflect how the special system registers (a, b, c, d, etc.) are organized. The 'three-nibble' based engine is particularly effective when we keep in mind that the Nut processor handles 10-bit instruction codes.

If we consider technological advancements, RISC-based processors are faster even though they demand bigger source code (more lines) than CISC-based processors. If we consider that the HP42S emulates all the basic HP41 functionality without using the same memory structure, then we must conclude that all of its system code is new, and that it handles numbers in a different way. In fact, it is a lot easier today to enhance functionality by keeping (or even reducing) the hardware 'size' while increasing the clock frequency and the number of lines in the main code.

SO… (this is about to conclude, just a bit more) I for one consider that the floating-point organization the designers chose for the 56-bit based processors was something like the best 'cost × efficiency' balance. If the system already offers some BCD functionality, let's take advantage of it. At that time, memory chips were not as inexpensive as they are today, and the time spent writing code also had to account for the time programmers spent learning what the processor the code was written for was capable of doing. Today, using a -500 to 500 exponent of ten is a matter of software design, not necessarily a restriction of the processor design. But we can go back there and point out that the HP71 used this exponent range… As you wrote:

I have two quite different implementations that I'm playing with: a brute force one and another that is so "clever" that debugging it is becoming a pain. I'm leaning towards the brute force one. It's much more fun to watch while stepping through it with the debugger.
I'd guess that the one you call 'brute force' has the bigger code and does not worry much about particular resources, meaning you simply wrote everything you needed. The so-called 'clever' one makes me think you decided to 'shrink' the code and use more of the inner resources, right? As a result, you consider the second one a complete pain to debug. Well, at least you have the option of going ahead with the larger memory footprint. As Jacques Laporte mentioned in his post about the HP35:
Reading your words "remarkable mind job", I think of the man who debugged this code, under maximum pressure, in 1972.

Consider that the code is crammed into 768 words: no room left in these 3 ROMs. Only one "no operation" at fixed address 00045 in ROM 0 (there is no key code "45"); you can't move it, you can't use it.

It was a kind of constant-sum game. For 2 instructions added somewhere (and that was the case with the exp(ln(2.02)) problem), 2 other instructions had to be removed, and in the same ROM!

You wrote that you do not own an HP 35; in fact the algorithms evolved, of course (Classic, Woodstock, Spice…), mainly on the precision issue. But the approach in the transcendental functions remained the same. Here, the name of Dave Cochran must be cited. He is the man who implemented CORDIC in the 9100 and 35 calculators, based on J. E. Meggitt's paper, and made it possible.

I'd seriously consider that many of the decisions to use this or that approach to BCD handling were local, though. Laporte's analyses, amongst other good ones, are very good references to consider in these cases.

To close the post, forgive me for not answering your questions… Instead, I added near-philosophy and a bit of history.


Luiz (Brazil)

Edited: 12 Apr 2006, 1:23 p.m.


Hi Cameron, guys;

forgive me for adding a missing paragraph to an already not-so-short post. I noticed just now, at home, that I had typed it in first but forgot to add it to the complete text. Please include the following between the third and fourth paragraphs:

About the negative sign for exponents of ten: we notice that in the Voyager series, the leftmost nibble of the mantissa uses the same pattern: '1001' (BCD code for 9) if the mantissa is negative, '0000' if positive. I used to regard the third nibble (the exponent sign) the same way, but now you describe it as a three-digit BCD number. You see, I have always used the complementary representation with binary integers. With this representation, do you suggest treating it as a complementary, BCD-coded exponent? If so, I confess I missed that... d8^(((



Edited: 12 Apr 2006, 11:54 p.m.
