Hi,

Calculators have been using Binary Coded Decimal (BCD) for two main reasons:

- precision: it avoids the frequent rounding issues of natural binary coding
- no conversion needed for user display

There's a memory-representation drawback, as 4-bit nibbles waste a few bits (only 10 of the 16 possible values are used).

Now, if calculators used binary-coded duodecimal (base 12), wouldn't we benefit from:

- better precision, since there are fewer rounding cases than in BCD
- even better precision, because for the same fixed number of nibbles, more bits are leveraged (12 of 16 values per nibble instead of 10).

...all this at the cost of the conversion required for user input & display.

I assume this has obviously been looked at in calculator history, and yet the retained solution is BCD.

Is display conversion such an issue that it outweighs all precision (and memory) benefits?

Thanks for any thoughts.

Using BCD doesn't increase precision, and it doesn't reduce the number of rounding operations.

BCD does, however, allow purely decimal fractions to be represented exactly. Neither binary nor duodecimal can represent 0.1 exactly. This is kind of essential when dealing with money: you really want one cent to be exactly that, not something reasonably close to one cent.
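For illustration, Python's decimal module can display the exact value of the binary double nearest to 0.1 (a quick sketch, nothing calculator-specific):

```python
from decimal import Decimal

# Decimal(float) expands the exact value of the nearest binary double,
# showing that 0.1 has no exact binary representation:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# The rounding errors don't cancel, either:
print(0.1 + 0.2 == 0.3)  # False
```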

It also avoids problems with numbers appearing to have more digits than they really do. Take 1 + 2^{-20}, which is exactly representable in almost any binary floating point format. It is also exactly representable in a sufficiently wide decimal format: 1.00000095367431640625. However, if ten digits are displayed, it will look like 1.000000954. Subtract one and extra digits magically appear: 9.536743164 x 10^{-7}. Subtract the leading few digits again and more digits appear, seemingly out of nowhere. Of course, there is no way this many digits will actually be carried properly.
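Pauli's example can be reproduced with ordinary binary doubles and ten-significant-digit formatting (a Python sketch; the `.10g` format stands in for a ten-digit display):

```python
# 1 + 2^-20 is exact in binary doubles; shown to ten significant
# digits the trailing digits vanish, only to "reappear" after
# subtracting 1:
x = 1 + 2**-20
print(f"{x:.10g}")      # 1.000000954
print(f"{x - 1:.10g}")  # 9.536743164e-07
```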

- Pauli

Quote:

BCD does, however, allow purely decimal fractions to be represented exactly. Neither binary nor duodecimal can represent 0.1 exactly. This is kind of essential when dealing with money: you really want one cent to be exactly that, not something reasonably close to one cent.

What we do in non-calculator (embedded systems) scaled-integer hex, though, is to represent a cent (or a mill) by 1; a dollar is then 100 (64H) or 1000 (3E8H), and amounts are absolutely exact.
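A minimal sketch of this scaled-integer idea (the variable names and the quantity of 3 are made up for illustration):

```python
# Scaled-integer money: hold amounts as integer cents, so one cent
# is exactly 1 and a dollar is exactly 100 (64H). All arithmetic is
# plain integer math -- no binary fractions, no rounding error.
price_cents = 1999             # $19.99
total_cents = 3 * price_cents  # 5997 cents, exact
dollars, cents = divmod(total_cents, 100)
print(f"${dollars}.{cents:02d}")  # $59.97
```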

*Edited: 18 Feb 2013, 7:36 p.m. *

Quote:

What we do in non-calculator (embedded systems) scaled-integer hex, though, is to represent a cent (or a mill) by 1; a dollar is then 100 (64H) or 1000 (3E8H), and amounts are absolutely exact.

Nice and easy trick. I learn something new here every day.

d:-)

*Edited: 19 Feb 2013, 2:34 a.m. *

Thanks Paul

Indeed, decimal is good at exact representations of 1/2, 1/5 & their compounds (hence limiting rounding issues with those, including 1/10), whereas duodecimal is good for 1/2, 1/3 & their compounds.

Those exact representations are convenient for computing exact values, at least in intermediate results.

Agreed, exact representation of 1/10 is extremely important for many day-to-day operations (probably because we have 10 fingers).

Dividing by 3 is too (and by 12, BTW, in financial cases): one could argue that division by 3 happens more often than by 5 in computing.

Just for the sake of intellectual curiosity, would a base-30 computing system (covering 2, 3 and 5) provide any benefit?

(yes 5-bit nibbles would be a bad idea...)
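For what it's worth, which unit fractions terminate in a given base can be checked mechanically: a reduced fraction terminates iff every prime factor of its denominator divides the base. A small Python sketch (the function name is mine):

```python
from fractions import Fraction
from math import gcd

def terminates(frac: Fraction, base: int) -> bool:
    """True iff `frac` has a terminating expansion in `base`.
    Repeatedly strip from the denominator any factor it shares
    with the base; what's left must be 1."""
    d = frac.denominator
    while (g := gcd(d, base)) > 1:
        d //= g
    return d == 1

for b in (10, 12, 30):
    ok = [n for n in range(2, 11) if terminates(Fraction(1, n), b)]
    print(f"base {b}: 1/n terminates for n in {ok}")
```

So base 10 covers 1/2, 1/4, 1/5, 1/8, 1/10; base 12 trades 1/5 and 1/10 for 1/3, 1/6, 1/9; base 30 covers all of them except 1/7.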

*Edited: 20 Feb 2013, 3:55 a.m. *

Base 30 or base 60 would be an improvement in some ways. However, historically we don't expect exact results when dividing by three.

Still, you need to weigh the base against the number of bits required per digit. I know Hugh's decimal library uses base 10,000. The WP-34S uses base 1,000. BCD is simply inefficient. Base 30 and base 60 seem fairly efficient in this regard.
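To put rough numbers on that efficiency argument, here is a quick sketch comparing how much of each packed digit's storage actually carries information, assuming each digit is stored in the fewest whole bits:

```python
from math import ceil, log2

# log2(base) bits of information per digit, packed into ceil(log2(base))
# physical bits; the ratio is the storage efficiency (1.0 = perfect).
for base in (10, 30, 60, 1000, 10000):
    bits = ceil(log2(base))
    print(f"base {base:>5}: {bits:>2} bits/digit, "
          f"efficiency {log2(base) / bits:.3f}")
```

BCD (base 10 in 4 bits) comes out around 0.83, base 30 and 60 around 0.98, and base 1,000 in 10 bits very close to 1.0 -- which matches Pauli's ranking.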

Would base 30 or 60 introduce other issues? Almost certainly -- numerical analysis has concentrated on bases 2 and 10 (and to some extent 16).

- Pauli

Thanks again for your patient explanations

Quote:

Hugh's decimal library uses base 10,000. The WP-34S uses base 1,000. BCD is simply inefficient. Base 30 and base 60 seem fairly efficient in this regard.

I did not realize such "big" bases were used by calculators: is bit-versus-digit efficiency the sole motivation, or are there other advantages to such bases? (By that sole criterion, 10,000 might seem less bit-efficient than 1,000.)

By this efficiency criterion, base 210 would not be bad either (it includes the prime 7) while keeping 8-bit alignment. :p

Quote:

Would base 30 or 60 introduce other issues? Almost certainly -- numerical analysis has concentrated on bases 2 and 10 (and to some extent 16).

Seems like a lot of fun for some R&D projects then, should there turn out to be some potential benefit in precision!

*Edited: 20 Feb 2013, 10:50 a.m. *

Bit efficiency is part of the motivation; there is also a performance gain from using larger bases. With base 1,000, each add handles three digits at once; with base 10,000, four. Multiplications and divisions are also faster.

Of course this assumes a modern CPU.
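A toy sketch of such packed-digit addition, assuming numbers held as little-endian lists of base-1,000 "digits" (an illustration only, not the WP-34S implementation):

```python
def add_base1000(a, b):
    """Add two non-negative numbers stored little-endian as lists of
    base-1000 "digits" (each element 0..999); every loop iteration
    processes three decimal digits at once."""
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = carry
        if i < len(a): s += a[i]
        if i < len(b): s += b[i]
        carry, digit = divmod(s, 1000)
        out.append(digit)
    if carry:
        out.append(carry)
    return out

# 999,999 + 1 = 1,000,000 in two loop iterations plus a final carry:
print(add_base1000([999, 999], [1]))  # [0, 0, 1]
```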

- Pauli