Hi,
Calculators have been using Binary Coded Decimal (BCD) for two main reasons:
- precision: it avoids the frequent rounding issues of plain binary encoding (0.1, for example, has no finite binary expansion)
- display: no radix conversion is needed to show the digits to the user (see the short sketch after this list)
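
To be sure I'm stating the BCD case correctly, here's a minimal Python sketch of both points. Python's decimal module only stands in for BCD semantics here, and the packed-nibble example is my own illustration, not any particular calculator's firmware:

```python
from decimal import Decimal

# Point 1: 0.1 has no finite binary expansion, so binary floats must round it.
print(0.1 + 0.2)                        # 0.30000000000000004
print(Decimal("0.1") + Decimal("0.2"))  # 0.3 exactly, digit arithmetic like BCD

# Point 2: in BCD each nibble already *is* a decimal digit.
bcd = 0x42                          # two packed nibbles encoding the digits 4, 2
print((bcd >> 4) & 0xF, bcd & 0xF)  # 4 2 -- display needs no radix conversion
```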
Now, if a calculator instead used binary-coded duodecimal (base 12), wouldn't we benefit from:
- better precision, since fewer fractions need rounding than in BCD (base 12 factors as 2 × 2 × 3, so thirds and sixths terminate, though fifths no longer do)
- even better precision, because for the same fixed number of nibbles more bit patterns are put to work: 12 of the 16 codes per nibble instead of 10 (see the sketch after this list)
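
To quantify both claims, here's a quick sketch (the counting is just 10^n versus 12^n; the expansion helper is my own):

```python
NIBBLES = 8

# Claim 2: more of each nibble's 16 codes carry information (12 vs 10),
# so the same 8 nibbles span ~4.3x as many distinct values.
print(10**NIBBLES)   # 100000000  -- values representable in 8 BCD nibbles
print(12**NIBBLES)   # 429981696  -- values in 8 base-12 nibbles

# Claim 1: some fractions terminate in base 12 that repeat in base 10.
def expand(num, den, base, places=8):
    """Fixed-point digit expansion of num/den in the given base."""
    digits = []
    for _ in range(places):
        num *= base
        digits.append(num // den)
        num %= den
    return digits

print(expand(1, 3, 10))  # [3, 3, 3, 3, 3, 3, 3, 3] -- 1/3 repeats in decimal
print(expand(1, 3, 12))  # [4, 0, 0, 0, 0, 0, 0, 0] -- 1/3 = 0.4 exactly in base 12
```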
I assume this was considered at some point in calculator history, and yet the solution that prevailed is BCD.
Is display conversion really such a burden that it outweighs all the precision (and memory) benefits?
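
For concreteness, here's my understanding of the display-side asymmetry, sketched in Python (function names are mine; real firmware would do something like this in fixed-point assembly):

```python
def bcd_to_display(nibbles):
    """BCD: each nibble already is a decimal digit -- a straight lookup."""
    return "".join(str(d) for d in nibbles)

def base12_to_display(digits12):
    """Base 12: re-expand the whole value in base 10 first, i.e. a
    multiply-accumulate pass followed by repeated division by 10."""
    value = 0
    for d in digits12:
        value = value * 12 + d
    out = []
    while True:
        value, rem = divmod(value, 10)
        out.append(str(rem))
        if value == 0:
            break
    return "".join(reversed(out))

print(bcd_to_display([4, 2]))     # "42" -- zero arithmetic
print(base12_to_display([3, 6]))  # "42" -- 3 * 12 + 6, then a divmod loop
```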
Thanks for any thoughts.