I have a question about "Word Size" and Memory



#38

I am simply not well versed in the hardware aspects of computing.

But occasionally, I see limitations on functionality which seem like they must be hardware dependent. Examples:


1. 33s having a 255 character limit for an equation;
2. 12c platinum having problems with goto if the line number exceeds 254? (255? 256?)

Since 2^8 = 256, does this mean that such limitations are hardware dependent?

3. Switching "modes" in Casios, or the 30S, causing all memory to be lost.

How do I educate myself about this stuff efficiently? And about "words", "nibbles", etc.?

Regards,

Bill


#39

It isn't really a hardware dependency. It has more to do with the design of the software. For example, most computers use the 8-bit byte, or octet, as the base unit of storage.

Let's say that we want to design a data structure for storing text strings. One popular way of doing it is to allocate N+1 bytes to store an N-byte text string. The first byte is used to store the number of bytes used by the string. The rest of the bytes contain the text. By allocating one byte for the length, we have limited the size of text strings to 0..255 bytes.

If we needed to support longer text strings, we could use more memory to record the length. Two bytes for the length would support text strings of 0..65535 bytes. Four bytes for the length would support text strings of 0..(2^32 - 1) bytes.

We have lots of choices for our data structure. Which one is the best? That depends on our goals. Using a single byte for the length is efficient in space and in the time it takes to manipulate strings, at the expense of limiting the maximum string length. Using four bytes for the length allows for nearly unlimited string size, but it increases the memory overhead for each string and it takes additional time to manipulate strings. For a pocket calculator with limited memory and a slow CPU, one byte is probably the best choice. For a desktop computer, four bytes would be better. Each choice involves a complicated set of tradeoffs.
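To make the N+1-byte layout concrete, here is a quick Python sketch (the helper names are mine, for illustration; real calculator firmware would of course do this in assembly):

```python
def pack_string(text: bytes) -> bytes:
    """Pack text into the N+1-byte layout described above:
    one length byte followed by up to 255 bytes of text."""
    if len(text) > 255:                 # a single length byte covers 0..255
        raise ValueError("too long for a 1-byte length field")
    return bytes([len(text)]) + text

def unpack_string(buf: bytes) -> bytes:
    """Recover the text from a length-prefixed buffer."""
    n = buf[0]                          # first byte holds the length
    return buf[1:1 + n]
```

Widening the length field to two bytes would raise the limit to 65535, exactly the tradeoff described above.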


#40

I'll add that in the RPL models (28, 48, and 49 series), the
smallest addressable unit of memory is a nibble (4 bits or 1/2
byte) instead of a byte.

Memory addresses on these models are always 5 nibbles (20 bits), so they can
access up to 2^20 (1048576) nibbles (524288 bytes or 1/2 Megabyte) without bank switching.
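The arithmetic behind that limit can be sketched as (function name is mine):

```python
def addressable_bytes(address_bits: int, bits_per_unit: int) -> int:
    """Bytes reachable with a given address width, where each address
    names one unit of bits_per_unit bits (4 for a nibble, 8 for a byte)."""
    return (2 ** address_bits) * bits_per_unit // 8
```

With 20-bit addresses and nibble addressing this gives 524288 bytes (1/2 megabyte), versus a full megabyte if each address named a byte.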

#41

Hi Bill, guys;

Bill wrote:

Quote:
I am simply not well versed in the hardware aspects of computing.

About ten to fifteen years ago, I'd have said the converse about myself; nowadays, I am not so sure I can say that... But I think I can add a few words to John's and James' elucidating posts.

Quote:
(...) 1. 33s having a 255 character limit for an equation; 2. 12c platinum having problems with goto if the line number exceeds 254? (255? 256?)
Since 2^8 = 256, does this mean that such limitations are hardware dependent?

I'd add that more than 256 possible instructions exist in the HP12Cp programmable repertoire, so some two-byte codes were added to the existing set. John's words corroborate this reasoning of mine, and I'd go a bit further: the information shown when [g][MEM] is pressed is how many program lines are used by programs, and we do not know if there are two-byte instructions in any of them. Based on this, if a 399-step program has all steps filled with these 'hypothetical' two-byte instructions, it would need 798 bytes instead of 399. Something to think about...

Quote:
3. Switching "Modes" in casios, or the 30s, causing all memory to be lost.

I found myself confused by a particular memory count in the HP9S that I'd like to share with you, contributors, to hear your thoughts about it. When in STAT mode (the HP9S allows one-variable statistics only), the HP9S can hold up to 80 different entries with up to 255 occurrences of each, provided that the final count does not exceed 20400 entries. Well, what amuses me is the fact that the individual entries are editable! If you enter 80 different values and want to change each of them, you can do that. O.K., there is a limit of 80 full-precision values, but I'd ask how many times any of you have analyzed a sample with so many items. Entering each data point is tiresome, I know, but having each of them available to edit later is amazing.

All RPL-based calculators (plus some top-line financial models) use list contents as statistical data. The HP42S has 'pseudo' list handling, but it actually allows an [n×1] or [n×2] matrix to be the argument for [SIGMA+]. This way, editing the matrix and entering it again is faster than editing the summation data with [SIGMA-] and [SIGMA+].

Back to the HP9S: it has memory "space" enough to hold up to 80 statistics entries, but it has only five "memories" when it is in normal mode. Yep: the extreme case is when the HP9S is set to Complex mode (weird, somehow weird) and you already have an intermediate complex number stored in [a] and [b] and want to add/subtract/divide/multiply it with a new complex: it needs to hold the real and imaginary parts of both complex numbers until [ENTER] is pressed, and that needs four registers. The fifth one is [M], which can hold an extra value. Well: where are the other 75 registers that MUST exist to hold statistical data when the calculator is not in statistical mode? As opposed to the HP30S, the HP9S has no history stack to hold previous operations. Any clues? I guess I have one. I got used to "seeing" all "user memory" in RPN calculators since I started using an HP41.

Until I began dealing with digital electronics, while I was only a calculator user, I was not 'completely' aware of the fact that algebraic calculators somehow need extra memory to hold a stack of numbers, operators and pending parentheses. This extra memory cannot be used unless you disable the algebraic pending operations and free it. Well, when the HP9S is in STAT mode, parentheses do not work the same way as they do in normal mode...

Quote:
How do I educate my self about this stuff efficiently?

I think each new non-RPN calculator, from HP or any other brand, has its own operating characteristics, and in some cases they must be mastered one at a time. Some wise, thoughtful and very respectable contributors here sometimes point out RPN deficiencies that actually exist, but I particularly take RPN as the most "transparent" interface where user memory is the issue. Neither RPL nor algebraic interfaces allow this "virtual viewing" of memory.

RPN calculators allow the user to handle up to four numbers in a four-register stack structure plus a fifth "error-recovering" register; the remaining memory is divided into registers to hold numbers or, in some cases, alternative data (ALPHA strings, matrix descriptors, program steps, etc.). Their operation is unambiguous and the stack contents can be predicted for a given keystroke sequence: one does not need to "press the keys" to "see what happens".

On the other hand, after keying in some numbers, pressing operation keys and opening a few parentheses, I may have no idea how much memory an algebraic calculator has already spent to hold the pending operations, but I know that I cannot use that same memory for other purposes, even if I have no need for pending operations. I guess this is the main reason for the "memory loss" you mentioned before. STAT data and pending operations are not compatible, so this "shared memory" must be cleared when modes are changed.

Quote:
And about "words" "nibbles" etc?

The HP32S was the first calculator I saw that counted X.5 bytes for programs. This .5 byte is actually a group of four bits, named a "nibble". Whenever half of a byte is used, standing alone or as part of a group of bits associated with some sort of data, naming these four bits a "nibble" is faster and easier. At least I guess so.

The term "word", as related to digital information and memory, has a few different "interpretations" and applications. I have already seen "word", "double word" and the like referring to 16-bit and 32-bit data respectively. I have also seen "word" used as a reference to the standard unit of data a system (or processor, or memory) can handle (or process, or store) at one time, being the "system data unit". This last definition does not state how many bits a word has; it depends on the system (processor, memory) structure. I explain this every time I am asked about it, and every time I spontaneously mention the fact. I still don't know which is correct, if either.

I guess I wrote too much.

Cheers.

Luiz (Brazil)


Edited: 28 July 2004, 2:37 a.m.


#42

To quote someone else: "the wonderful thing about standards is that there are so many to choose from." (that's an IT joke; apologies to the engineer community).

About the only three terms that we can call unambiguous have been mentioned by John, James and Luiz: octet, byte and nibble (or nybble). As has been said, the first two describe a logical collection of 8 bits and the third is half that size. I say *logical collection* because none of these terms tells us anything about the way the bits are generated, moved, manipulated or stored by a piece of hardware.

The term WORD usually means the logical collection of bits that a given (processor) architecture can sling around most efficiently. You can think of it as an architecture's data atom. You can inspect a word's sub-structure and you can "weakly bond" more than one of them together but they remain the fundamental unit that an architecture is designed to process.

Double-words and quad-words, as their prefixes imply, are simply collections of bits that are comprised of 2 and 4 atoms (words). Some architectures are able to manipulate these multi-word quantities more efficiently than arbitrary collections of words which is why there are specific terms that apply to them.
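These units nest cleanly: a word holds some whole number of nibbles, a double-word twice as many. A toy illustration in Python (function name is mine):

```python
def split_nibbles(value: int, width_bits: int) -> list[int]:
    """Split an unsigned value into its nibbles, least significant first."""
    assert width_bits % 4 == 0, "width must be a whole number of nibbles"
    return [(value >> (4 * i)) & 0xF for i in range(width_bits // 4)]
```

A 16-bit word yields four nibbles, a 32-bit double-word eight; the same routine serves both, which is part of why the smaller units remain useful vocabulary.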

Higher up the descriptive food chain we have much more ambiguous terminology. Terms like int, short and long spring to mind (only because I often program in C++). The size (in bits) of these is only defined for a particular platform and can (as Microsoft and Intel have shown us) change across incarnations of the (allegedly) same platform.

I often refer people to the Free On-Line Dictionary of Computing as both a reference and a fascinating interconnected web of computing terminology.

Cameron

PS: Luiz you never type too much. ;-)


#43

Quote:
About the only three terms that we can call unambiguous have been mentioned by John, James and Luiz: octet, byte and nibble (or nybble).

The first time I saw the word 'octet' used was in the internet RFC docs. I assume that term was used to differentiate an 8-bit value from a 'byte', which could be a 7-, 8-, or 9-bit value.

I know some mainframes have a 9-bit byte. I don't know if any computer used a 7-bit byte, but in an 8-bit system with one bit used for parity, you only have 7 bits left for data.

So... byte is not such an unambiguous term. However, whenever I use it, I mean 8 bits.

Steve.


#44

Some older computers used 6-bit bytes. I've seen computers where the smallest addressable chunk of memory is 32 bits or 64 bits. If Unicode ever becomes universal, we may see computers with larger byte sizes.


#45

I guess we're left with just the unambiguous 'bit', then.

Steve.


#46

Quote:
I guess we're left with just the unambiguous 'bit', then.

"Octet" also seems unambiguous to me.

I had overlooked that bytes are sometimes not octets, but that does indeed seem to be the case.

So is a nibble (or nybble) always 4 bits, or is it 1/2 byte? Maybe I should say "quartet"?

Regards,
James

Edited: 28 July 2004, 4:34 p.m.


#47

Quote:
"Octet" also seems unambiguous to me.

Yes, that was a failed attempt at humour.

I was suggesting that only the very simplest things can be defined with absolute accuracy.

If I ever invent a technical term, I hope it's at least as amusing as 'nibble'.

Steve.

#48

Well, we may enter the almost endless discussion about the difference between a "binit" (binary digit), which is something like a memory cell, disk spot, etc., that has two possible states (1 and 0); and the real, purest definition of "bit" as a quantity of information: the amount of information needed to reduce the uncertainty of an information source by half. This last definition comes from Information Theory.

For instance, a circuit, a magnetic spot, or an optical mark with two possible states will have the "potential" to carry up to one bit of information; but if that information is already known, and hence redundant, our "binit" will find itself carrying less than one "bit" of information.

The usual metaphor is that a "binit" is like a bottle which can contain up to one "bit" of information (content); but not every bottle carries its maximum load all the time...


#49

I've never heard that about the binit and the bit, and I've been at this game for over 20 years. We were taught in school in the early 80's that "bit" was short for "binary digit," a definition which I find adequate.


#50

I accept that I went a little off with this matter because, all the time, all of us (me included) use the "popular" definition of "bit".

I was an electronics engineer student in 1980, when I attended a course on communications systems and information theory. I think (from memory) we used a classical book by Norman Abramson, which covered these subjects; and also presented the main ideas from the Nobel laureate Claude Shannon (father of Information Theory, the scientific approach to information).

We had very heated discussions with the professor, because the "binit" was known by none of the students. But, in the end, he convinced us about the difference between the "binary digit" and the "amount of information" it may bear. I think that, to avoid the confusing statement "Not always a bit is worth a bit", they switched to "Not always a binit is worth a bit".

There were enlightening discussions about the "entropy" of an information source, redundancy, coding efficiency; the differences between "bit per second" and "baud rate", error correcting codes, etc.

Most of those theoretical issues of the '50s were subjects of scientific research between the '60s and the '70s; became critical features of consumer products in the '80s and '90s (fax, CD players, hard disk drives, DVD, dial-up modems); and are now "taken for granted", passing almost unnoticed by millions of users of PCs, communication devices, and the Internet.


#51

I've never heard of a "binit" before, but I suppose it depends a lot on how the potentially available information is being used. For example, a nibble has 16 possible values, but when used as a BCD digit, only 10 of them are meaningful, and when used as a sign, only 2 of them are meaningful.

I wonder how a processor being used in decimal mode would respond to invalid nibbles.

Regards,
James

Edited: 30 July 2004, 7:03 p.m.


#52

Just to benefit from your well chosen examples:

A "physical" nibble (4 "bits" or binits) will contain:

- 4 bits of information when used to keep a hexadecimal digit.

- 3.322 bits of information when used to keep a BCD digit (isn't it reassuring to see that 3 bits would not be enough for this?). This is like having four bottles, where three are full but the last has been filled to only a third of its capacity.

- 1 bit of information, when used to keep just a two-valued sign; no matter how much circuitry is used for that matter.

The bit scale is logarithmic: in the case of the BCD digit, the information content is calculated as the base-2 logarithm of 10 (the count of permitted values). The information content of the "sign" is just the base-2 logarithm of 2, which is 1 by definition.
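The calculation above can be written out in one line of Python (function name is mine):

```python
import math

def info_bits(num_values: int) -> float:
    """Information content, in bits, of one symbol drawn from
    num_values equally likely alternatives: log2(num_values)."""
    return math.log2(num_values)
```

A hexadecimal digit carries log2(16) = 4 bits, a BCD digit log2(10) ≈ 3.322 bits, and a two-valued sign log2(2) = 1 bit, matching the bottle counts above.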

#53

For the Saturn processor, a "W" word field is used to refer to an
entire 64-bit (16-nibble) "working" register or "scratch"
register.

These 16-nibble registers are sufficient for holding "real"
numbers in decimal mode, where each nibble represents a BCD digit;
nibbles 0-2 for the "X" (signed) exponent field, nibbles 3-14 for
the "M" mantissa field, and nibble 15 for the (0 for positive, 9
for negative) "S" sign field. Interestingly (to me), an entire
nibble is used for the sign, although a single bit would be
sufficient. But note that the "long real" numbers used internally
need 15 nibbles for the mantissa, 1 nibble for the sign, and 5
nibbles for the signed exponent, so can't fit into a single
register.

The Saturn processor has several 20-bit registers, and the 64-bit
registers each have a 20-bit "A" ("address") field, nibbles 0-4 of
these registers. The SysRPL terminology for a 20-bit unsigned
integer is "internal binary integer", or "bint" (not the same as a
UserRPL binary integer, which is a "hex string" or "hexs" in
SysRPL terminology). Note that bints are used in SysRPL for many
purposes besides memory addresses; in general where they're always
sufficient to describe a quantity. For example, a bint is always
sufficient to specify the length (in nibbles) of any object that
can possibly fit into memory, any address, or any length (again,
in nibbles) of memory.

Other Saturn registers are 16-bit, 12-bit, 4-bit, and 1-bit.

Note that the Saturn processor is based on a 4-bit bus, so 4-bit
quantities ("nibbles") are particularly important. A nibble is the
smallest addressable unit of memory, and any memory operation
always involves a whole number of nibbles; you can't read or write
partial nibbles. Also note that a nibble represents one
hexadecimal digit, or, in decimal mode, one BCD digit.
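As a rough sketch (in Python, obviously not what the firmware uses) of how the 16-nibble real layout above could be decoded. The ten's-complement exponent handling is my reading of the format, not a verified detail:

```python
def decode_real(n: list[int]) -> float:
    """Decode a 16-nibble 'W' register holding a real number, per the
    layout above: nibble 15 = sign (0 positive, 9 negative),
    nibbles 3..14 = 12 BCD mantissa digits (nibble 14 most significant,
    implied decimal point after it), nibbles 0..2 = exponent, here
    assumed to be in ten's complement (500..999 meaning negative)."""
    sign = -1.0 if n[15] == 9 else 1.0
    mantissa = 0
    for i in range(14, 2, -1):              # most significant digit first
        mantissa = mantissa * 10 + n[i]
    exp = n[2] * 100 + n[1] * 10 + n[0]
    if exp >= 500:                          # assumed ten's complement
        exp -= 1000
    # 12 mantissa digits with the decimal point after the first,
    # so scale by 10^(exp - 11)
    return sign * mantissa * 10.0 ** (exp - 11)
```

For example, the register with mantissa digits 1,0,0,...,0, a zero exponent and a zero sign nibble decodes to 1.0.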

Regards,
James

Edited: 28 July 2004, 11:03 p.m.

#54

I think the word "byte" originated with IBM for their 360/370 series computers (mid- to late-1960s). At this time, as others have already noted, a computer "word" could be almost any size, depending on what brand of computer you were talking about.

An 8-bit byte was the right size for one character as far as IBM was concerned, since it allowed the 256 character codes of IBM's own character set, "EBCDIC" (Extended Binary Coded Decimal Interchange Code).

I suspect, but don't know for sure, that "byte" was a play on words: it was clearly more than a bit, and you bit off a "byte" to chew (process). Hence, the logical successor: the nibble ( = half a byte/bite ).

Thus, originally at least, byte meant exactly 8 bits, and a nibble was 4 bits. If you need a parity bit for error detection, then adding one to each byte makes it 9 bits in size. This was certainly done when the IBM 360/370 computers wrote tapes - hence the "9-track" tape drive, with 8-bit bytes plus a parity bit laid down across the tape. Previous tape drives were 7-track (and I have no idea how they formatted the bits!). This made exchanging tapes between different computer systems somewhat of a challenge in the late 60s and early 70s!
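The per-byte parity scheme can be sketched like this (even parity assumed for illustration, and the helper names are mine; some tape formats actually used odd parity):

```python
def add_parity(ch: int) -> int:
    """Set the 8th bit of a 7-bit character so the byte has an even
    number of 1 bits (even parity)."""
    assert 0 <= ch < 128
    parity = bin(ch).count("1") % 2     # 1 when ch has an odd bit count
    return ch | (parity << 7)

def parity_ok(byte: int) -> bool:
    """True when the byte still has even parity, i.e. no single-bit
    error has been detected."""
    return bin(byte).count("1") % 2 == 0
```

Flip any single bit of a stored byte and the check fails, which is all one parity bit can do: detect, not correct.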


#55

I've used old UNIVAC systems that had 7-track tape drives. They used a 6-bit (FIELDATA) character code that predated ASCII and EBCDIC. Seven tracks was a good match for a character plus a parity bit.

CDC and DEC also used 6-bit character codes on some of their older systems. Some systems, like the DEC PDP-10, a 36-bit system, supported more than one size of character. You could use the character set that was the best fit for your application.


#56

John posted,

Quote:

I've used old UNIVAC systems that had 7-track tape drives. They used a 6-bit (FIELDATA) character code that predated ASCII and EBCDIC. Seven tracks was a good match for a character plus a parity bit.


I can relate to that... In the mid-eighties, I maintained software written in Fortran '66 and Fortran '77 for Sperry and Sperry*Univac mainframes.

The Fortran '77 software (which Sperry called "ASCII Fortran") utilized the ASCII character set. The Fortran '66 software ("Fortran IV") utilized FIELDATA. IBM mainframes of the era utilized EBCDIC.

-- Karl S.


#57

You can't forget Control Data and their 6-bit characters (packed 10 per word). They used an escape character (76 octal) to handle lower case.

#58

Finally, a subject I know too much about!

EBCDIC: Extended Binary Coded Decimal Interchange Code

On a related note, one of my favorite bits of computer humor is "EBCDORK: Extended, Binarily Coded, Decimally Organized, Rbitrary Kludge". IIRC, I read it in Ted Nelson's "Computer Lib/Dream Machines" book.


#59

I was just wondering, "Where's Paul?", and was about to send you a note to ask if you're OK.

Glad to see you posting again. Don't be a stranger.

Matt

ps. Somebody here was asking how to take apart a 49G+, and I thought of the photos you took of your 49G+ "under the knife".


#60

Thanks for the kind words.

Dave has graciously taken on hosting those images, and they're available via the MoHPC Articles Forum, in Article 408. (In case no one mentioned it at the time.)

#61

Generally, limits on the number of characters allowed (e.g. 255) came from the way the operating system treats character arrays and how it handles the pointers for such arrays. The original 255-character limit came from the fact that programmers using the early microprocessors were faced with size limits. It was less wasteful, and easier to process the string pointer, if it would fit into one memory word - and since most common processors were using 8-bit words, this meant 255 characters max (2^8 = 256, which equals 255 characters + 1 index byte).

In fact in some earlier (assembly or higher level) code you only got 253 characters: 1 index byte, 253 characters, and 2 null (0) bytes to indicate "end of string".

In calculators, often using serial memory bus structure, things were more efficient if you could keep pointer structures small--moving 8 bits was a lot faster than moving 16 or more.

In point of fact the 33s "shouldn't" have such a limitation, as the ARM CPU can handle much more memory than the original 33 and, I believe, has a larger word size. But since it appears that the "33S" is really code running on an emulator running in ARM microcode, it simply carries over the old limitations.


#62

Hi David,

That is a good explanation for the 255 character business.

However, I do believe that the 33s does *not* use an ARM CPU - rather, it is some other, older CPU. There is a thread in the archive, and in comp.sys.hp48, about this.


Regards,

Bill


#63

I was just looking for that information yesterday. It uses a SunPlus SPLB31A. The manufacturer's web site wouldn't let me download the spec sheet, but it did say that it was an 8-bit CPU with 256KB ROM and 4288 bytes of SRAM. It also has an integrated LCD controller.


#64

CPU is shown as an "8502" at:

http://www.hp.com/calculators/scientific/33s/specs.html


#65

That's interesting. The 8502 is listed as being an enhanced version of the extremely popular MOS Technology 6502, used in the Apple II and many Commodore computers. It's also listed as the CPU in the HP-30S, a low-end algebraic scientific calculator.

It's amazing how long some of these CPUs have been around. I can remember lusting after a 6502-based single board computer (KIM-1) when I was a teenager. It had a hex keypad and six 7-segment LEDs. It was a real computer, minus case and power supply, for less than $300.


#66

Quote:
It's amazing how long some of these CPUs have been around. I can remember lusting after a 6502 based single board computer (KIM-1) when I was a teenager.

What amazes me is that HP dropped further use of their excellent Saturn CPU (for this kind of application, like calculators and BCD arithmetic) in favor of a CPU that is basically a 6502. It seems like the Saturn is too old for their new calculators but, as far as I know, the 6502 is much older, and all this doesn't make much sense from my point of view. If they had kept the Saturn, they wouldn't have had to rewrite everything and produce a lot of bugs and differences. Furthermore, the Saturn has enough power for HP-15C, HP-16C, HP-41C/CV/CX, HP-42S and HP-71B based calculators in the future.


#67

A few people from HP have said in the past that fabbing a new CPU would cost at least half a million dollars - too much of an investment.

The 6502 is fine for a low-end scientific, and ARM makes sense for the more powerful calculators. The ARM is fairly cheap, well supported and low power. It is also much faster than the Saturn: the main performance limitation of the Saturn is the 4-bit bus, while the 49g+ uses a 16-bit bus. Also, there are many better development tools for chips like the ARM than for the Saturn. With an industry-standard CPU, HP can easily get faster chips later without another huge investment in refabbing the Saturn yet again.



#68

I haven't compared Saturn to ARM but to a 6502. And Saturn doesn't need to be developed (because it was developed about 20 years ago) but just to stay in production. Saturn is a good CPU and would serve future HP calculators very well.

BTW, we all know the "quality" of HP's last experiment with an ARM CPU. It is called the HP-49G+ ...

The problem with the latest HP calculators isn't 4-bit data bus but the lack of overall production quality.


#69

One problem is that HP may not be able to make new Saturn chips without spending a lot of money. Integrated circuit production lines don't last forever. They become obsolete and are scrapped. Even if you have all the original masks and layouts, switching to a new production line, using a more modern process, may require expensive revisions to make the old design compatible with the new production line.

The SunPlus chip that HP is using in the HP-33S is not just a 6502. It's an enhanced 6502 core with integrated LCD controller, ROM and RAM. That's a substantial gain in integration and functionality over the Saturn, at the expense of losing software compatibility. HP may have decided that it was better to rewrite the software for the 6502 if it meant that they could use an off-the-shelf, inexpensive, all-in-one chip for their calculator.


#70

Quote:
The SunPlus chip that HP is using in the HP-33S is not just a 6502. It's an enhanced 6502 core with integrated LCD controller, ROM and RAM. That's a substantial gain in integration and functionality over the Saturn, at the expense of losing software compatibility. HP may have decided that it was better to rewrite the software for the 6502 if it meant that they could use an off-the-shelf, inexpensive, all-in-one chip for their calculator.

AFAIK, none of the two hundred million 6502 cores being produced every year today are limited to the instruction set or slow speeds of 25 years ago. They all have an enhanced instruction set, and most of the cores are in microcontrollers that have varying amounts and types of RAM, ROM, uP support, timers, and I/O, all in one IC. Western Design Center licenses the IP.

Edited: 1 Aug 2004, 2:31 a.m.

#71

"And Saturn doesn't need to be developed (because it was developed about 20 years ago) but just to stay in production."

HP no longer makes the Saturn - it was made by NEC for many years, until the production line became obsolete. A process shrink isn't easy either: the Saturn is integrated on a mixed-signal IC, the Yorke, and the analog sections mean a process change is tricky.

"Saturn is a good CPU and would serve future HP calculators very well."

It is good because the software doesn't need to be changed, but I seriously think modern CPUs offer much better MIPS/Watt.


"BTW, we all know the "quality" of HP's last experiment with ARM CPU. It is called HP-49G+ ..."

What does the choice of CPU have anything to do with the poor quality of the keyboard?


#72

Quote:
What does the choice of CPU have anything to do with the poor quality of the keyboard?

Nothing, really ... except that we still aren't sure if the keyboard problem is due to a bad keyboard or a firmware/ARM/emulation problem (recently there was a discussion about this on comp.sys.hp48).

#73

Quote:
And Saturn doesn't need to be developed (because it was developed about 20 years ago) but just to stay in production.

ASICs can't just "stay in production". The volume is low enough that the foundry (NEC for the Saturn parts used in the HP 48 etc.) eventually discontinues the old process in favor of newer ones. Thus to "stay in production" you need to tape out the chip again. Even if you're using logic synthesis from Verilog or VHDL, a tapeout still costs a *LARGE* amount of money. Thus it is unsurprising that HP would prefer to switch to commercially available parts from SunPlus and Samsung.

About a decade ago I worked for a company that had a fairly successful product that had been on the market for about 7-8 years. It used an ASIC fabbed by Fujitsu. By that time, the process was so old and volumes were so low that Fujitsu only did a line start every few months. When you do a line start, it takes a while to get the process parameters right, so a lot of waste is produced before anything useful. Still, Fujitsu was willing to try to make our chip. But unfortunately things were so old that two consecutive attempts at a line start resulted in zero yield. We had to scramble to replace the ASIC with a daughterboard with an FPGA.

#74

Although quite some time has passed since I started this thread, I just wanted to let all those involved in its development know that I appreciated the answers--very interesting!

Best regards,

Bill

