[LONG] Signed integers: HP-16C vs. "the rest"



#2

Fellow calculator enthusiasts --

Recently, I was using a Microsoft Word macro when a Microsoft Visual Basic error message came up:

"Run-time error 80004005 (-2147467259) ....."

I fixed the problem, but the simple math exercise intrigued me. It clearly looked like an eight-digit (32-bit) hexadecimal code converted to a signed decimal-integer equivalent. This task is basically the extraction of a typical C-language "long int" variable. How efficiently can various calculators with built-in binary ("base") conversions obtain the desired result? My findings may be disconcerting to the reader...
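The arithmetic behind the exercise is simple enough to sketch in a few lines of Python (an illustration of my own, not anything the calculators run): treat the eight hex digits as an unsigned 32-bit value, and subtract 2^32 whenever the sign bit is set.

```python
# Reinterpret a 32-bit hexadecimal code as a 2's-complement signed integer.
# (Illustrative sketch only; the names here are my own.)

def to_signed(value, bits=32):
    """Reinterpret an unsigned 'bits'-wide integer as 2's-complement signed."""
    if value >= 1 << (bits - 1):    # sign bit set -> the value is negative
        value -= 1 << bits          # subtract 2^bits to recover the signed value
    return value

print(to_signed(0x80004005))        # -2147467259, the code in the VB error message
```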

First up: the "gold standard"

HP-16C

The user hits DEC, enters "32" and hits WSIZE to set a 32-bit word. Then, the user can choose "2's" complement, and verify these settings using STATUS. Then, the user hits HEX, enters "80004005", hits DEC to convert and reads,

-47467259 .d   (in Window 0)
-21 d. (in Window 1)

for a complete value of -2,147,467,259 -- the correct result.

Granted, it's a minor annoyance that the output is displayed in two parts. However, eight digits is a digestible size for "chunks" of long integers, and the user is given exacting control over the format of data. The "DEC" mode offers a true integer range up to 19 digits.

VERDICT: Rigorous and trustworthy.

Pioneer models:

On a 32SII, keying in "80004005" after selecting HEX, then converting with DEC yielded 2,147,500,037. That's the correct unsigned integer, but not the correct (2's complement) signed integer.

Experimentation (or RTFM) reveals that the Pioneer models use a 36-bit 2's-complement integer representation. This makes perfect sense with the 12-digit display, allowing full 12-digit octal and 9-digit hexadecimal. Decimal conversions return to normal floating-point mode (not decimal integer), but the results will not exceed 11 digits. Up to 36 binary digits can be entered or converted, displayable by scrolling 12-digit chunks.

However, 36 bits is not the same as 32 bits. To properly represent this particular complemented, signed integer requires that the hex value be preceded by an "F". This provides a sign bit and three leading complemented bits in the correct positions.

HEX, "F80004005", DEC yields -2,147,467,259.00 on the 20S, 27S, 32S, 32SII, and 42S (and probably the 22S?). The 33S performs the same way.
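Why the leading "F" works can be sketched in Python (an illustration of the sign extension, with names of my own invention): prepending "F" copies the sign bit into the four extra positions of the 36-bit word, which is exactly what sign extension does.

```python
# Sign-extend a 32-bit 2's-complement value into a wider word, as the
# leading "F" does on the 36-bit Pioneers. (Illustrative sketch; my own names.)

def sign_extend(value, from_bits, to_bits):
    """Copy the sign bit of a from_bits-wide value into the upper bits."""
    if value >= 1 << (from_bits - 1):                       # negative in from_bits
        value |= ((1 << to_bits) - 1) ^ ((1 << from_bits) - 1)
    return value

extended = sign_extend(0x80004005, 32, 36)
print(hex(extended))            # 0xf80004005 -- the "F"-prefixed input
print(extended - (1 << 36))     # -2147467259 -- its 36-bit signed value
```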

VERDICT: The approach is not unreasonable, but the user must take caution to avoid incorrect results.

RPL-based models

I tried the 28C, 48G, and 49G.

The user can set word sizes from 1 to 64 bits using "STWS", but the values are treated only as unsigned. Thus, the RPL-based models cannot automatically handle signed integers. Also, these calculators will accept extra input digits beyond the word boundaries, then simply discard the superfluous high-order bits. The HP-16C and Pioneers do not accept invalid input.

After entering/selecting "HEX" and setting the word size if desired, the user must precede the entered value by a "#" and enter it onto the stack from the input buffer.

Even after setting the word to 32 bits, entering "#80004005" in HEX mode, then entering/selecting "DEC" gives the incorrect result of "#2147500037d" due to unsigned-integer representation.

The HP-48 and HP-49 models allow the user to do a "change sign" on a binary integer to get a 2's-complement conversion, but the HP-28C doesn't -- the user must do a "NOT", then add 1. Either of these approaches will give the result "#2147467259d", which is correct except for the missing sign.

The user would have to write an RPL program to automate this type of conversion.
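The manual negation the 28C forces on the user follows the standard 2's-complement identity -x = (NOT x) + 1. A Python sketch of the workaround (the function name is my own; on the calculator this is done with the NOT and + keys):

```python
# The HP-28C workaround: on an unsigned 32-bit machine, obtain the magnitude
# of a negative code by bitwise NOT followed by adding 1.
# (Illustrative sketch; names are my own.)

WORD_MASK = (1 << 32) - 1   # 32-bit word

def negate(value):
    """2's-complement negation within the word: -x == (NOT x) + 1."""
    return ((value ^ WORD_MASK) + 1) & WORD_MASK

print(negate(0x80004005))   # 2147467259 -- the magnitude; the sign is up to you
```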

VERDICT: Awful! Some simple problems are made complicated -- the functionality is there, but it's not implemented thoughtfully.

Non-HP "cheapos"

I also have several low-end calc's with base conversions built in -- a Casio fx-115MS and a Texas Instruments TI-36X. Both of these models are NCEES-approved for standardized EIT/FE and PE exams.

Each of these calculators limits the size of arguments in the respective bases by their 10-digit display length, not by bit count. So, on both calculators, base-2 binary integers are 10 bits, and base-8 octal integers are 30 bits. Base-16 hexadecimal integers are 32 bits on the Casio, and 40 bits on the TI. Integers are represented as signed 2's-complement on both.

Observing the 40-bit word length, the TI returned the correct -2147467259 when provided "FF80004005" as HEX-mode input, converted to floating-point using DEC.

Almost by happenstance, the Casio handled this particular problem most adroitly of any unit I tried. Its fixed 32-bit hexadecimal, 2's-complement signed integers and 10-digit display made it ideally suited for the task. "80004005=" entered in HEX mode and converted with DEC yielded -2147467259. Other values (e.g., 32-bit unsigned integers or shorter signed integers) would not be converted as conveniently; adjustments to input or output would be required.

The different word sizes can produce curious results. For example, -1 decimal on these units displays as octal "7777777777" or binary "1111111111", suggesting that a string of 30 bits can be represented as a string of 10 bits! Another consequence: what is representable on these calculators as hexadecimal might not be representable as octal, decimal, or binary. Attempts to convert "downward" a hexadecimal number within the uppermost range of representable values can produce a non-recoverable and bogus message of "Math Error" (Casio) or "Error" (TI).
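The mismatched word sizes are easy to make concrete: a value fits a b-bit 2's-complement word iff -2^(b-1) <= v < 2^(b-1). A quick Python check (my own sketch) against the display-limited sizes described above:

```python
# Which of the display-limited word sizes can hold a given signed value?
# (Illustrative sketch; the word sizes are those described in the post.)

def fits(value, bits):
    """True if 'value' is representable as a 'bits'-wide 2's-complement integer."""
    return -(1 << (bits - 1)) <= value < (1 << (bits - 1))

print(fits(-1, 10))             # True: -1 displays as ten 1-bits in binary
print(fits(-2147467259, 30))    # False: overflows the 30-bit octal word
print(fits(-2147467259, 32))    # True: fits the 32-bit hexadecimal word
```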

One more thing: the Casio will simply return "Math Error" upon buffer entry after blithely accepting superfluous input digits.

VERDICT: The unwitting schoolkids and testees could be flummoxed.

------------------------------------------------------------------

In summary, let's acknowledge the fine and distinctive HP-16C, which also recognizes unsigned and 1's-complement signed integers (which were not yet antiquated in the 1980's).

Comments, experiences?

-- KS


Edited: 27 Aug 2005, 3:17 p.m. after one or more responses were posted


#3

Hi, Karl;

first of all, thanks for such a well-written post. It is complete and lets us see the 'big picture'.

The only actual addition of mine would be the following. One of the subjects I'm (gladly!) teaching again is 'Scientific Computing', which mainly deals with representing and handling floating-point numbers in computer programs. Number representation (BCD and IEEE 754), error handling, calculation of transcendental functions, and specific math-related programs are on the menu.

Just this week I was showing them what 1's and 2's complement representations are, and the first idea I brought to their minds was 'How do we represent a negative quantity using binary numbers?'. Some of them usually ask me, 'Is it possible to add a minus sign to a binary number?'. I then begin by defining word size, most- and least-significant bit, then binary addition and the concept of a negative representation. The simple binary complement (1's complement) allows one representation for zero and another representation for its complement, hence the '-0' decimal equivalent. By using the 2's complement representation instead, the system gives us one representation for zero and one 'extra' negative representation. In any case, with 2's complement representation we can easily 'see' negative quantities represented by binary numbers after subtracting.

Yes, I know that all of these facts are well known to those who deal with binary arithmetic and the like, but what I'd add as an experience of mine is that I use the HP-16C User's Guide contents and examples as the main text for these particular classes, and I show both the calculator and the user's guide to the students. (It always happens that someone says, 'I didn't know about this version of the HP-12C! Where can I buy one like it?' Not that tears come to my eyes when hearing this kind of question, but I actually feel sad for them...) I also make clear to them that 1's and 2's complement representations are specific decimal representations that allow us humans to actually 'see' negative numbers where there are only ones and zeroes, and that the HP-16C actually holds the internal binary representation even when showing the decimal negative complement; i.e., the binary number is not changed, only the decimal representation follows the negative-complement convention. Neither binary, octal, nor hex numbers are shown with a minus sign; instead, they must contain all leading bits up to the MSB when they represent negative quantities under the 1's or 2's complement convention.
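The two zeros of 1's complement that Luiz mentions can be shown in a few lines (a sketch of my own, using an 8-bit word for brevity):

```python
# 1's complement has two zero patterns; 2's complement has one (and gains an
# extra negative value instead). Sketched for an 8-bit word.

MASK = 0xFF   # 8-bit word

plus_zero = 0b00000000
minus_zero = plus_zero ^ MASK            # 1's-complement negation of zero
print(bin(minus_zero))                   # 0b11111111 -- the "-0" pattern

# 2's-complement negation of zero wraps back to zero:
print(((plus_zero ^ MASK) + 1) & MASK)   # 0
```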

It is not unusual for some students to need some extra, significant time and many questions before these concepts 'click'.

Just my 2¢.

Cheers.

Luiz (Brazil)


#4

It's important to remember that the various internal representations were done many many years ago, when both computing power and memory were vastly more expensive. Would we use the same representations if we were starting from scratch today?

IBM's (and others') use of BCD in hardware suggests not. Even when computing was expensive, some thought that the internal representation should be closer to the way humans think about numbers. Modern languages give us support for arbitrary-precision integers, fractions, etc. Perhaps modern hardware should do the same.

Despite the clear efficiency of using 2's complement binary math, we could do better today, trading some performance for more representative numbers.

Still, a great post! Thanks for the education.


#5

Hi, John;

your question:

Quote:
Would we use the same representations if we were starting from scratch today?
I don't think I dare answer such a question, for a few reasons I can see right now.

As far as I know, computer usage today differs from what computers were mainly designed for: an aid for science and business, so that problems in those fields could be solved faster and more reliably. Now we have so many fields where computers are not merely an aid but the main reason for the activity's existence that it is somehow hard for me to give a straight answer.

In fact, if we do not think of any particular development field, binary arithmetic is faster than floating point. Anyway, in applications where building high-resolution 3D images in real time is the main concern, like 'games and fun', we can count on about 8 GHz core processors commercially available, and I can't imagine any software that would be slow running on these machines (add low-cost 2 GB of RAM and we're done). So your question is pertinent, and daring to answer it falls to the hands of today's digital designers.

Are any of them reading these posts?

Cheers.

Luiz (Brazil)

#6

IBM had a very good reason for using BCD on their business-oriented computers. Think of a typical job, payroll. It reads in a sequential file of punch cards containing employee ID and hours worked, computes wages and deductions, updates a master file, and prints thousands of checks. All I/O is fixed-record-length with data fields in EBCDIC or Hollerith code. Hardware multiply/divide, if present, is slow and avoided whenever possible. By using BCD, input and output format conversions are limited to conversion between a character code (EBCDIC or Hollerith) and BCD, which is simple and can be implemented in hard-wired logic. The system is optimized for very fast and efficient I/O, not number crunching. Throwing binary integers into the mix requires expensive conversions between strings in character code and binary integers for every record that is read or written. All of the peripherals are dumb by modern standards, relying on hard-wired logic, not microprocessors. They are incapable of converting between binary and character codes. Besides efficiency, BCD arithmetic avoids problems with errors introduced by using binary numbers to represent decimal numbers.
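The representation errors mentioned at the end are easy to demonstrate; Python's decimal module serves as a stand-in for BCD-style arithmetic (my own illustration, nothing to do with IBM's hardware):

```python
# Binary floating point cannot represent most decimal fractions exactly;
# decimal (BCD-like) arithmetic can. (Illustrative sketch only.)

from decimal import Decimal

print(0.1 + 0.2 == 0.3)                                   # False in binary
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True in decimal
```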

#7

I know I am going to be flamed, but here it is:

You can use the Apple OSX calculator (which has an RPN programmer option!)

The word size is 64 bits, so you can either enter 0xFFFFFFFF80004005 in HEX mode and press DEC to see -2147467259, or enter the decimal number and press HEX to see 0xFFFFFFFF80004005.

You can even select a binary display to see the binary representation of the number in addition to the HEX/DEC/OCT display.

**vp


#8

Vassilis --

No, certainly no flame from me...

I'm sure there exists much more capable computer-based software to perform computer-science computations. That's probably why the full functionality of the 16C was never provided in any replacement HP calculator (as my comparison aptly showed).

The Mac OS-X calculator in an Apple desktop or laptop is not very portable, and the word size and complement may be fixed.

Is there any capable computer-science software for a PDA? Such a software package should also be able to convert text in various formats.

-- KS


#9

I was in Wal-Mart today and checked the calculator display. I'm waiting and hoping for a sale price on the HP-33S but it hasn't happened yet. Then I spotted the LeWORLD Scientific Calculator priced at $4.23 ! At 2.75"x5"x0.5" thick it is truly shirt pocket size. It has all the usual scientific functions including hyperbolics. And, it has a base conversion capability with a forty bit hexadecimal display. To do the conversion discussed in this thread one simply has to press 2ndF HEX, enter FF80004005, press 2ndF DEC and -2147467259 appears in the display. No messing around with two's complement or word length commands. What could be easier than that? Back in the early sixties when we did conversions using a Friden we would have sold our soul for a capability such as that.

One entry in the thread asks "Would we use the same representations if we were starting from scratch today." Absolutely. Without the benefit of fifty years of development I can't see how we could do anything differently.

Here's a question related to a frequently discussed subject in the Museum. If LeWorld can deliver that much power in a scientific calculator for less than five dollars today then how long will it be before they can offer the capability of an HP-15C for less than twenty dollars?


#10

Palmer stated,

Quote:
And, it (the LeWORLD Scientific Calculator priced at $4.23) has a base conversion capability with a forty bit hexadecimal display. To do the conversion discussed in this thread one simply has to press 2ndF HEX, enter FF80004005, press 2ndF DEC and -2147467259 appears in the display.

That sounds just like the TI-36X I described in my original post, or even a modern TI-30. I'll bet that it has the same limitations and pitfalls, too...

The user (probably a secondary-school student) must know that the hexadecimal word size is a fixed 40 bits, and must prepend enough "F"s to fill out the word, if and only if the hex code represents a signed integer and the first hexadecimal digit is >= 8. Otherwise, don't "F-bomb" the input. ;-)
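The padding rule can be sketched mechanically (a hypothetical helper of my own; the calculator user does this by eye):

```python
# Pad a hex code with leading "F"s out to a 10-hex-digit (40-bit) word,
# but only when its original sign bit is set. (Illustrative sketch; my names.)

def pad_for_word(hex_code, from_bits, word_hex_digits=10):
    """Sign-extend a hex string by prepending F's when the value is negative."""
    value = int(hex_code, 16)
    if value >= 1 << (from_bits - 1):           # sign bit set in original width
        return hex_code.rjust(word_hex_digits, "F")
    return hex_code                             # non-negative: leave unpadded

print(pad_for_word("80004005", 32))   # FF80004005 -- the input described above
print(pad_for_word("00004005", 32))   # 00004005 -- sign bit clear, unchanged
```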

Quote:
No messing around with two's complement or word length commands. What could be easier than that?

Well, suppose that the input is not a 2's-complement, 40-bit word. The mathematical or logical operations might be erroneous, or the user could enter invalid input (e.g., too many digits).

Quote:
Back in the early sixties when we did conversions using a Friden we would have sold our soul for a capability such as that.

That I can imagine. However, 20 years later (in 1982), one could have obtained the excellent capabilities of the HP-16C for less than $150, and retained his mortal soul... ;-)

-- KS


#11

Karl wrote "... 20 years later (in 1982), one could have obtained the excellent capabilities of the HP-16C ..." In 1977, five years before the advent of the HP-16C the TI Programmer had provided many of the same functions. I couldn't afford a TI Programmer but by 1979 I was using a Programmer Simulator which would provide the same capability on the TI-59. Hexadecimal operations were tortured since the TI-59 didn't have an alphanumeric display. That wasn't a big problem for me because most of the work I was doing at the time used octal not hexadecimal.

A historical note: Mier-Jedrzejowicz's Guide to HP Handheld Calculators and Computers notes on page 46 that the HP-16C was "...intended at least in part to compete with the TI Programmer calculator ..."


#12

And before the TI Programmer, in 1973 TI introduced the SR-22, a desktop calculator that offered binary, octal, decimal, and hexadecimal modes. It did not have logical operations (e.g., OR, AND, XOR, NOT) though. But it did have floating point in all four bases, which the Programmer and HP-16C did not have.


#13

I found some old correspondence with Gene and with Viktor Toth on the subject of base conversions. They told me that the HP-65 had a built-in conversion capability and that the HP-67 had a conversion capability in Math Pak 1. I don't have either the HP-65 or the Math Pak 1 in my collection so I can't confirm that.


#14

The HP-65 had built-in decimal-to-octal and octal-to-decimal conversions.

There were many programs written for the HP-67 to allow fairly arbitrary base conversions from any base to any base. Some were even small routines that appeared in the HP Key Notes magazine.


#15

Quote:
The HP-65 had built-in decimal-to-octal and octal-to-decimal conversions.

The engineers developing it may have put it in because they would find it useful themselves; octal was the base used in HP's internal calculator development tools from the HP-35 through the HP-41C and Voyager series. They only started using hexadecimal with the Saturn architecture introduced with the HP-71B, though most of the people outside HP that worked on reverse-engineering the HP-41C and writing their own microcode for it used hexadecimal.

The first-generation processor used in the HP-65 could only deal with BCD digits (0 through 9) and did not have any pure binary mode, so the only way they could have worked with hexadecimal and displayed it would have been as two-digit groups ("00" through "15").


#16

One of the more improbable places to have found a hexadecimal conversion capability was the Radio Shack Color Computer, which came out in 1980. In its extended BASIC, the user simply entered PRINT HEX$( n ), where n was a decimal number between 0 and 65535, and the machine responded with a hexadecimal number between 0 and FFFF.
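For comparison, the same conversion is a one-liner in most modern languages; a Python equivalent of PRINT HEX$(n) (my own sketch, not the original BASIC):

```python
# A modern stand-in for the CoCo's HEX$: uppercase hexadecimal formatting.

def hex_string(n):
    """Uppercase hex string for 0 <= n <= 65535, in the spirit of HEX$."""
    return format(n, "X")

print(hex_string(65535))   # FFFF
```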


#17

The processor in the Tandy Colour Computer was a Motorola MC6809, a very nice little 8/16-bit CPU for its time. The 6809E was a true MULTIPROCESSING-oriented CPU, a very interesting piece of hardware that didn't go all that far, but it was used quite a bit in Japanese (Canon) electronic typewriters (built-in full-function word processor with graphical layout tools and spell checker in system ROM, circa 1985).

The 6809E was also used in a 1983 video game by Taito called QIX. One could easily write high-level compilers (like Pascal or C) for the 6809 thanks to the really nice addressing modes in its instruction set. There was also a disk OS, "OS-9".

DW

#18

Quote:
...octal was the base used in HP's internal calculator development tools from the HP-35 through the
HP-41C and Voyager series.

It also was the base used in the HP3000 minicomputers, which were stack-based machines too. I remember using my HP-16C in octal mode many times while poring through HP3000 stack dumps.


#19

Quote:
[octal] also was the base used in the HP3000 minicomputers

And also the HP-2116/2100/21MX/1000 family of 16-bit minicomputers, which were not stack-based the way the 3000 was. It appears that the 21xx/1000 family was heavily used for calculator development into the early 1980s, after which 3000 (MPE) and 9000 (HP-UX) systems were used.


#20

I remember drooling over the 16C when it first came out, but couldn't afford it on my meagre student's budget.

Now here it is, years later, when I really need one, I *still* cannot afford a (real) HP-16C! :-)

I'm a software engineer writing network code and looking at bits in the debugger. I've got several calculator emulators and other tools on the PC; I've got several calculators accessible to me on my desk or from co-workers, and it would STILL be probably a lot easier if I just had the 16C. What a tribute to the HP way!

PS: Aside from spending all my birthday money for the next 3 years at eBay, what are my other options? How hard would it be to do an RPL emulation of the basic binary operations, either on the HP-28S or the HP-48G? (It's a "G" not a "GX" or "G+" so memory is limited.)

-- john b


#21

Cameron Paine has written a very nice simulator for the 16C. Well worth a look. You can write to him at:

cbp@null.net

If you can't find him, I could dig up the URL. I think he has written in the forum a bit too.

Nice simulator; though it's still in alpha, I haven't noticed any bugs yet. Very nice machine, too. Indispensable for computer software (or hardware) engineering.

DW

#22

I have always liked Håkan Thörngren's 16C simulator (http://www.hpcalc.org/details.php?id=317). It's 11KB so it would fit on a 48G if you didn't have much else installed.

For the space-conscious, Craig A. Finseth has a User-RPL version that is only 1.4KB (http://www.hpcalc.org/details.php?id=319).


#23

There's also the HP 16c "emulator" (technically I believe it is also a simulator) by Jake Schwartz, available at:

http://www.pahhc.org/mul8r.htm

It's pay-ware, but it's a good package, only $25, full-featured, and works as you'd hope it would.

-cam


#24

Hi Megarat. Thanks for the info. I've heard of Jake Schwartz, but I can't remember where. I know he is an old hand...

Re your last post: Actually it's "just" a SIMULATOR. So you are right to point this out.

An emulator is HARDWARE (usually, but not always, plus software/firmware) which "emulates" (dictionary def.) some system or processor.

Simulators are good, well-written ones are great, and a bug-free one is a piece of software "art", imho.

But... they are NOT emulators...

I only wrote this post because a few people I have seen on the 'net who wrote simulators are so (perhaps justifiably) proud of them that they call them emulators, insisting that their "emulator" is so good it's different. Well it isn't... it's software.

(Calling a Goose a duck does not make it a duck; it's still a goose. Once a lot of people start calling geese ducks then they're well on the way to some real confusion...)

This is only offered as my opinion: Calling a simulator an emulator brands one as an amateur... (which is okay of course).

All the best everyone.

DW


#25

This Message was deleted. This empty message preserves the threading when a post with followup(s) is deleted. If all followups have been removed, the original poster may delete this post again to make this placeholder disappear.


#26

Yes. IBM invented computer emulation in the early 1960s, which they defined to be simulation assisted by special-purpose hardware and/or microcode. It was first done on the System/360; various models emulated various older IBM computers. For instance, the 360/30 could emulate a 1401.

None of the published calculator simulators I've seen use any special-purpose hardware or microcode.

HP did have a Saturn emulator for their own software development efforts. It was a moderately large box. I haven't been able to get my hands on one of those, but I do have two of their Saturn memory emulator boxes.

Eric Smith

(author of the Nonpareil simulator)

#27

Quote:
So, Emu41 & Emu71 by J. F. Garnier, V41 by Warren Furlow, Emu42 & Emu48 by Christoph Giesselink are all 'simulators', not 'emulators'?

Right. If you can download it (ie, software only), then it's only a simulator. An emulator includes hardware. Microprocessor simulators can be had for the downloading, but emulators have a pod that actually plugs into your printed circuit board, into your microprocessor socket, and runs just as the processor would, in the actual hardware, but gives you a "window" into the insides and some extra control too. It is unfortunate that the distinction has been fading as the term "emulator" gets abused so much, because we do still need a way to differentiate the two. It probably doesn't matter as much in the world of calculators where there's often no I/O at all except the keyboard and display, but it sure makes a difference where there's a lot of industrial I/O on a printed circuit board. That kind of thing cannot be emulated merely in software. I have an American Automation emulator catalog here someplace from some years back, and every emulator in it is thousands of dollars and has a pod to plug into your board, with cables that go to the PC.

Edited: 13 Sept 2005, 4:58 a.m.


#28

Quote:
It is unfortunate that the distinction has been fading as the term "emulator" gets abused so much, because we do still need a way to differentiate the two.

On the other hand, it is also useful to differentiate between, on the one hand, simulating the hardware (e.g. VMware, Nonpareil, and EMU48) and running the original software (Windows, Linux, a calculator ROM), and on the other, simulating a software or user interface at a higher level (e.g. WINE and Joseph M. Newcomer's HP16c).

WINE stands for "Wine Is Not an Emulator". This does not mean it is not a hardware emulator (that much is obvious); it means it does not emulate/simulate the hardware, but rather the Windows API.

Since most people have no experience with hardware emulation, it is natural that the term "emulator" is recycled to provide this distinction. The kinds of bugs or problems one can expect from the two kinds of software simulators/emulators are very different.


#29

I'm with Gunnar on this one. I remember the original microprocessor in-circuit emulators (in fact, I used to sell the MDS-II and ICE for Intel), which would emulate the processor in hardware, with additional functionality for debugging.

However, in these days of amazingly fast processors, it's possible to emulate one processor, using software running on another. So, for example, a Mac can emulate a PC, and thus run PC applications like MS Office. A different program that ran natively on a Mac, but presented the same user interface and functionality, would be a simulation of MS Office. Same result, different ways of getting there.

So, over years of usage, I've concluded that when one runs the original code on a different hardware platform, as Eric does with Nonpareil, that's emulation of the original hardware. However, a program that behaves the same way as an HP calc, but is not based on the original calculator firmware, is a simulation.

Over thirty years in this business, I've never had trouble communicating with these terms until recently. The definitions I use are consistent with IBM mainframe microcode emulation of earlier systems, simulation of physical/economic/ecological systems (remember doing foxes & rabbits with fourth-order Runge-Kutta?) and with a lookup of emulate & simulate on dictionary.com that I did a few minutes ago.

Personally, I suspect the growing confusion is mostly down to the WINE project promulgating their own definitions. IMHO, WINE emulates Windows, and they should get over it. Bah! Time for a second coffee, so I won't be so cranky.

Best,

--- Les

[http://www.lesbell.com.au]

#30

Quote:
An emulator includes hardware.

So, PC running Emu48 is an emulator. Right?


#31

For the 5000th time, please stop stealing my name.

#32

NO.

Any PC running any program that runs as a virtual calculator, etc. is:

a PC running a SIMULATOR (!)

(Your Goose honks! It will never be a "Duck".)

(For people who can't get the point, the easy way is: if it has wires hanging out of it that go to a test clip that sits in the processor socket of a test system, it's an EMULATOR. If it's just software, it's ALWAYS, and I mean ALWAYS, a SIMULATOR.)

It's pretty simple, really.

DW

Edited: 14 Sept 2005, 3:40 a.m.


#33

So the PDP-11 mode of the VAX is what?

(Hardware support for 16 bit emulation, but no wires sticking out. 8)


#34

The "wires sticking out" analogy was perhaps rather crude, but not totally irrelevant. To be a true emulator of a calculator that has I/O other than keyboard and display, I would consider it to have to be able to take the same modules the calculator does (since modules are about the only other hardware the calculator would interface to). Someone mentioned the HP-48. I'm not very familiar with that line, but I believe at least one of them could take plug-in modules. I would say an HP-48 emulator would have to be able to handle the HP-48 modules, at least if the modules were other than RAM (which a PC does not lack). In the case of a 41, the emulator would have to be able to handle any module the 41s can, including ROMs, the timer module (which can wake up the calculator), HP-IL, the first non-HP-IL printer (I think 82143 was the number), the optical wand, the card reader, non-HP-IL data-acquisition accessories, etc.

A microprocessor emulator can be used to troubleshoot your computer-board hardware, not just debug software. It actually plugs into the board and uses the board's memory, I/O, and so on. Similarly, I would expect that a calculator emulator should be able to help troubleshoot module-interfacing problems, which might be bus contention, timing problems, etc. A mere simulator cannot do this.


#35

I'm reminded of one of the arguments against "hard" artificial intelligence, made by a very clever person from UC Berkeley whose name I can't recall at the moment. He asked whether you would expect a stomach simulated in software to digest food the way a human stomach does. The point is that a simulation of intelligence on a computer is unlikely to start thinking at all, let alone like a human. I find that a persuasive argument, but the point in this context is that an emulator would be like an artificial brain, with artificial neurons whose functions would be close analogies of "real" neurons. It might actually think on its own.

Back to my point about the PDP-11 mode in the VAX-11 CPU. This involved a microcode implementation of the 16-bit PDP-11 instruction set on the 32-bit VAX-11 CPU. The mode was changed by issuing a single instruction, if I recall correctly. You could do that with one VMS task, and the OS would manage switching in and out of PDP-11 mode as context switches were taken. I had my first course in assembly language in PDP-11 Macro Assembler running on a VAX in emulation mode. It sounds to me like this fits the definition of an emulator, since the machine interacts with the rest of the hardware like a PDP-11 when it is in that mode.

But what about the new (old) "virtualization" systems like VMware? This is a hybrid system in which the virtualization layer intercepts hardware access by the "guest" operating system and provides services to that guest by interacting with the "host" OS and the real computer hardware. That seems to have elements of emulation and simulation going on at the same time.


#36

VMware is a simulator, with access to real hardware (which is not missing). There is nothing special about it apart from it being a really nice piece of work.

The DEC VAX's PDP-11 emulation requires a bit of hardware to do that, but it is all built into the system logic.

DG did a similar thing around the same time (for the same market) with its 32-bit Eagle, backward-compatible with their 16-bit NOVA, selected on the fly with a "toggle" bit in each 32-bit Eagle instruction. There is a really nice book about its development by the very savvy Vietnam-war journalist Tracy Kidder, called "The Soul of a New Machine". A very good read!

DW


#37

Yup, I read that one when it came out. They had a Nova at my JC, so that was interesting. Then the next year they decommissioned the school's Nova and bought a VAX. And the rest is (obscure, personal) history. 8)

#38

Hi Garth. Right on, brother.
(Why do we bother?)

DW

#39

Hi,

From this point of view (which is in many respects correct), an EMULATOR is a piece of hardware that exactly replaces (emulates) a component or system.

So you should agree that my Emu41/71 are HP-IL system emulators: with the help of the HPIL/PC board they exactly replace an HP41- or HP71-based HP-IL controller :-) Except regarding the timing aspects, maybe, but I never called them Real-Time-Emulators... (Actually they are much faster).

More seriously, the point of view from Don is for sure correct for ICs: an emulator is a piece of hardware that replaces the original IC, a simulator is software only. From this point of view, the core code that simulates the calculator or system's CPU should be called a SIMULATOR.

I just notice there are industrial (not 'amateur' ;-) counter-examples against this definition, like the 68040, which 'emulates' in software some floating-point operations originally handled in hardware on the 68881/2... And many FPUs were fully 'emulated' in software at the time (for cost reasons), and many compilers provided FPU 'emulation' packages.

But calc 'emulators' are more than just CPU simulation, IMHO. They are complete calc emulation in the sense that the PC or PDA hardware/CPU simulator/GUI combination exactly replaces the calc in all respects (same response to the same stimuli), in a very similar way to how an IC hardware emulator replaces the actual IC, with the same kind of features (access to the internals) and limitations (different size, power consumption, etc...).

Again, it's just MY opinion and I fully accept and respect the other point of view, I'm a quite cool man and I'm not interested in fighting for just a question of words...

J-F


Edited: 15 Sept 2005, 12:10 p.m.


#40

Hi Jean-Francois,

Well said :-)


#41

This Message was deleted. This empty message preserves the threading when a post with followup(s) is deleted. If all followups have been removed, the original poster may delete this post again to make this placeholder disappear.


#42

Hi Hrast. Have you done any more work on your SIMULATOR?

Hi Don,

I don't have any simulator yet, so I cannot work on it. I really didn't want to be involved in this discussion because I don't care if you call it 'simulator' or 'emulator' or 'interpreter' or whatever you want. I tried to find some appropriate name, but 'emulator' was used by other people at the time I started to work on this software, so it cannot be THAT wrong. Perhaps something like 'HP-41 machine code interpreter' would be a better name, but I think I won't bother with this ...

#43

Hi Monsieur Garnier.

Hey, I just want to say congrats on your nice emulation of HP-IL along with Chris Klug and his ISA card.

No doubt your system is an emulator with the card and a simulator without!

On coprocessors: well that seems a tricky one until you look closely.

Motorola or Intel, it doesn't matter. The software is an emulator because (according to a MORE PRECISE definition of emulation, which is exactly replacing non-existent hardware in a HARDWARE SYSTEM) the software actually couples very, very closely with specialized software flags which precisely "couple" with (and therefore "mimic") flag pins on the missing co-processor.

I'd say an emulator is a system which has to couple to hardware it is directly involved with (even if some of that hardware is meant to be emulated when missing, such as a co-processor). In general it is supposed to be able to run at the full speed of the target system. If it goes faster, then that's fine.

Strictly speaking, full hardware emulators MUST EXACTLY DUPLICATE ALL TIMING...

The calc simulators are categorically simulators because there is NO SPECIFIC HARDWARE underneath them. You cannot run an HP-WAND on your simulator. Also, you CAN recompile the simulator to run on something very different from an IBM PC. So there is no hardware dependence. Further, the software does not DUPLICATE the calculator; there IS NO REAL display, just a visual picture.

This difference may seem to you to be nit-picking; it definitely is not.

Flight simulators look real; they are a complete, very accurate fantasy. This is the correct use of the term simulator.

There is nothing derogatory about the term simulator, so software writers don't need to be sensitive about it. But many are, for some strange reason. They also don't use their real names... Do they, Hrast. ;-)

My original point was just to try to educate a few people for the purpose of dispelling the loose use of language. It doesn't matter if people want to ignore me, I don't care. I still hear their geese honking and other people's ducks quacking, and can easily tell the difference.

In college, I once knew a student who pressed the lecturer on the concept of emulating or simulating certain hardware functions like multipliers, shifters, etc. in software.

He went so far with his questioning (along the lines of: "Well, can't you do THAT in software too...") that eventually the lecturer got exasperated and said words to the effect of "You have to have SOME hardware to run it all on!", smoke starting to come out of his ears as he spoke.

I wrote in a small (vain?) attempt to dispel similar ignorance on the part of a few people in the programming community. Their loss if they don't want to listen.

Interesting thread though and I really appreciated your comments.

DW


Edited: 16 Sept 2005, 1:38 a.m.


#44

We all know that Wikipedia is the fount from which flows all wisdom (all hail, Wikipedia!) The article on emulation seems to draw a distinction between what it calls a "software emulator" and a "simulation." It doesn't mention hardware emulation at all. (You might want to go add to the article, Don. The hardware angle is missing, and strikes me as important.)

Quote:
An emulator, in the most general sense, duplicates (provides an emulation of) the functions of one system with a different system, so that the second system appears to behave like the first system. Unlike a simulation, it does not attempt to precisely model the state of the device being emulated; it only attempts to reproduce its behavior.

In a technical sense, the Church-Turing thesis implies that any operating environment can be emulated within any other. In practice, it can be quite difficult, particularly when the exact behaviour of the system to be emulated is not documented and has to be deduced through reverse engineering. It also says nothing about timing constraints; if the emulator does not perform as quickly as the original hardware, the emulated software may run much more slowly than it would have on the original hardware.

A common form of emulation is that of a software emulator, a piece of computer software that allows certain computer programs to run on a platform (computer architecture and/or operating system) other than the one for which they were originally written. It does this by "emulating", or reproducing, the behavior of one type of computer on another by accepting the same data, executing the same programs, and achieving the same results.


#45

Quote:
They also don't use their real name... Do they, Hrast. ;-)

Actually they do...
He is Hrastprogrammer the way I could be Gnerprogrammer :-)

BTW: Hi Hrast (real names obfuscated), I spent my holidays in Croatia this summer. Nice country, nice people, nice food!!!

Massimo


#46

Hi Massimo,

I am very glad to hear that you enjoyed your holidays here. We are planning to go to Italy for a few days, perhaps this autumn or next spring, I don't know yet.

Actually they do... He is Hrastprogrammer the way I could be Gnerprogrammer :-)

Yes, Hrast is just a "half" of my full name :-)

Best regards.

Hrast


Edited: 16 Sept 2005, 2:45 a.m.

#47

They also don't use their real name... Do they, Hrast. ;-)

I don't think my real name is that important for this forum. And there is a simple practical reason for such a name: my real name contains 2 characters which exist in my mother language but don't exist in English, so replacing them with some "equivalents" would not be my name, either. Friends and users of my software know my real name, of course.




#49

OK, accepted.


Edited: 22 Sept 2005, 3:01 a.m. after one or more responses were posted




#51

There is more than one person using the name '.'. I was the first. I don't know who the other guy is.

My 2 cents: I feel that if a program appears to the user to do the same thing as a calculator, but is rewritten 'under the hood', it is a simulator. If it actually runs the calculator's machine code via a software interpreter, it is a software emulator.
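That distinction can be sketched in a few lines of Python. Everything here (the opcode names, the toy stack machine) is invented purely for illustration and is not taken from any real calculator:

```python
# A "software emulator" in the sense above: a dispatch loop that
# interprets the target machine's instructions one at a time.
# The opcode set is hypothetical, for illustration only.

def interpret(program):
    """Run a tiny stack-machine program given as a list of opcode tuples."""
    stack = []
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "HALT":
            break
        else:
            raise ValueError(f"unknown opcode {op!r}")
        pc += 1
    return stack

# A "simulator" of the same feature skips the machine code entirely
# and just reimplements the behavior in the host language:
def simulate_add(a, b):
    return a + b

print(interpret([("PUSH", 2), ("PUSH", 3), ("ADD",), ("HALT",)]))  # [5]
print(simulate_add(2, 3))  # 5
```

Both produce the same answer; the difference is that only the first path would also reproduce the quirks of whatever original machine code it is fed.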

English is an evolving language. I think the term 'emulator' is perfectly acceptable.


.


#52

I will certify that there is an "authentic original recipe" "dot" and also one or more other persons that have posted with "dot". (Not that the other dots are necessarily inauthentic, but they are other persons anyway).

I have seen "dot" on this forum for more than two years.

I also share Don's feeling that this forum is awesome. I was blown away to discover this community--and that such luminaries as Hrastprogrammer and a number of other heavy-hitters in the calculator business do check in from time to time and add insightful discussion.

Edited: 16 Sept 2005, 7:19 a.m.


#53

Thank-you for your support.

.

#54

Quote:
my real name contains 2 characters which exist in my mother language but don't exist in English

As a side note, I thought about registering as Neherootchee to give a hint as to how to spell my last name... ;-)


#55

You must have some familiarity with the way Americans tend to butcher languages other than English, Massimo! 8)

(Actually, many of us do a fair job of butchering English, as well.)


#56

Quote:
You must have some familiarity with the way Americans tend to butcher languages other than English, Massimo! 8)

I think this is not an American peculiarity: I've seen and heard aberrations of every kind here (from my own hand and tongue, for one) too.

Greetings,

Massimo

#57

Hi Don,

[deleted]

Back to the subject:

- the flight simulator example is very interesting; the key point is that a simulator can't replace the original system: you can't travel with a flight simulator.

- I work in the electronic industry: the concepts of emulation and simulation are part of my everyday life. When the design engineer is in front of his workstation, he (or she) is dealing with simulation. When the test engineer is debugging his test program before the availability of the actual device, he/she has to use an emulator. The key point, again, is that a simulator can't physically act as the real thing, while an emulator can.

Just different points of view... It depends where you sit ...

Regards.

J-F

(Interesting thread, indeed).

P.S. added:
Finally, it may really just depend on where you sit: a prod engineer would see Emu48-like software as a simulator, because he can't use it to replace (emulate) a real HP48 on the production line, but users can call it an emulator because it can replace the HP calculator in all respects. Maybe we could all agree on this, especially as there are no more HP15/HP41/HP42/HP48/HP71 prod lines...

Edited: 28 Sept 2005, 6:42 a.m. after one or more responses were posted




#59

Hi,

I think that many engineers read this forum, myself included. Whilst the term 'emulator' had a specific meaning in an EE context, the word has evolved.

Since you are asking for evidence to back up my opinion I'll try and provide some.

Do a Google search for 'emulator' or "what is an emulator". If you examine the results, I think you will find the common usage of the word fits my description. Also try searching for define:emulator (without quotation marks).

A common definition seems to be

"A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system."

The English language is ambiguous. We are both right in different situations.

Thanks,

.

#60

I concur with your understanding, J-F.

As someone who has spent many hours sweating inside flight simulators, I can tell you that the reason they're a simulation is that they don't work the same way as a real aircraft. The old analog simulators (like the ATC-810 I knew and "loved") had almost nothing in common with an aircraft cockpit other than appearance and approximate behaviour - behind the panel it was completely different, with the "air speed indicator" being electrically driven, rather than by dynamic air pressure, for example. With the advent of digital cockpits, it's possible for there to be a lot more commonality between the simulator and the real aircraft, but there are still obvious distinctions.

I think some people have gotten hung up on the term "emulator" as derived from the original microprocessor In-Circuit Emulator. I remember (I used to be a sales engineer for Intel, in a previous life) the Intel MDS systems, where ICE was introduced, in part because the 8080 was entirely dynamic internally - it *had* to be clocked, so you couldn't stop it in the middle of execution and examine the various pins (unlike the later Z-80, for which this was a major advantage). So the ICE was introduced as a device which plugged into the 40-pin socket where an 8080 would go, but would allow single-stepping through ROM code while examining what was going on.

Yes, it was hardware - it had to be, obviously - and it was a very faithful emulation of the 8080's internals. But it's just *one* type of emulator; there are others, including those implemented in software.

Of course, in the end, it really doesn't matter whether you call a thing a simulator or an emulator, as long as it does what you need sufficiently faithfully.

Best,

--- Les

[http://www.lesbell.com.au]


#61

"emulate." Webster's Third New International Dictionary, Unabridged.
Merriam-Webster, 2002. http://unabridged.merriam-webster.com (16 Sep. 2005).

Main Entry: em·u·late 
Function: verb
Inflected Form(s): -ed/-ing/-s
Etymology: Latin aemulatus, past participle of aemulari, from
aemulus rivaling, envious, akin to Greek aitia cause -- more at
ETIOLOGY
transitive verb
1 a : to strive to equal or excel : imitate with the intention of
equaling or outdoing <a simplicity emulated without success by
numerous modern poets -- T.S.Eliot> b : IMITATE <book-covering
materials which one way or another emulate leather -- Book
Production> <some of the early Protestant congregations emulated
this custom, but soon gave up the practice -- American Guide Series:
Louisiana>
...

"simulate." Webster's Third New International Dictionary, Unabridged.
Merriam-Webster, 2002. http://unabridged.merriam-webster.com (16 Sep. 2005).

Main Entry: sim·u·late
Function: verb
Inflected Form(s): -ed/-ing/-s
Etymology: Latin simulatus, past participle of simulare to imitate,
represent, feign, from similis like, similar -- more at SAME
transitive verb
1 : to give the appearance or effect of : FEIGN, IMITATE <felt
obliged to simulate reluctance, and the air of having had her hand
forced -- Edith Wharton> <to simulate real mink, the muskrat pelts
are let out -- Pete Barrett> <pegs in the oak flooring further
simulate pioneer construction -- American Guide Series: Arkansas>
...

"jargon." Webster's Third New International Dictionary, Unabridged.
Merriam-Webster, 2002. http://unabridged.merriam-webster.com (16 Sep. 2005).

Main Entry: jar·gon   
Function: noun
Inflected Form(s): -s
Etymology: Middle English jargoun, from Middle French jargon,
probably of imitative origin
1 : chatter or twitter especially of a bird or animal
2 a : confused unintelligible language ...
3 a : the technical terminology or characteristic idiom of
specialists or workers in a particular activity or area of
knowledge; often : a pretentious or unnecessarily obscure and
esoteric terminology b : a special vocabulary or idiom fashionable
in a particular group or clique
4 : language vague in meaning and full of circumlocutions and long
high-sounding words
synonym see DIALECT

#62

Try http://www.dictionary.com, Howard; it seems to be more up-to-date. To my mind, it makes the key distinction quite nicely (reformatted slightly by me):

sim·u·late:

3. To create a representation or model of (a physical system or particular situation, for example).

em·u·late

3. (Computer Science). To imitate the function of (another system), as by modifications to hardware or software that allow the imitating system to accept the same data, execute the same programs, and achieve the same results as the imitated system.

So a simulation is a model, with different internal operation; an emulation operates the same way internally. To bring this closer to home, a Java applet that looks and behaves like an HP-35 is a simulation, while a program like Eric Smith's Nonpareil, which uses the HP ROM code (and therefore executes the same program), is an emulation of the hardware.

Best,

--- Les

[http://www.lesbell.com.au]

Edited: 16 Sept 2005, 11:37 p.m.




#64

Quote:
Howard, would you consider a duck quacking or a gooses honk or hiss to be jargon? Just wondered...

In the Webster's Unabridged sense, you bet! 8)

(Check the first definition of "jargon" above.)

#65

I'm not disputing those definitions. I'm pointing out that they are part of a specialized vocabulary. The Webster definitions are from 2002, not that long ago. The difference is that the Webster's unabridged dictionary, which doesn't try to be anywhere near as exhaustive as the OED, contains definitions developed from a broad sampling of English letters. In that context, the words "emulate" and "simulate" are nearly synonymous. If you look in the Webster's Collegiate dictionary, you'll find definitions closer to the ones you reference. That's because the Collegiate edition gives more weight to college textbooks, which are more likely to contain specialized jargon.

Most dictionaries are based on a sampling of some body of literature. The literature of Computer Science is pretty broad now, so I guess it makes sense to try to pin down meanings for specialized words like "emulate" and "simulate." The trouble with that is the rate of change keeps accelerating, and meanings can shift in that tide. To me, a "simulation" is a software system that implements a model of some sort. What you are modeling could be just about anything: hardware, software, a physical process, anything. This usage seems to be in broad use in today's Computer Science literature. (cf. An Interdisciplinary Approach to Scientific Modeling and Simulation and Fitting Time-Series Input Processes for Simulation, among many, many others.)

Emulation, on the other hand, I have always understood to mean something like "simulation of some sort of computer hardware." I don't have any nice ACM citations to point to for that meaning, however. There isn't an "emulation" keyword in the "Guide" (there are several for "simulation"), but an advanced search of current titles for "emulation" shows lots of hits for "hardware" or "physical" emulation. These seem to match Don's definition of one set of circuits acting like another set for testing, development or experimental purposes. There are a few hits for "software" emulation as well, and these seem to describe systems like EMU41 or VMware. So if this extremely informal sampling held up in the face of exhaustive analysis of the literature, a computer science dictionary editor would probably place Don's definition first, with JF's as a secondary meaning.

But there isn't a standard English dictionary of Computer Science that everyone recognizes, as far as I know. So there's room for this sort of discussion. Aren't we lucky? 8)




#67

WIMP - Wine is maybe a platulator. 8)

#68

Yes, and Nonpareil (as well as V41 and other such programs) simulate the working of the real HP calculator at one particular level of abstraction: the execution of the microcode. Some other simulators operate at a higher level of abstraction, such as user keystrokes; in principle another simulator could work at a lower level of abstraction such as an RTL (Register Transfer Level) description of the hardware, a netlist of the transistors, or even a GDS-II physical description of the chip geometry. It is silly to try to decide that simulating at some of those levels should be called "emulation", and others "simulation".

The definitions of the words "simulate" and "emulate" in Webster's are general usage, not specific to Computer Science (or industry). IBM invented computer emulation in 1962 or so (announced in 1964). They defined emulation (with regard to computers) as simulation assisted by special-purpose hardware and/or microcode.

I don't have references close at hand to offer specific citations, though I remember that the invention of computer emulation was discussed in IBM's Early Computers by Bashe et al. and in A History of Modern Computing by Ceruzzi, as well as in many other books. The historical record on this is very clear; I don't really understand why it is controversial.


#69

Quote:
It is silly to try to decide that simulating at some of those levels should be called "emulation", and others "simulation".

It's a useful distinction, Eric. I would expect Nonpareil, since it runs the actual firmware of an HP calculator, to exhibit the same bugs, accuracy limitations, BCD representational artifacts, etc., as the real calculator. On the other hand, I would expect a simulation, written in C, C++ or Java using IEEE floating point, to be different in those respects, as well as perhaps others.
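That difference is easy to demonstrate with host arithmetic alone. This is a minimal sketch, using Python's decimal module merely as a stand-in for calculator-style decimal (BCD) arithmetic, not any actual HP firmware:

```python
from decimal import Decimal

# A from-scratch reimplementation would typically use the host's
# IEEE 754 binary floats, which cannot represent 0.1 exactly:
print(0.1 + 0.2 == 0.3)  # False (0.1 + 0.2 is 0.30000000000000004)

# Firmware doing decimal arithmetic has no such binary rounding
# artifact; Decimal stands in for that behavior here:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

So a simulation built on native floats can disagree with the real machine in the low-order digits, while running the original firmware reproduces its exact rounding behavior.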

Best,

--- Les

[http://www.lesbell.com.au]


#70

But a microcode-level simulator is potentially not as accurate as an RTL-level simulator, so maybe only the latter should be called an emulator...

But an RTL-level simulator is potentially not as accurate as a netlist-level simulator, so maybe only the latter should be called an emulator...

But a netlist-level simulator is potentially not as accurate as a SPICE simulator, so maybe only the latter should be called an emulator...

I don't buy calling some simulators and some emulators based on how accurate they are, because it isn't clear what would be "accurate enough" to be called an emulator under that criterion.

Beside which, since IBM invented computer emulation, I prefer to use their terminology, which actually makes a useful and non-arbitrary distinction.

Eric


#71

Quote:
I don't buy calling some simulators and some emulators based on how accurate they are, because it isn't clear what would be "accurate enough" to be called an emulator under that criterion.

It's not based on accuracy, Eric - it's based on whether the emulation uses the original firmware/software or some other component of the original (not microcode, because, for mainframes, the emulation is done *in* microcode) or not. A simulation is "black box", and may have no code in common with the system being simulated (or it may be a simulation of a physical system); emulation is "crystal box", done with the intention of usefully imitating another system by running its software. There are no degrees of accuracy involved.

Simulation also carries connotations of modelling and experimentation (e.g. simulations of networks used to model spread of viruses and worms), whereas emulation is done primarily for pragmatic purposes, such as maintaining backwards compatibility over multiple generations of hardware - but this is weaker than the distinction I'm making above.

Quote:
Beside which, since IBM invented computer emulation, I prefer to use their terminology, which actually makes a useful and non-arbitrary distinction.

OK, here's their terminology:

emulation:

The use of software, hardware, or both by one system to imitate another system. The imitating system accepts the same data, runs the same programs, and achieves the same results as the imitated system.

(See http://www-306.ibm.com/ibm/terminology/ef.htm)

I don't see the distinction of which you speak. I contract to IBM, doing mainframe Linux stuff, including booting the Linux kernel from a virtual card reader, then using emulated network connections and so on, so I use their terminology a lot. It's also noteworthy that the z/Architecture "Principles of Operation" manual (IBM document SA22-7832-03), which describes the mainframes where z/VM allows one machine to look like many, and which you would expect to discuss this extensively, never uses the term "emulate" or "emulation". So it's not as simple as saying "IBM does it this way, so there".

All I can say is that in over 20 years of teaching courses on operating systems and applications software, these are the usages I've commonly encountered, and the context usually provides clarification when required. It's only in the last few years that confusion seems to have arisen (probably WINE-induced), and mostly in the last few days. ;)

Best,

--- Les

[http://www.lesbell.com.au]

Edited: 20 Sept 2005, 8:53 p.m.


#72

emulation:
The use of software, hardware, or both by one system to imitate another system. The imitating system accepts the same data, runs the same programs, and achieves the same results as the imitated system.

This one is good, too:

"Emulation program:

(2) A control program that permits functions written for one system or device to be run on another system or device."

If IBM, who invented emulation, agree that emulation can be "the use of software, hardware or both" I think we can finally close this "simulators vs. emulators" discussion.


Edited: 22 Sept 2005, 9:19 a.m.

#73

Quote:
Beside which, since IBM invented computer emulation, I prefer to use their terminology, which actually makes a useful and non-arbitrary distinction.

I would have to point out however that word meanings are not the only thing to have evolved. Back then, computers were mostly for data processing, which is only a tiny portion of today's computer field. If you use a computer for controlling equipment and taking data in real time, simulation alone is almost hopelessly inadequate in my experience. The simulator (software only) requires files of simulated stimuli upon which to act at certain times. "Real-time" refers to the requirement to meet stringent deadlines. The controlling computer in one of our products switches tasks 4,000 times per second. If I were to slow it down just a little for debugging during development, the hardware it is connected to would not work at all. For that, a simulator would somehow have to simulate that remote hardware as well, whereas my workbench computer, which I used to emulate the final controller, actually connected to, and operated, the other physical hardware. The benefit of using it as an emulator (as opposed to just trying to develop the software on the actual controller that would end up in the product) was that the workbench computer offered the added tools to interact with and modify the software on the fly while watching the behavior of the whole system on the workbench instruments.

Again, for most calculators, since they have no I/O other than the keyboard and LCD, there would be virtually no difference between a simulator and an emulator; but for something like the 41, a mere simulator would not be able to get programs off my old microcassettes, read bar code from my HP-41 books, or read the programs from various modules. That would require the emulator.

I suppose the actual names don't really matter to me, as long as the distinction between the two is maintained. I'm not aware of any other terms in use at this time though.

Edited: 21 Sept 2005, 1:57 a.m.

#74

Quote:
Of course, in the end, it really doesn't matter whether you call a thing a simulator or an emulator, as long as it does what you need sufficiently faithfully.

This discussion reminds me of something C.S. Lewis wrote in the Preface to "Mere Christianity." He was not talking about technical terms, but rather about the use of descriptive terms as terms of praise. However, I think it makes quite a good argument for preserving the meanings of words. (I apologize in advance for the long quote.)

"The word gentleman originally meant something recognisable; one who had a coat of arms and some landed property. When you called someone a gentleman you were not paying him a compliment, but merely stating a fact. If you said he was not a gentleman you were not insulting him, but giving information. There was no contradiction in saying that John was a liar and a gentleman; any more than there now is in saying that James is a fool and an M.A. But then there came people who said - so rightly, charitably, spiritually, sensitively, so anything but usefully - 'Ah, but surely the important thing about a gentleman is not the coat of arms and the land, but the behaviour? Surely he is the true gentleman who behaves as a gentleman should? Surely in that sense Edward is far more truly a gentleman than John?' They meant well. To be honourable and courteous and brave is of course a far better thing than to have a coat of arms. But it is not the same thing. Worse still, it is not a thing everyone will agree about. To call a man a gentleman in this new, refined sense, becomes, in fact, not a way of giving information about him, but a way of praising him: to deny that he is a gentleman becomes simply a way of insulting him.

When a word ceases to be a term of description and becomes merely a term of praise, it no longer tells you facts about the object: it only tells you about the speaker's attitude to that object. (A nice meal only means a meal the speaker likes.) A gentleman - once it has been spiritualised and refined out of its old coarse, objective sense - means hardly more than a man whom the speaker likes. As a result, gentleman is now a useless word. We had lots of terms of approval already, so it was not needed for that use; on the other hand if anyone (say, in a historical work) wants to use it in its old sense, he cannot do so without explanations. It has been spoiled for that purpose."


#75

I take the general point that word meanings tend to evolve away from the concrete toward the abstract; from the objective toward the subjective. But I disagree with Lewis' conclusion that such an evolution renders the words "useless." Lewis correctly observes that the meaning of the word "gentleman," used as a term of praise, depends more on the attitudes of the speaker than on specific characteristics of the object of the praise. But that's a clue to the speaker's thinking, isn't it? The word is more ambiguous, but its use in the hands of a skilled writer can be that much more subtle and revealing. (I also think that, in the specific case Lewis cites, the word "gentleman" is not merely a term of praise. In the specific cultural context he is referring to, "gentleman" connotes a specific kind of praise: adherence to an elaborate set of norms that have all sorts of consequences and implications.)

So I enjoy hearing what JF, Don and the rest think words mean to them. This discussion taught me about the specific meaning of "emulator" to an electrical engineer, but also more about how Don views his world. I learned more about JF's technical world view, and also about certain attitudes he has about the meaning of the word "polite." These are gems, far exceeding the value of any concrete definition of any specific word!

#76

Wayne, you are a scholar and a gentleman! ;)

Of course, what this reflects is that language is a living thing; it evolves and changes over time - look at current teen usage of terms like "sick", "evil", etc. which mean the opposite of what old f*rts like me think.

My wife, who has a degree in Linguistics and English Lit., long ago convinced me that dictionaries are more descriptive than prescriptive.

It doesn't matter how much one protests - I've given up the struggle over the redefinition of the term "hacker", for example - so I probably shouldn't have joined the fray over emulation vs simulation. . .

Best,

--- Les

[http://www.lesbell.com.au]

#77

Wow, thanks for the feedback!

I'll be sure to check out these sources. (Just for the morbidly curious, I'll post back here to let folks know what option I finally select.)

Double wow on the Simulator / Emulator discussion that followed!
There must be something metaphysically wrong with me that wherever I go, chaos follows! [ROTFL...]

--johnb


#78

Hi John.


Hey, Man - don't hog the credit for creating Chaos, if you don't mind... :-) Some of that has to come my way, oh and Bill Gates, too.

DW

#79

That's pretty cool!

I just recently got a Mac Mini, after simmering with resentment at Steve Jobs for making the original Mac unaffordable for starving students, like I was in 1984. That was one heck of a resentment to last that long. 8)

But OSX is pretty cool. And that calculator is very nice. It can be an "RPN Programmer" as in "it has bit manipulation and base conversion" not as in "it's programmable." Although, with the degree to which OSX apps can be scripted, maybe I'm not too sure about that.

Some other nice features of the programmer mode:

  • The entire 64 bits in binary (in 2 rows of 32) are always visible, unless you turn that display off.
  • There are "ASCII" and "Unicode" buttons that will show the character corresponding to the currently displayed number over on the left hand side of the screen.
  • It has byte and word flips, great for switching big and little endian. Hmm, PPC, no wonder 8)
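The byte and word flips mentioned above are straightforward to model. Here is a minimal Python sketch of what such operations do on a 64-bit value (the function names and the 64-bit width are my own assumptions for illustration, not Apple's API):

```python
def byte_flip(x, width=64):
    """Reverse the byte order of an unsigned integer -
    the big <-> little endian swap a 'byte flip' button performs."""
    n = width // 8
    return int.from_bytes(x.to_bytes(n, "big"), "little")

def word_flip(x, width=64):
    """Reverse the order of 16-bit words, leaving the bytes
    inside each word untouched."""
    # Extract words starting from the least significant end.
    words = [(x >> s) & 0xFFFF for s in range(0, width, 16)]
    out = 0
    for w in words:              # emitting LSW first reverses the order
        out = (out << 16) | w
    return out

print(hex(byte_flip(0x0123456789ABCDEF)))  # 0xefcdab8967452301
print(hex(word_flip(0x0123456789ABCDEF)))  # 0xcdef89ab45670123
```

Note that applying either flip twice returns the original value, which is why a single button suffices for converting in both directions between big- and little-endian.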

And the bad stuff:

  • No stack! At least not a visible one.
  • No X<>Y or swap.
  • No roll-down, roll-up, over, pick or anything like that. In short,
  • No stack!

ENTER does something stack-like.

But the other complaint I have is that it basically has the three modes, and the RPN switch, and that's it for customization. Oh yeah, and no stack!


Edited: 29 Aug 2005, 1:45 a.m.


#80

 Howard Owen wrote about the OS X RPN calculator:
>No stack! At least not a visible one.

It has a stack, although it's not visible. It looks like it's deep, too. Try

1 ENTER 2 ENTER 3 ENTER ... 9 ENTER 10 + + + + + + + + + +

and you will see 55.

So it has a stack.
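The experiment above can be sketched as a toy model of an RPN calculator with an unbounded stack (a sketch under my own assumptions; the real calculator's internals are unknown). Note that ten entries collapse to one after nine + presses:

```python
# Toy RPN calculator with an unbounded stack (illustrative model only).
class RPNCalc:
    def __init__(self):
        self.stack = []

    def enter(self, x):
        """Key in a number and press ENTER: push onto the stack."""
        self.stack.append(x)

    def plus(self):
        """Press +: pop the top two entries, push their sum."""
        b = self.stack.pop()
        a = self.stack.pop()
        self.stack.append(a + b)

calc = RPNCalc()
for n in range(1, 11):   # 1 ENTER 2 ENTER ... 9 ENTER 10
    calc.enter(n)
for _ in range(9):       # nine + presses reduce ten entries to one
    calc.plus()
print(calc.stack)        # [55]
```

A four-level stack with top-register replication (as on the classic HP machines) would behave differently here - the deep entries would be lost - which is why the result 55 is evidence of a stack deeper than four levels.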

**vp

#81

Hello Howard & fellow Mac lovers,

You can find OS X compatible Voyagers by Ric Lira at:

Voyagers for MacOs

Unfortunately, the HP-16C is still missing... but the stack is there, too.

Bye.

Etienne

Edited: 31 Aug 2005, 7:23 p.m.

