new book on the Apollo Guidance Computer



#40

Just published.

Anyone interested in how early digital computers were designed to work aboard spacecraft will find this book fascinating. It illustrates how far we have come in the past 50 years, in terms of computer systems development. This should be required reading for all newly-hatched engineers and programmers.


#41

Hello!

Thanks for the information, this will make a welcome addition to my stack of books on the Apollo/early space program.

Quote:
This should be required reading for all newly-hatched engineers and programmers.

Well, applying the same principle, when I finished university 25 years ago (aerospace engineering) this would have meant reading books on abacusses and slide rules... No, I think it is far more important that newly hatched engineers have a thorough understanding of present, state-of-the-art technology!

Regards,
max


#42

Quote:
this would have meant reading books on abacusses and slide rules

Thanks, Max. Well, I admit abacusses (abacusi?) would be a bit much, but I'd bet that some of the slide rule fanatics among our little group would say that a new engineer would be a better engineer if he/she understood what a slide rule did, in the years prior to the HP-35.

On the software side, when I went to programming school (1968) we learned to "desk check" our code, which meant following the code, instruction by instruction, on paper with some test data to make sure it really did what you wanted it to. We were taught to do this even before running our program on the computer with real data, and we almost always found errors that otherwise might have slipped by us. I don't think the current generations of programmers are taught that, and despite huge advances in the software development and testing fields today, errors can still slip by.

Heck, we all know from experience that in order to find bugs in an RPN program we sometimes just have to SST through it and look at the stack and registers along the way. Desk checking just means doing this manually before running it, and I don't think that is taught today in programming classes, and maybe it should be. There was an old saying my boss used to repeat: don't use the computer to debug your program. I think that is still good advice.

Don


#43

Don, thanks for the information. Having programmed PDP-11s in the late Seventies, I'll find this an interesting read.

Quote:
Well, I admit abacusses (abacusi?) ...

I'd propose "abaci" but that's just my personal view.

#44

Quote:
I'd propose "abaci" but that's just my personal view.

This agrees with Merriam-Webster, my dictionary of choice. By the way, abacuses is the alternative plural form.

#45

Quote:
On the software side, when I went to programming school (1968) we learned to "desk check" our code, which meant following the code, instruction by instruction, on paper with some test data to make sure it really did what you wanted it to. We were taught to do this even before running our program on the computer with real data...

The reason you had to desk check your code before running it on the computer was because it was expensive and time consuming to run it on the computer. These days, it's much faster and easier to run the real data through the program itself to see if it gets the right answer. So the purpose for desk checking has largely gone away.

The one time when desk checking is useful is when you're changing a piece of code to deal with input that is hard to reproduce, such as a rare failure condition deep within the program. In this case, we usually get 2 or 3 different people to desk check the code.

Quote:
There was an old saying my boss used to repeat: don't use the computer to debug your program. I think that is still good advice.

Ask yourself "why is this good advice?" The computer is the perfect tool to debug your program and I think you should use it whenever possible.

The key is that you do need to debug your program. You need to feed it real-world input and, most important, you need to feed it erroneous input to test the error handling.

Dave


#46

Quote:
The reason you had to desk check your code before running it on the computer was because it was expensive and time consuming to run it on the computer. These days, it's much faster and easier to run the real data through the program itself to see if it gets the right answer. So the purpose for desk checking has largely gone away.

Thanks, David. I'd agree with that assessment to a large extent. Certainly mainframe computer time was expensive in the 70's, and where I worked they funded the computer division by charging the other divisions for their use of the system. So, yes, desk checking was always encouraged.

I've been out of software development since the late 80's, and I wonder if the quality of the code produced by programmers today is comparable to the quality of code produced in the 70's and 80's. I suspect it is somewhat better, given the automated testing systems that exist today as well as peer reviews.

Interestingly, when I became a programmer in 1974, I was the only person who tested the FORTRAN code I developed prior to putting it into production. If I felt it was ready, that was good enough. I think those days have long since passed!

Don


#47

Don,

More important than automated testing systems are the large standardized libraries of working code that handle much of the drudgery that was hand-coded over and over again. Many of the common bugs that were made in the past are gone now.

The industry is also VERY slowly starting to realize that you need to think about reliability and maintainability when designing programming languages. Java, for example, does all the memory management for you because the designer realized that memory management bugs were common in languages like C. So he took the approach of designing the problem right out of the language.

Still, programming is, to a large extent, still in the dark ages. Ironically, it's about the ONLY discipline where you still work mainly with flat ASCII files.


#48

Quote:
Java, for example, does all the memory management for you because the designer realized that memory management bugs were common in languages like C. So he took the approach of designing the problem right out of the language.

David, can you give an example of what this means? I was always under the impression that Java and C were somewhat closely related (and, therefore, I stayed away from Java!), as opposed to VBScript and C which were totally non-related. For example, what does C make you do that Java does not?

thx, Don

Edited: 24 July 2010, 2:45 p.m.


#49

Hi Don,

The issue arises when the program needs to allocate memory at run-time to store some data. In C, you call malloc(), which returns a pointer to some memory that you can use. When you're done using the memory, you have to call free(), passing it the pointer that malloc() originally returned. If you don't call free(), then the program thinks that you're still using the memory. This is called a memory leak, and if it happens frequently, or if the program runs for a long time, it can eat up all the memory in the computer.
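
Here is a minimal C sketch of both cases (my own illustration, not code from the book or any real program): the first routine pairs its malloc() with a free(); the second omits the free() and leaks one buffer on every call.

Code:
#include <stdlib.h>
#include <string.h>

/* Correct pattern: every malloc() is matched by a free(). */
void copy_and_use(const char *text)
{
    char *copy = malloc(strlen(text) + 1);   /* ask for memory at run time */
    if (copy == NULL)
        return;                              /* allocation can fail        */
    strcpy(copy, text);
    /* ... do something with copy ... */
    free(copy);                              /* hand the memory back       */
}

/* Leaky version: the free() is missing, so every call loses one buffer.
   Do this often enough, or run long enough, and memory runs out.        */
void copy_and_leak(const char *text)
{
    char *copy = malloc(strlen(text) + 1);
    if (copy == NULL)
        return;
    strcpy(copy, text);
    /* ... do something with copy ... */
    /* no free(copy): the pointer is lost when the function returns */
}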

In Java, you don't have to call free() or anything like it. Instead, the runtime environment keeps track of whether you're still using the memory and when you no longer are, it automatically makes it available again through garbage collection.

The 50g calculator uses the same basic principle as Java: every now and then, it does garbage collection, where it figures out which objects are still in use and which aren't, and gets rid of the ones that aren't in use.

Hope this helps,
Dave


#50

I see what you mean, thanks David.

On the last software project I worked on, the programs had severe memory leak problems. That was using Ada on the IBM RISC/6000.

#51

Hi;

This made me remember PACKING and TRY AGAIN in the HP-41 series. Interesting: I never used malloc() when programming in C/C++ because I never had to allocate memory after the initial variables were defined/created. In fact, I have always programmed in C/C++ with a predefined memory space to be used.

Time to go ahead learning some more...

Cheers.

Luiz (Brazil)

#52

Quote:
In C, you call malloc(), which returns a pointer to some memory that you can use. When you're done using the memory, you have to call free(), passing it the pointer that malloc() originally returned. If you don't call free(), then the program thinks that you're still using the memory. This is called a memory leak, and if it happens frequently, or if the program runs for a long time, it can eat up all the memory in the computer.

So, that's what a "memory leak" is! I'd believed that it was a block of object code or data that was lost/overwritten during execution due to poor memory management or sloppy coding.

Quote:
In Java, you don't have to call free() or anything like it. Instead, the runtime environment keeps track of whether you're still using the memory and when you no longer are, it automatically makes it available again through garbage collection.

So that might explain in part why Java-based apps tend to be so sluggish. I use a commercial software package whose calculation engine is written in C, but the graphical user interface is in Java. Screens that have not been used in a while, as other applications were performed, tend to take a very long time to respond to input when they are revisited. I had attributed that to lack of abundant RAM on my PC, requiring use of virtual memory on hard disk.

"malloc" and "calloc" provide capabilities of dynamic memory allocation that were not available in Fortran '77 and prior. Programs in those versions of Fortran, if I understood correctly, grabbed all the RAM they would need at the outset, and didn't give it up until the program was finished, or virtual RAM was used. I would also believe that dynamic memory allocation made recursion practicable: C and Fortran '90/'95 have them; Fortran '77 didn't.


Edited: 28 July 2010, 1:41 a.m.


#53

Hi Karl,

Quote:
So that might explain in part why Java-based apps tend to be so sluggish.

Partly, yes. The other big performance hit with Java is the fact that it is an interpreted language. I'm no expert, but I've heard that newer Java runtime environments will compile sections of the code on the fly to improve performance, so you may want to see if your JRE is recent and fast.

Quote:
"malloc" and "calloc" provide capabilities of dynamic memory allocation that were not available in Fortran '77 and prior. Programs in those versions of Fortran, if I understood correctly, grabbed all the RAM they would need at the outset, and didn't give it up until the program was finished, or virtual RAM was used. I would also believe that dynamic memory allocation made recursion practicable: C and Fortran '90/'95 have them; Fortran '77 didn't.

You are partly correct. To make recursion possible, you need a stack, which provides a limited way of allocating memory at run time in the form of per-call local variables and return addresses. The RPL calculators have this in the form of the return stack. Malloc()/calloc(), on the other hand, let the program allocate memory at any point in program execution, so they are much more flexible (and dangerous as a result).
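
A small C sketch of the difference (illustrative only): recursion gets by with the per-call stack, while malloc() hands out memory whose size is chosen at run time and which outlives the call that asked for it.

Code:
#include <stdlib.h>

/* Each call gets its own n and its own return address on the stack,
   which is exactly the limited run-time allocation that makes
   recursion work even without malloc().                              */
long factorial(long n)
{
    if (n <= 1)
        return 1;
    return n * factorial(n - 1);
}

/* malloc() is the more general mechanism: the size is chosen at run
   time and the memory outlives the call that created it, but the
   caller must remember to free() it.                                 */
double *make_buffer(size_t count)
{
    return malloc(count * sizeof(double));
}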

Dave

#54

During a chat I had with a programmer who had grown up totally within the Windows generation, I was aghast to find he didn't know how many bits were in a byte! When I pushed him on this, he replied, "why do I need to know?".

That sums it up for me.

There just isn't the need to know or understand low-level details these days, and no doubt techniques which might use bitfields and such could well be discouraged for "code readability", or the economies afforded by the bit just aren't important or relevant in C++ etc.

So is this a problem?

Well, we might be gradually losing the skill to build code akin to the most finely crafted miniature watch but we are gaining applications that enable anyone to produce a hit record on an average laptop or produce films that would have been the preserve of professional establishments a few years ago and so on.

I'd consider that progress!

Mark

PS: I understand with the AGC that once it was built, they dunked the whole thing in epoxy to seal it up! There's something about that which always makes me laugh!


#55

Quote:
During a chat I had with a programmer who had grown up totally within the Windows generation, I was aghast to find he didn't know how many bits were in a byte! When I pushed him on this, he replied, "why do I need to know?".

That programmer will be very confused indeed when he tries to store 256 in a byte. Or maybe even 128.

For readers who don't twiddle bits, a byte can store 256 unique bit patterns. These usually represent the numbers 0-255, or -128 to +127. Attempting to store a number outside that range will result in an error, or, more typically, a different number will be silently stored in the byte.
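
A tiny C illustration (assuming the usual 8-bit byte and two's-complement hardware, which covers almost every machine today):

Code:
#include <stdio.h>

int main(void)
{
    unsigned char u = 255;  /* largest value an 8-bit byte can hold             */
    signed char   s = 127;  /* largest positive value when treated as signed    */

    u = u + 1;              /* wraps around silently to 0                       */
    s = s + 1;              /* usually ends up as -128 (implementation-defined) */

    printf("%u %d\n", (unsigned)u, s);   /* prints "0 -128" on typical machines */
    return 0;
}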

Dave

#56

Quote:

There just isn't the need to know or understand low-level details these days, and no doubt techniques which might use bitfields and such could well be discouraged for "code readability", or the economies afforded by the bit just aren't important or relevant in C++ etc.


I can assure you that the need for bit twiddlers is alive and well in the machine control area wherever Programmable Logic Controllers (PLC's) are used.

We use bits and words like they were going out of style. We have to understand signed vs. unsigned, etc. and even have to deal with big endian vs. little endian when connecting hardware made by various manufacturers.

We connect analog signals into hardware inputs with varying bit A/D sections and have to scale those numbers back into engineering units for display to the machine operators.
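
As a rough illustration only, here is the kind of C you end up writing for that; the 12-bit converter and the 0-300 PSI span are invented numbers, not from any particular machine:

Code:
#include <stdint.h>

/* Scale a raw 12-bit A/D count (0..4095) into engineering units.
   The 0..300 PSI span is an invented example, not a real channel. */
double counts_to_psi(uint16_t counts)
{
    return ((double)counts / 4095.0) * 300.0;
}

/* Swap the two bytes of a 16-bit register read from a device whose
   endianness is opposite to the controller's.                      */
uint16_t swap16(uint16_t w)
{
    return (uint16_t)((w >> 8) | (w << 8));
}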

Some of the machine controls manufactured by Siemens have now become the target of a worm, though, so I'm sure we'll be assimilated into the Borg of computer programming before long, and all this ancient knowledge will be relegated to the threads of obscure newsgroups frequented by dinosaurs and ghosts.

#57

We used the kind of line-by-line checking of programs that Don describes in a computer programming class at the University of Minnesota in 1961. We were using a RemRand 1103. We prepared our programs on paper tape and presented the tapes to a computer operator. If our program ran satisfactorily we received the output data on another paper tape which was read out on a flexowriter.

The people who were using the 1103 for more serious things considered the use of the machine to support a class in programming as a necessary nuisance for a machine owned by a university. Our machine time was limited. Our class of 30 students was broken up into teams of three individuals. For the final exam each team wrote a program and submitted it to the computer. If it ran successfully the members of the team were given an A. If it did not run successfully their next submission could not yield a grade higher than a B, and so on. So you can see that careful off-line checking of our program was a necessity.

I left the university that summer and rejoined Honeywell as a field engineer on the H-317 inertial system for the Fairchild SD-5 drone. The computer was the M-252 manufactured by Hughes. It was a drum machine in which multiplication and division took many word times. If you are interested in more details you can read about the M-252 at http://www.ed-thelen.org/comp-hist/BRL61-h.html. Efficient programming required that division be avoided like the plague and that the machine be kept multiplying while additions and subtractions were going on in parallel. The user could scale the result of a multiply or divide by a premature readout. That kind of programming was truly an art.

I could read code fairly well but was not very good at writing code. I described one of my mistakes in a thread on "My Greatest Computer Fault" back in May 2005 -- http://www.hpmuseum.org/cgi-sys/cgiwrap/hpmuseum/archv015.cgi?read=72743. That was a case where careful line-by-line checking failed to reveal that there was an error in concept, not in code.

That was a long time ago. One of the irritants was in the use of hexadecimal notation. Hughes used a through f for 10 through 15 in their documentation and with the flexowriter delivered with the computer. The Army used U through Z for system input and output so we had to be able to jump back and forth between the two notations. That was long before TI and HP offered their programmer models. We did our conversion between decimal, octal and hexadecimal on a Friden or with a set of conversion tables.
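
For comparison, a trivial C illustration of one value in all three bases (a made-up example; in the Army notation, with the hex digits 10 through 15 written U through Z, the same value would read 2W rather than 2C):

Code:
#include <stdio.h>

int main(void)
{
    int value = 44;   /* arbitrary example value */

    printf("%d decimal = %o octal = %X hex\n", value, (unsigned)value, (unsigned)value);
    /* prints: 44 decimal = 54 octal = 2C hex */
    return 0;
}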


#58

Palmer, your experience using a computer at university predates my experience doing the same thing by about 10 years. I used an HP-2000 minicomputer at the University of Louisville in the early 70's. We wrote and ran BASIC programs using that system. I well remember my very first program: it determined the Köppen climate classification code for a region based on things like average precipitation, temperature, level of humidity, and things like that. My geography teacher was so impressed with that program I think he gave me an A mostly because of it. We also used paper tape to save our programs.

The book that is the subject of this thread has a very good section where the author describes how computing was done in the early days of computing, the 50's and 60's. The Apollo Guidance Computer had an instruction word that was 15 bits long (not including the parity bit): 3 bits for the op code (allowing 8 instructions) and 12 bits for the operand code (allowing the addressing of up to 4096 words). The problem was that the final computer needed 41 instructions and 38,000 words of memory to be addressed. How the designers achieved that, given the hardware, was a very interesting story. It's a great book.
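
Just to picture that layout, here is a toy C sketch of splitting such a 15-bit word into its two fields (my own illustration; the tricks the designers used to stretch the format to 41 instructions and 38K words are not shown):

Code:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* A made-up 15-bit word: op code 3, address 02000 (octal). */
    uint16_t word    = (3u << 12) | 02000;

    uint16_t opcode  = (word >> 12) & 07;     /* top 3 bits: 8 op codes      */
    uint16_t address = word & 07777;          /* low 12 bits: 4096 addresses */

    printf("op code %o, address %05o\n", (unsigned)opcode, (unsigned)address);
    return 0;
}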


#59

Don:

If you include mechanical analog computers my experience goes back to 1951 and the Mark 1A fire control computer used with the 5"/38 caliber gun fire control system. While assigned to the fire control technician school I also worked as a draftsman making overhead projector images for use in classes on one of the first non-mechanical systems.

When the Fairchild drone project was cancelled in late 1962 I worked in-house on the Dynasoar program, which was an idea similar to what later became the shuttle program. The system computer was a Verdan DDA manufactured by Autonetics. All I can say about the DDA mechanization is that it was as foreign to my way of thinking then as RPL is now.

I left the Dynasoar program in mid 1963 to go on the Blackbird programs. The system on the YF-12 used a Hughes computer similar to the M-252 but with extended capability to allow it to do more than just the inertial navigation problem. The lead programmer on both the SD-5 and YF-12 projects was Bruce Robinson of Hughes -- a really, really bright guy.

On the A-11 we used a Honeywell computer which was similar to a computer that had originally been designed for use on another classified program. In 1966 we used those computers as part of an ESG based navigation system.

Palmer


#60

Boy, Palmer, you do go back a long way. Did you ever meet Grace Hopper in your career? I never did, but I would have liked to. I think she was a brilliant lady.

I know I read somewhere about the Dynasoar project recently, but I can't find the reference.

I feel the way you do about RPL. I appreciate its power, but I like the simplicity of RPN. And if you are going to approach this as a hobby, which I do, simplicity is the way to go. Katie got me interested in the 32s recently, and I got one and I have to say that I really like it. Of all the RPN calcs I have programmed, the 32s has become my favorite now.


#61

Don,

the advantage of the 32S in programming is readability. IMO it was the first HP scientific featuring function name display in browsing instead of key codes. This eases debugging a lot. And the 32S sports a very clean keyboard.

Enjoy,

Walter


#62

Display readability, function names instead of keycodes when stepping through code, and clean keyboard. Yep, I like 'em all. Also, rich programming feature set, speed of execution is pretty nice, base conversions on a 36 bit word with easy shift right/left to see the whole result, 390 bytes of user memory, great manual, plus it just looks professional.

Katie's right, it's about as good as it gets.

#63

The HP-41 allowed for browsing programs by function name instead of keycodes (row-column). On the old, 7-segment LED models, keycodes were the only possible option, although the HP-97 offered function names and abbreviations on printed paper (IIRC). A minor improvement was the "key phrases" from the HP-25 on, where keycodes were "merged" into one line when multiple keystrokes were needed for a function (e.g., STO + 05, f SIN, etc.).

When the displays became alphanumeric with the HP-41, function names appeared. However, a little after HP-41 time, Voyagers using LCDs (HP 10, 11, 15, etc.) still used a 7-segment display, hence keycodes were unavoidable for them.

#64

Don:

Quote:
Boy, Palmer, you do go back a long way. Did you ever meet Grace Hopper in your career? I never did, but I would have liked to. I think she was a brilliant lady.

I'm sorry to say that I did not cross paths with Grace Hopper. That's not surprising. I suspect that she was working on newer technology at the time that I was working on the Mark 1 and 1A gunfire computers, whose original design preceded WW II. You can read about that at http://en.wikipedia.org/wiki/Mark_I_Fire_Control_Computer. One of the relatively new things at the time I was working with it was an increase in range rate capability to handle post-WWII aircraft speeds. It could handle a significantly larger negative range rate than positive range rate, the idea being that it was more important to track and fire on incoming aircraft. The stable vertical for the system used the old foot-high mercury vapor tubes and a Rube Goldberg system of weights and screw drives to provide the precession required to compensate for earth rate.

I will try to find something on Dynasoar. It was a lifting body device. The program was cancelled in late 1963.

Palmer

#65

Quote:
I admit abacusses (abacusi?) would be a bit much, but I'd bet that some of the slide rule fanatics among our little group would say that a new engineer would be a better engineer if he/she understood what a slide rule did, in the years prior to the HP-35.

The slide rule fanatics might say that. It wouldn't make it true. Just like spending an hour in school with an abacus would be interesting, so would an hour or two with a slide rule. Any more than that rapidly becomes a waste of time.

You do realize that the first engineers to start a career after the scientific calculator was invented have pretty much all retired? The majority of engineers working now had scientific calculators before they started college. (I started using a scientific calculator before age 12, and they were widely available very cheaply by then even if my teachers had no clue.)

Quote:
On the software side, when I went to programming school (1968) we learned to "desk check" our code, which meant following the code, instruction by instruction, on paper with some test data to make sure it really did what you wanted it to.
...
Desk checking just means doing this manually before running it, and I don't think that is taught today in programming classes, and maybe it should be. There was an old saying my boss used to repeat: don't use the computer to debug your program. I think that is still good advice.

A colossal waste of time. This manual work would have to be done after every change to the program. The modern equivalent is writing unit tests that, when run by the computer, put test data through the code and validate the result. Even those don't get done, because programmer (software engineer) time to create the unit tests is very expensive compared to computer time and test-person time to find functional problems.

Unit tests can be run automatically on every build to catch regressions rather than requiring a manual "desk check" which is prone to human error.
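
For anyone who hasn't seen one, here is a bare-bones unit test in C (illustrative only; real projects use test frameworks, but the idea is the same):

Code:
#include <assert.h>

/* Function under test: Fahrenheit to Celsius. */
static double f_to_c(double f)
{
    return (f - 32.0) * 5.0 / 9.0;
}

/* A bare-bones unit test: feed known inputs through the code and
   check the answers.  Hooked into the build, it re-runs after every
   change, which is the automated stand-in for a desk check.          */
int main(void)
{
    assert(f_to_c(32.0)  == 0.0);
    assert(f_to_c(212.0) == 100.0);
    return 0;     /* getting here means every assertion held */
}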


#66

You have a lot more faith in computer tests finding all errors in code than I do. Even automated test systems can only find errors they are programmed to find. The fact is that, even today, it is possible for a programmer to make a mistake and have a bad line of code that may cause disastrous consequences if a certain set of circumstances occurs, and all the computer testing in the world may not find that error, but desk checking may (or it may not, either). I wouldn't call it a waste of time; I'd call it doing everything you can to make sure the code is right.

I think the first line of defense against software errors is the programmer him/her self. The person who wrote the code should verify that it does what it was intended to do, and desk checking should be part of that process. The second line of defense is peer review of the code. Then the test organization gets involved to make sure the original system requirements are being satisfied. Unit tests, module tests, regression tests, build tests, integration tests, all have their place. The more tests you have, the more likely you will have bug-free code. Sure, it's expensive, but consider the consequences if the code fails, especially in areas like air traffic control where lives are literally at risk.

I'm all for automated test systems, intelligently used. But I want the original programmer systematically looking at his/her code and verifying its correctness.

Edited: 26 July 2010, 3:52 p.m.

#67

With the surplus of disk, memory, etc., current programmers write tests as part of the complete development project (a.k.a. test-driven development). The requirement is that all tests complete successfully before checking code into the shared repository - which triggers a new build. Breaking the build results in a usually ribald sharing of commentary from one's peers ;)

#68

Quote:
Desk checking just means doing this manually before running it, and I don't think that is taught today in programming classes, and maybe it should be.

We called it a dry run.

The thing that I find frustrating is that a load of effort has been put into developing programming languages and programming styles that seek to eliminate errors before they can be made, e.g. the automatic garbage collection in Java, but precious little effort has been put into standardising 'display environments' (for want of a better phrase). So, back in the days of glass teletypes, most screens worked to VT100 as a minimum and you could safely code assuming that. Then graphical terminals came along and the X Window System was developed, but the whole window manager business (e.g. Motif v. XWM) caused fragmentation and a standard was never reached.

Then the Web came along and the horror that is Internet Explorer. All the time, effort and cost savings gained by using Java were immediately squandered many times over by the difficulties of trying to code around the bugs in the various versions of IE.

Sometimes I really do think that Microsoft have held back the economy more than advanced it, because of their poor software. No doubt in a few years time, some economist will analyse it and pronounce one way or the other.

Things are better now with HTML4 being widely implemented properly but did it really need the best part of 20 years to get there?

#69

Interesting that the look inside seems to have the entire book...

- Pauli


#70

Paul, you can see quite a bit, but not all pages. The "look inside" feature of Amazon makes selected pages available, but not the whole book. For instance, page 106 is not available (the index says that the famous 1201 alarm is discussed there, but you can't see that page).

Don

#71

Seems to work nicely in Australia and the USA. Here where I live, I can't see anything between page 17 and 420 :(


#72

Walter,

If you have an Amazon account, just sign in and you'll be able to see more pages.

Thanks, Don, for the book recommendation.

Regards,

Gerson.


#73

Gerson,

thanks for the hint, though it doesn't work this way here. Even logged in, I get neither the first part of the TOC nor the pages mentioned above. Anyone in Europe seeing more?

#74

The InfoAge center in New Jersey has one of the Apollo Guidance Computers, along with all sorts of other interesting stuff related to computers and radio. It's on the site of the old Marconi receiving station.


#75

Hello!

Quote:
The InfoAge center in New Jersey has one of the Apollo Guidance Computers...

I think it has been referenced on this site before, but you can have your own Apollo Guidance Computer (simulated) on your PC or Macintosh: http://www.ibiblio.org/apollo/.

One thing the new generation of engineers and programmers has hopefully learnt is making decent man-machine interfaces. The flight management computer that I have to use at work (http://auto.manualsonline.com/manuals/mfg/honeywell/gnsxl.html) looks and feels much more similar to the Apollo Guidance Computer than to my Macintosh at home.

Regards,
max

Edited: 24 July 2010, 3:23 p.m.


#76

I wonder when the people who make FMS computers are going to leave the Apollo-age!

As I have begun to read this book today, I can't help but think of the astronauts who had to learn how to use this system (they had no choice; it was the only way to get to the moon). Now, they were undoubtedly used to entering octal digits in their transponders, but learning how to use the AGC must have been pretty foreign to them, with all its "programs" and "nouns" and "verbs" and "program alarm codes." They were probably thinking "just let me fly the damned thing."

Don

#77

Quote:
I'd bet that some of the slide rule fanatics among our little group would say that a new engineer would be a better engineer if he/she understood what a slide rule did

Yes. The slide rule gives a better understanding of number relations -- not that you have to keep using it, but I still benefit from having gotten proficient at it even though I haven't used it in many years.

Quote:
Quote:
On the software side, when I went to programming school (1968) we learned to "desk check" our code, which meant following the code, instruction by instruction, on paper with some test data to make sure it really did what you wanted it to. We were taught to do this even before running our program on the computer with real data, and we almost always found errors that otherwise might have slipped by us. I don't think the current generations of programmers are taught that, and despite huge advances in the software development and testing fields today, errors can still slip by.

The reason you had to desk check your code before running it on the computer was because it was expensive and time consuming to run it on the computer. These days, it's much faster and easier to run the real data through the program itself to see if it gets the right answer. So the purpose for desk checking has largely gone away.

When I started in school on the mainframe computer in the late 1970's, you had to write your code out by hand, then go to the machines where you punched it into the cards, then submit your card pile to the operators, then come back sometime later hoping they had run your program, only to find a long printout of all the reasons it wouldn't run. Turnaround was anything but instant. But, continuing:

Quote:
Quote:
There was an old saying my boss used to repeat: don't use the computer to debug your program. I think that is still good advice.

Ask yourself "why is this good advice?" The computer is the perfect tool to debug your program and I think you should use it whenever possible.

I would change the advice to "The computer should not be the only thing you use to debug your program." Debugging needs to be like a concurrently running mental task, part of the process even before you run anything. Working by myself for small, low-budget outfits, I have found that the lack of expensive debugging tools actually teaches you to write better code. You can't have the attitude that "I'll whip out this code in record time and debug it later." Out of necessity, I've become more structured and neat in my programming, documenting everything thoroughly, making the code as readable as I know how, and proofreading.

Ten years ago, large companies finally started seeing the value in this, and the industry magazines ran some articles on code inspection and having committees of the programmers' peers proofread the code. I sometimes catch bugs when further commenting code that's already working but not exhaustively tested yet. I comment as if trying to explain it to someone else who hasn't been following my train of thought on it. (If I need to change it a year later, I'll need the comments anyway.) As a result of this madness, no user has ever found a software bug in any product or automated test equipment I programmed. The projects have ranged from 700 to 10,500 lines of code, and have always been for control of equipment, quite different from desktop applications or data processing.

BTW, using a lines-of-code-per-day benchmark of programming performance is a sure way to end up with inefficient, buggy code that's hard to figure out and fix or modify later. I once worked with a programmer who typed non-stop. I always wondered what he was typing. After he left the company, I had to fix a 4-page routine he wrote. When I was done, it was down to a half page, bug-free, clearer, faster, and did more with less memory. Eventually most of what he wrote had to be redone.

Quote:
More important than automated testing systems are the large standardized libraries of working code that handle much of the drudgery that was hand-coded over and over again. Many of the common bugs that were made in the past are gone now.

It just frees you up to advance to the next level of programming and therefore the next level of bugs. ;) As long as there's programming, there will be bugs of some kind.

Quote:
Still, programming is, to a large extent, still in the dark ages. Ironically, it's about the ONLY discipline where you still work mainly with flat ASCII files.

That's a good thing. I can express myself most clearly and accurately in writing, and I hope the keyboard never goes away. I do of course use INCLude files, which can be nested, so it's not flat in that respect. I also like the DOS/ANSI characters above 127 for drawing diagrams and tables with smooth lines in the source code, as well as for the Greek letters, special symbols, etc. that we use all the time in engineering.

Quote:
During a chat I had with a programmer who had grown up totally within the Windows generation, I was aghast to find he didn't know how many bits were in a byte! When I pushed him on this, he replied, "why do I need to know?"

It's like people in Washington not having a grasp on what a trillion is, or even the cost of a single dollar. Bloatware anyone? Of course it matters! And if you're writing applications for embedded control of equipment where you have to know the ports and various hardware resources intimately, there's no way around it.

#78

Good points, Garth.

