My greatest programming fault [OT] - Printable Version

+- HP Forums (https://archived.hpcalc.org/museumforum)
+-- Forum: HP Museum Forums (https://archived.hpcalc.org/museumforum/forum-1.html)
+--- Forum: Old HP Forum Archives (https://archived.hpcalc.org/museumforum/forum-2.html)
+--- Thread: My greatest programming fault [OT] (/thread-72743.html)

My greatest programming fault [OT] - Tizedes Csaba [Hungary] - 05-02-2005

:( This weekend I wrote a solver for a hydrodynamical problem: the program searches for a minimum on a closed region of the Euler-number/Reynolds-number plane. I named the interval on the Reynolds axis ReMin=... to ReMax=... I wrote it in my favourite CASIO FX-850P BASIC, and the "critical" line began:

```
50 REMIN=4.1:REMAX=4.5:...
```

It took me about 20 minutes to wake up and realise that this line will NEVER execute, because it really begins with `50 REM IN=...` (and REM starts a comment in BASIC, so the rest of the line is ignored).

Be clever! Not like me!!! ;)

I'm asking for stories: what was your greatest blind spot in problem solving?!

Csaba

Edited: 2 May 2005, 6:08 a.m. after one or more responses were posted

Re: My greatest programming fault [OT] - Karl Schneider - 05-02-2005

Quote:
I'm asking for stories: what was your greatest blind spot in problem solving?!

A simple but vexing problem that took me a while to solve: In 1995, I was taking a course in the "C" language, after having learned Fortran in the early 1980's. My programming assignment for the class used the absolute-value function. My erroneous program used "abs" to take the absolute value of a floating-point argument, and I just couldn't figure out why my code did not produce correct results.

It is instructive now to point out how the absolute-value syntax differs between Fortran and C:

```
Argument   Fortran   C
--------   -------   ----
float      ABS       fabs
integer    IABS      abs
```

After at least an hour of head-scratching, I corrected "abs" to "fabs", and all was well...
-- KS

Re: My greatest programming fault [OT] - John Limpert - 05-02-2005

Years ago, I read an article on the design and implementation of one of the early FORTRAN compilers, back when core was precious and mag tape was the only available mass storage. When the compiler read the user's source code, the first thing it did was to throw away all of the space characters, since they consumed storage and were ignored anyway. What's the difference between the following two FORTRAN statements?

```
DO 10 I=1,10
DO 10 I=1.10
```

One starts a DO loop; the other does something quite unexpected.

Re: My greatest programming fault [OT] - Namir - 05-02-2005

In 1979 I was taking a course in Numerical Analysis as part of my graduate studies at the University of Michigan in Ann Arbor. I had an assignment to use the LU decomposition and forward/backward substitution routines from a previous assignment to solve a set of nonlinear equations. My programs kept acting very weird and would NEVER give the correct answers. When I placed print statements to trace the values of various variables, they all seemed to be corrupted. I finally talked to a student aide at the computer center, who asked me to explain the program line by line. When I came to the calls of my previous-assignment routines, I discovered that I was using an array that I had not declared. Somehow, the Fortran compiler did NOT catch that error, and I was having memory problems. That bug taught me that messing up arrays will do that, and it helped me catch similar bugs much more quickly in later years.

Namir

Re: My greatest programming fault [OT] - Marcus von Cube - 05-02-2005

```
DO 10 I=1,10
```

This starts a DO loop with variable I. 10 is the line number of the last statement in the loop.

```
DO 10 I=1.10
```

The compiler first gets rid of all unnecessary spaces:

```
DO10I=1.10
```

Then it declares and initializes a REAL variable named DO10I with the value 1.10.
Since all variables can be declared automatically, and since a DO loop has no ending statement like END or NEXT but ends implicitly on a labeled line, the compiler never detects that anything is wrong. AFAIK, this is one of the most famous FORTRAN bugs ever discussed. It caused serious grief for some expensive project, probably at NASA (who knows better?)

Re: My greatest programming fault [OT] - Wayne Brown - 05-02-2005

I once spent a couple of hours trying to figure out why a mainframe program was misbehaving, when I couldn't see anything at all wrong with the code. I finally realized that one of the variable names had a zero where I expected a letter 'O'... Sure enough, there were two nearly identical variables, with that one character being the only difference, and I had been using the wrong one in my modifications. (That's what happens when a lot of people with different coding styles work on the same programs over a number of years, and somebody gets careless.)

My most irritating computer blunder, though, was more a hardware than a software problem (unless you want to count my mental processes as software). I was working late at night on some changes to a Linux device driver, and after compiling and linking it to the kernel I rebooted and discovered that I could no longer mount the filesystem on a CD-ROM that I needed. (The driver I was working on had nothing to do with the CD-ROM drive.) I tried playing an audio CD as a test and found that wouldn't work either. So I checked my kernel configuration and found that I had neglected to include the CD-ROM driver when compiling the kernel. So I recompiled, linked and rebooted... and still couldn't mount the CD. I spent most of the night debugging the kernel and trying to figure out what I had broken. Finally, it dawned on me (literally -- it was just about sunrise) that I had never put the data CD back in the drive!
I had fixed the problem hours earlier (when I recompiled the kernel) and then spent the rest of the night trying to mount an audio CD as if it were a filesystem...

Re: My greatest programming fault [OT] - Larry Corrado - 05-02-2005

One of my favorites was a bug I heard about. It seems a certain geography tutoring program "knew" the capital cities of the world. However, when the user typed in "Quito" to find what country it was the capital of, the program always terminated. The problem was... Well, you see it.

Larry

Re: My greatest programming fault [OT] - Dave Shaffer (Arizona) - 05-02-2005

Not exactly a disaster, but in the days of Algol, which we learned instead of Fortran at my undergraduate institution, named labels were used for jumping around in programs. Well, what do you expect from a bunch of nerds - one of the standard labels everybody used was "hell" - so you could tell your program GO TO HELL. On more than one occasion, you'd forget to actually use such a good label, in which case the compiler would non-judgmentally answer back "HELL is undefined."

Re: My greatest programming fault [OT] - Garth Wilson - 05-02-2005

In one of my first embedded systems, which I was programming in assembly language in the mid-1980's, I still had the tendency to model my names for labels, variables, constants, etc. after the six- to seven-letter ones used in the user languages of the HP handhelds. We had a small keypad, and one variable kept a record of what keys had been pressed since the last time they were all up, allowing valid multi-key combinations. Each bit in the variable corresponded to one key. Since "pressed" takes more characters than "hit", and I wasn't separating the words in names with the underscore character, "keys hit" turned into KEYSHIT. I didn't recognize the foul language in my source code right away, but fortunately it worked better than the name implied.

Edited: 2 May 2005, 6:51 p.m.
Re: My greatest programming fault [OT] - Don Shepherd - 05-02-2005

COBOL program on a VAX 11/780. I made a minor change in a single line of code (I thought), then recompiled, and the compiler said I had something like 500 errors! I accused the system manager of having a faulty COBOL compiler. Then I realized I must have deleted the period at the end of IDENTIFICATION DIVISION, so the compiler thought that all of my data division and executable code was part of the IDENTIFICATION DIVISION; hence the 500 errors, because none of it made sense.

Re: My greatest programming fault [OT] - Palmer O. Hanson, Jr. - 05-03-2005

Back in 1962 I was a Honeywell field engineer supporting the inertial navigation system for an Army drone manufactured by Fairchild. The computer was the M-252, manufactured by Hughes. The vertical steering in inertial mode was not performing as well as needed. A revised mechanization was programmed and reviewed by engineers at the Fairchild home plant, by engineers at the Honeywell home plant, and by a Fairchild engineer and me at the test site. When the inertial steering was engaged on the next flight test, the drone immediately went into a steep dive. The inertial steering was disengaged and level flight was recovered using the backup steering system. I quickly realized what had happened, and within minutes telephone calls from the home plants confirmed that others had realized the same thing. We had programmed the revised vertical steering system in an east-north-up coordinate system when the drone coordinate system was north-east-down, and none of us had caught the discrepancy during debugging.

Later in the test flight the drone went into large and uncontrollable roll oscillations. One of the Fairchild field engineers suggested that we reenter the inertial steering mode. We did that. The roll oscillations ceased and the drone went into a steep dive. We immediately disengaged the inertial mode.
The roll oscillations in the backup mode did not recur for the remainder of the flight.

Re: My greatest programming fault - Interrupts [OT] - Marcus von Cube - 05-03-2005

I started my career as a programmer of microprocessor-controlled communication devices. This was in the eighties, and communication meant serial interfaces. We had an application that interfaced injection molding machines to a central computer system. The operator could watch the display contents of the machine on the screen in his office. We'd installed the hardware and software at a SIEMENS manufacturing site and were proudly presenting the feature to the SIEMENS staff...

...when I recognized that the screen read ESIMENS instead of SIEMENS. I was smiling because I was pretty sure the typo could be seen on the machine as well, and not only on the central computer. Of course I was wrong!

What had happened? The UART interface (Z80-SIO) had a small buffer for incoming characters. The interrupt routine read a character from the SIO and put it into a circular memory buffer, from where it could be accessed by the application software. But I had obviously reenabled interrupts too early, just before the character was inserted into the memory buffer. The SIO issued the next interrupt, the interrupt routine picked up the next character, enabled interrupts again, ... The last character in the hardware buffer thus made it first into the memory buffer, overtaking the other characters waiting to be inserted.

This was a really hard one to debug (no debugger, no in-circuit emulator, just looking at my Z80 assembly listing and a lot of thinking...) I rarely use debuggers even on my more sophisticated projects (C and Java). I still prefer to look at my code and send out logging info to clarify what my program is doing.

Re: My greatest programming fault [OT] - Bram - 05-03-2005

NOT(TRUE) appeared to be TRUE in ALGOL.

I had a variable containing an integer result from something I cannot recall.
Elsewhere I needed this value as a boolean (0 => FALSE, #0 => TRUE). Printing the integer result as a boolean yielded TRUE, for its value was #0. So NOTting this value should have resulted in FALSE, but the computer apparently went into the TRUE branch.

Basic knowledge is: a FALSE constant is zero and a TRUE constant is minus one. The NOT operator turns a TRUE into FALSE and a FALSE into TRUE. As an extra, other values may be considered booleans as well: zero being FALSE and anything else being TRUE.

After a lot of testing I found out that the NOT operator in Algol appeared to do nothing more than invert all the bits of its argument. Zero still turns into TRUE (minus one), and minus one turns into FALSE (zero), BUT all other numbers (which are all considered TRUE) turn into their one's complement representation and are again considered TRUE. I would have expected that the NOT operator, as it in fact requires a boolean argument, would convert the integer to a boolean first before inverting bits. But it didn't. And it didn't tell me, either.

Since then I always program IF NOT(value <> 0) THEN ... or IF (value=0) THEN ... instead of IF NOT(value) THEN ... when 'value' is an integer. Which is of course something you should do in the first place.

Re: My greatest programming fault [OT] - Marcus von Cube - 05-03-2005

Quote:
Since then I always program IF NOT(value <> 0) THEN ... or IF (value=0) THEN ... instead of IF NOT(value) THEN ... when 'value' is an integer. Which is of course something you should do in the first place.

C does it a little better. The "!" operator always yields a boolean value, converting its argument to a boolean first. The main difference from Algol: TRUE is 1, not -1. Guess what "-!!i" returns for a zero and for a non-zero argument i! [-1 for a non-zero argument; zero stays zero.]

In Java, the logical operators "!", "&&" and "||" can only be used on boolean arguments, *which are not integers*!