I don't read all the messages here, so someone might
have already found this problem.

100 enter
0
x,y > angle,r
Result:
90
100

All is well... BUT!

100 enter
-0
x,y > angle,r
Result:
270
100

That looks like a bug to me, though perhaps someone with more knowledge of mathematics might see a use for it.

Further, if you store the -0 in a variable then RCL it into the x register before the polar conversion, the result is again incorrect.

My 33s is a recent purchase, s/n CNA41502062

Steve.
HP33s bug in polar conversion



▼
07-25-2004, 02:55 AM
▼
07-25-2004, 12:28 PM
My 33S does the same thing. But my 32SII doesn't; if you convert (0,100) to polar coordinates, the 32SII returns the expected (100,90). Must be a 33S bug.
07-25-2004, 02:28 PM
(NOTE: This is an extensively-revised post. My first analysis was incorrect.) Steve -- Congrats! You found a bug, all right. Taking it a step further:
keystrokes: 100 ENTER 0 +/-

None of the HP-designed calcs I tested perform like this. The root of the problem seems to be that the KinHPo 33S contains an erroneous algorithm for "arctan2" of (0, y). Obviously, arctan2 (0, y) should be performed by exception: theta = 90 deg if y > 0; theta = -90 deg if y < 0; and theta = 0 should be returned if y = 0. On the 33S, try any (0, y) or (-0, y) coordinate with y not equal to 0; or, any (x, 0) coordinate with x less than 0. Repeat the sequence ">theta,r" followed by ">y,x" over and over. You'll see a rotating sequence of answers which aren't always the same! Some are correct, some are not, and some are not even valid, because the returned angle falls outside the function's range.

(magnitude, theta) is obtained from (x, y) using "arctan2". There is a matrix of 9 cases for calculating the angle with real numbers: X < 0, X = 0, and X > 0, crossed with Y < 0, Y = 0, and Y > 0. My guess is that the 33S is lumping the X = 0 and Y = 0 cases (except 0,0) into the non-zero cases.
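Karl's by-exception logic can be sketched like so (a hypothetical reimplementation, with the argument order (y, x) as in C's atan2; Python's library `math.atan2` already handles all nine sign cases correctly, including signed zeros):

```python
import math

def atan2_degrees(y, x):
    """Two-argument arctangent by exception, covering all nine sign cases."""
    if x == 0.0:                  # x = 0 (or -0): angle is +/-90, or 0 at the origin
        if y > 0.0:
            return 90.0
        if y < 0.0:
            return -90.0
        return 0.0                # (0, 0) is conventionally 0 here
    base = math.degrees(math.atan(y / x))
    if x > 0.0:                   # quadrants I and IV: plain atan is already right
        return base
    # x < 0 (quadrants II and III): shift plain atan into the correct quadrant
    return base + 180.0 if y >= 0.0 else base - 180.0

print(atan2_degrees(100.0, 0.0))    # 90.0, which the 33S gets right
print(atan2_degrees(100.0, -0.0))   # 90.0 as well, where the 33S returns 270
```

Because the comparison `x == 0.0` is true for both +0 and -0, the sign of a zero x never reaches the quadrant logic, which is exactly the exception handling described above.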
-- Karl S.
Edited: 25 July 2004, 5:23 p.m. ▼
07-25-2004, 06:46 PM
I tried these polar conversions on the HP 9s and everything is correct. So, KinHPo knows how to do these calculations but appears to have made mistakes with the 33s. Hopefully, someone will feed this back to them. Regards, John ▼
07-25-2004, 09:38 PM
HP had no obvious place on their website for submitting a bug report, so I used the 'customer enquiry' form. I don't know if this will get anywhere. Maybe someone with a closer link to HP should pass it on. Steve.
07-25-2004, 09:33 PM
That's a fine analysis, Karl, and thanks for reminding me that the angle is named 'theta'. :) Here's another interesting point to contemplate. If you store -0 in a variable and RCL it to the x register you get the same incorrect result. The RCL'd value doesn't display as -0, however. Since zero stored in a variable means that the variable is empty, this implies that the 33s has an index table for the variable list, and that each entry in the table includes a flag that indicates a negative number. This flag remains attached to any value RCL'd from a variable. I've done a few little experiments with -0 and apparently whenever the 33s changes a -0 into a 0 it drops the '-' sign but doesn't change the 'negative' flag associated with the value. If I was a programmer at Kinpo this is where I'd fix the bug. I fixed the conversion problem inside my program by adding 'x=0? ABS' just before the conversion to theta,r. If this test is always done immediately prior to the conversion to theta,r then repeated cycling through the conversions seems to work OK. Steve. ▼
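As an aside, IEEE-754 floating point also has a signed zero, and a correct atan2 treats a -0 in x the same as +0 whenever y is non-zero -- precisely the behavior the 33S is missing. A quick sketch in Python:

```python
import math

neg_zero = -0.0
# -0 compares equal to 0, just as Steve's 'x=0?' test fires on the 33s...
print(neg_zero == 0.0)                       # True
# ...yet it still carries its sign, visible via copysign:
print(math.copysign(1.0, neg_zero))          # -1.0
# A correct atan2 ignores the sign of a zero x when y > 0:
print(math.degrees(math.atan2(100.0, 0.0)))  # 90.0
print(math.degrees(math.atan2(100.0, -0.0))) # 90.0, not 270
# The sign of zero does matter when y is also zero:
print(math.degrees(math.atan2(0.0, -0.0)))   # 180.0
```

So the 33S's internal "negative" flag is not a crazy idea in itself; the bug is in letting it leak into the quadrant selection.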
07-25-2004, 09:54 PM
Congratulations on your finding!! My 2 cents: If you maintain the "negative zero" in the stack or in the LAST X register, it keeps its "negativity". However, doing an X=Y? test against a "positive zero" yields "yes" as an answer! (at least in my limited testing trials it seemed so) ▼
07-26-2004, 02:00 AM
Hi Andres, Thanks for the congratulations, but I wasn't looking for a bug. I just wanted the function to work! As a programmer I know how difficult it is to completely avoid bugs in software. But I'm disappointed by this bug. Polar coordinate conversion is a basic function and there's no excuse for such a defect. How much should I trust the other 33s functions? Steve.
07-26-2004, 12:16 AM
Steve -- I believe that there are several problems with the algorithm. As you noted, behavior is different for x = -0 vs. x = +0 for y not equal to 0. However, the "abs" fix will not cure the malfunctions. As my discussion indicated, errors occurred when either x or y was +0 or -0, except for when x > 0 (Quadrants I and IV). It is erroneous logic in the "atan2" function in the conversion, perhaps related also to the sign-of-mantissa "flag". -- Karl S. ▼
07-26-2004, 12:51 AM
Hi Karl, Yes, I realised that after I'd posted the message. As you say, atan2 is definitely faulty. That's very disappointing, since it means the polar conversion functions on the 33s are essentially useless (unless we know in advance that our inputs to the functions are within a suitable range). How would you suggest we work around the problem? Does this mean that our faulty calculators will be valuable in 20 years' time? hehe... I'm not going to buy 20 of them and keep them in storage, though. Will there still be an HP calculator fan club in 20 years' time? Steve.
07-26-2004, 01:53 PM
( 1.675058 - 1.675058 ) = 2.22045E-16 according to Xcel 2002 when the formula refers to two other cells, as when the cell entry is: = ( B3 - B4 ). Note the parentheses. This bug does not occur without the parentheses.
▼
07-26-2004, 09:05 PM
Quote: That's comforting, thanks. hehe It's also a basic and inexcusable bug! Steve.
07-27-2004, 12:29 PM
Hi Unspellable, Could you delineate the Excel bug a bit more? I tried to duplicate it, but couldn't. I typed 1.675058 into two separate cells, and then made a third cell an equation: =(B3-B4) Returns zero. Also tried it in one cell: =(1.675058-1.675058) Returns zero. What am I doing "wrong"?
▼
07-27-2004, 02:26 PM
Bill, The following is copied from my Xcel sheet. Excel 2002 (10.6501.6626) SP3

A           B           C
1.675058    1.675058    = ( A1 - B1 )
1.675058    1.675058    2.22045E-16

Try with and without the spaces in the formula. As I recall it is not sensitive to the spaces but is sensitive to the parentheses. I turned this over to our computer help organization. They admitted it was a bug, but I have yet to hear back from them on any causes. I'm not sure if it is sensitive to the Xcel version. Meanwhile, you might amuse yourself by trying the following problem which came up in the lab: Put the following numbers in a column:
15000000010 and so on counting down by one to 14999999990 for a total of n = 21. Now ask Xcel for the standard deviation of this column of numbers. Try the same problem on your HP48SX. The HP will come through with flying colors. Xcel will give a nonsense answer. In this case I have figured out the cause of the bug. See if you can do likewise. This bug is present in Xcel 1997 and Xcel 2002. (I have missed my calling. I should have been a software tester. I can break any piece of software in ten minutes.)
▼
07-27-2004, 03:11 PM
The standard deviation problem may not technically be a bug; it is a well-documented limitation of the formula used by Excel (and by most calculators). The formula involves squaring each of the entered values. If very large numbers are entered, then there is roundoff error in the squared values. If the entered values are both very large and very close together, then they may appear to be identical after roundoff. So the calculated standard deviation will be incorrect. This issue also affects most HP calculators (but not the 48 series). It is explicitly acknowledged in some HP calculator manuals, including the 11C, 32SII, and 33S manuals; these manuals also describe workarounds. I doubt that Microsoft is so helpful, however. ▼
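The failure is easy to reproduce in any IEEE double-precision environment (roughly 15-16 significant digits, comparable to the calculators discussed here). A sketch using the thread's own data set, comparing the one-pass "calculator formula" against the two-pass method:

```python
import math

data = [15000000010.0 - k for k in range(21)]  # 15,000,000,010 down to 14,999,999,990
n = len(data)

# One-pass "calculator formula": only running sums, no stored data needed
sum_x = 0.0
sum_x2 = 0.0
for x in data:
    sum_x += x
    sum_x2 += x * x        # each square needs 21 digits; a double keeps ~16
naive_var = (sum_x2 - sum_x * sum_x / n) / (n - 1)

# Two-pass method: compute the mean first, then sum squared deviations
mean = sum_x / n
two_pass_var = sum((x - mean) ** 2 for x in data) / (n - 1)

print(math.sqrt(two_pass_var))  # ~6.20484, the correct sample standard deviation
print(naive_var)                # nonsense: the true variance, 38.5, is lost in roundoff
```

For this data the two-pass subtraction is exact (the deviations are small integers), while in the one-pass formula the tiny true variance must survive as the difference of two sums near 4.7e21, which it cannot.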
07-27-2004, 04:51 PM
Norris, You are onto the reason Xcel gives a nonsense answer. However, I would consider use of this particular algorithm to constitute a bug, or at least an example of bad programming, since while mathematically correct it is a stinker from a numerical-computation standpoint. It's not really the standard algorithm either. The algorithm I learned in statistics class is not subject to this problem and is the one I usually find in textbooks. It is also more intuitively obvious. There is absolutely no reason other than computational ignorance to use the algorithm that Xcel uses. I am no expert programmer, but I know better than that. Bottom line is that Microsoft is essentially a bug factory. I have a very good article on why this is so and what it portends for the future of Microsoft and computer programming. I'll have to dig it up and quote the reference here. For those in a hurry, it was in Analog a few months ago.
07-27-2004, 04:55 PM
PS. I don't know where the problem is mentioned in Microsoft user's information. While it may be well known among users who tune into bulletin boards on Xcel I'd never heard of it until I ran into it. ▼
07-27-2004, 05:10 PM
I can understand why the formula in question is used by most calculators. But I don't see why Excel uses it. At the very least, Microsoft should do a better job of acknowledging the issue. HP covers it thoroughly in the manual that comes with the $55 HP33S. ▼
07-27-2004, 06:23 PM
Why use this algorithm in any calculator? The numerically good formula is just as simple to implement and does not have this problem. Maybe there's a programming issue here I'm not aware of? I'll dig out the good algorithm when I get home tonight and post it here tomorrow. Internet software is not a product of Microsoft and is much more robust. This point is extensively covered in the article I mentioned. Not that I don't encounter bugs when looking at this site, but they are probably not due to the software specific to this site. More likely Windows, or even more likely Microsoft Internet Explorer. ▼
07-27-2004, 08:00 PM
The formula used by the HP48 (which is not subject to the "bug") requires that you subtract the sample mean from each entered value, then square the result. Unfortunately, you can't calculate the sample mean until every value has been entered. So you can't do the subtraction as you enter the data -- you simply don't know what to subtract. In theory, you could work out the standard deviation by entering all of the data twice. You could enter it all once to calculate the sample mean (xm). Then you could enter it again and determine (x - xm)^2 for each value. But entering the data twice is obviously inconvenient. Alternatively, you could store each value as you entered it. Then you could calculate the mean from the stored data, and then subtract the mean from each stored value. The HP48, which essentially has unlimited data storage, uses this approach. However, this method is not effective for most calculators, because they don't have unlimited storage. A 32SII, for example, has only 33 data registers. Even if you were prepared to consume them all, you wouldn't be able to calculate the standard deviation for more than about 30 values (you would need to keep a few registers available for the calculation). So the 32SII uses a different formula. The downside, as we've seen, is that it is subject to roundoff error with large numbers. The upside is that it can handle an essentially unlimited number of samples while using only six storage registers. So much for the 32SII. However, I can't explain why Excel uses the 32SII approach, instead of the HP48 approach. ▼
07-28-2004, 12:37 AM
Quote: It's not round off error causing the problem, it is over flow, which is entirely different. Round off error will only cause small errors unless the errors are multiplied by the properties or repetitiveness of the calculation. Overflow errors, on the other hand, are almost always very large errors. Chris W ▼
07-28-2004, 01:42 AM
I'm blindly following the terminology used by Microsoft and HP to address this issue. Examples: ********** From http://support.microsoft.com/default.aspx?kbid=828125&product=xl2003 : "If you have a version of Excel earlier than Excel 2003, there is the potential for significant round off errors in extreme situations...If you have Excel 2003, there are no round off errors when you conduct this experiment" ********** From the HP33S manual, pp. 11-9 and 11-10: "Since the calculator uses finite precision (12 to 15 digits), it follows that there are limitations to calculations due to rounding...Executing [Sigma-] does not delete any rounding errors that might have been generated in the statistics registers by the original data values."
07-28-2004, 11:09 PM
Chris posted,
Quote:
15,000,000,001 ^ 2 = 225,000,000,030,000,000,001

The result, with 21 significant digits, cannot be represented with full precision on Excel or HP calculators, but is well within the range of the floating-point numbers that either can represent. These statistical summations are performed in the domain of floating-point numbers. Thus, it is indeed roundoff (one word) error that is responsible for the inaccuracies, since the last few significant digits must be dropped. However, if the statistical summations were performed in the domain of integers, then overflow (one word) would be the likely culprit.
Largest unsigned integers (from HP-16C): -- Karl S.
Edited: 29 July 2004, 11:27 p.m.
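The distinction Karl draws is easy to see in a language that offers both arbitrary-precision integers and IEEE doubles; a quick sketch in Python, using his example:

```python
exact = 15000000001 ** 2             # arbitrary-precision integer arithmetic
print(exact)                         # 225000000030000000001 -- all 21 digits

as_float = float(15000000001) ** 2   # IEEE double: ~16 significant digits
print(f"{as_float:.0f}")             # the trailing digits have been rounded away

# The discrepancy is roundoff, not overflow -- the magnitude itself fits easily
print(exact - int(as_float))
```

The float result is off by at most half a unit in the last place of 2.25e20 (a few thousand), which is exactly the kind of small-relative, large-absolute error that swamps a tiny variance.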
07-27-2004, 08:20 PM
Well, what do you know? Microsoft apparently adopted the HP48 approach for use in Excel 2003. The following excerpts are from http://support.microsoft.com/default.aspx?kbid=828125&product=xl2003 ********** In extreme cases where there are many significant digits in the data but a small variance, the old formula leads to inaccurate results. Earlier versions of Excel use a single pass through the data to calculate the sum of the squares of the data values, the sum of the data values, and the count of the data values (sample size). Excel 2002 and earlier then combine these quantities into the formula in the Help files for VAR, VARP, STDEV, and STDEVP. These formulas are also known as "calculator formulas" because they are suitable for use on a hand calculator for small sets of numerically well-behaved data... In Excel 2003, the procedure uses two passes through the data. On the first pass, Excel 2003 calculates the sum and count of the data values, and from these it can calculate the sample mean (average). On the second pass, Excel finds the squared difference between each data point and the sample mean, and then sums these squared differences...Therefore, Excel 2003 gives results that are numerically more stable.
07-28-2004, 02:20 PM
Well, the (x-µ)^2 method has already been brought up here. It's the one I learned in school and find in most textbooks and in the CRC Concise Encyclopedia of Mathematics. I think it is the most intuitively obvious form. It's also the one that most closely models what's going on, especially in areas outside of statistics. This function in both discrete and continuous form is much used in non-statistical areas. Now, after reading some of the postings, I understand why it is not used in simple calculators. No excuse for Xcel though. I have discovered a few other bugs, glitches, or whatever with Xcel. The actual real-world problem in which I discovered this involved the statistics of the cold start-up frequency of an oscillator with a nominal frequency of 15 GHz. It would generally start up within plus or minus 100 Hz of the nominal. It is possible that I am the person instigating the change in Xcel 2003. I first stumbled on this in 1999 or early 2000. I brought it to the attention of our in-house computer help group and they said they would forward it to Microsoft. Our company has a user's feedback arrangement with several software companies including Microsoft. We even have a "hotline" to one of the software houses (not Microsoft) and can get an updated version to fix the simpler bugs we discover within a couple of days. My handle "unspellable" came about because I broke the Yahoo software trying to enter my name. In a fit of pique I chose unspellable as a handle. The local driver's license software went off the rails over the same problem. To think that 25 years ago I had a typewriter that could handle this. unspellable alias Noël Cotter
07-27-2004, 05:55 PM
Quote: That sounds like a challenge. See if you can break the software on this web site. If you can I will fix it. Chris W
07-27-2004, 06:43 PM
▼
07-28-2004, 01:17 AM
In that older thread, there was an interesting post from hugh steers. He posted another algorithm for calculating standard deviation: ********** for a long time, i thought that accurate computation would necessitate the storage of all values (this is how the 48 does it). but no! a simple algorithm known in the 70's was this: n=0, s=0, m=0. loop: get x. n + 1 -> n. x - m -> t. m + t/n -> m. s + t*(x - m) -> s. goto loop. the standard deviation is then sqrt(s/(n-1)). ********** So why don't calculators use this algorithm? I suspect it required too many storage registers. It needs four (for n, m, s, t). If we want to calculate the standard deviation for paired values (x, y), then we need three more registers (another set of m, s, t for the y-values). So that makes 7 registers, as opposed to 6 on most calculators. Doesn't seem like a big difference. But those 7 registers are needed just to calculate the standard deviations (or variances) of x and y. What about mean x, mean y, the slope and y-intercept of the best-fit line, and the correlation coefficient? You would need additional registers to store the data needed to calculate those statistics; I doubt that you could somehow generate them from n, m, s, and t. The neat thing about the standard calculator formulas is that you can generate all of these statistics by using just 6 storage registers. This may have been an important consideration in the days when memory was more expensive.
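hugh steers' loop is what is now commonly called Welford's algorithm. A direct transcription into Python (hypothetical function name), run against the 15,000,000,000 data set from earlier in the thread:

```python
import math

def running_stddev(values):
    """One-pass sample standard deviation using the update rules quoted above."""
    n = 0
    m = 0.0   # running mean
    s = 0.0   # running sum of squared deviations from the running mean
    for x in values:
        n += 1            # n + 1 -> n
        t = x - m         # x - m -> t
        m += t / n        # m + t/n -> m
        s += t * (x - m)  # s + t*(x - m) -> s
    return math.sqrt(s / (n - 1))

print(running_stddev([15000000010.0 - k for k in range(21)]))  # ~6.20484
```

One pass, no stored data, and it still recovers the answer the calculator formula destroys, because each update works with deviations from the running mean rather than with raw squares.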
▼
07-28-2004, 04:12 AM
Hi Norris, I had a closer look at how this 'new' algorithm can be modified to calculate the mean of x, the mean of y, the slope and y-intercept of the best-fit line, and the correlation coefficient. In particular I wanted to see what the register requirements are for a calculator-based implementation that can calculate these statistics. Some observations:

When using the new algorithm to calculate the mean and standard deviation for two-variable statistics, it is necessary to store n, mx, sx, my and sy (5 registers). It is also necessary to have a scratchpad register, t, for the temporary calculations, but this can be reused for both the x-data set and the y-data set.

Traditionally, calculators with two-variable statistics store the following 6 values, which are then used to calculate the mean, standard deviation, slope, intercept, etc.: n, Sigma_x, Sigma_y, Sigma_x^2, Sigma_y^2 and Sigma_xy.

n, the number of elements, is the same in both algorithms.

Sigma_x = n * mx, and similarly for Sigma_y; the squared sums can likewise be recovered from n, the means and the s values. It is not possible to calculate Sigma_xy from n, mx, sx, my and sy, so a calculator using the new algorithm would need to store this sum, just as is done in a calculator using the traditional algorithm. This requires the use of one extra register. Since we can calculate Sigma_x, Sigma_y, etc. from the new algorithm, we can also calculate the slope and y-intercept of the best-fit line, and the correlation coefficient. So, in order to calculate the aforementioned statistics, the new algorithm requires a total of six storage registers, the same as in the traditional algorithm, plus a temporary storage register. It appears that there are more calculations required for the new algorithm versus the traditional algorithm. If ROM space or calculator speed is an issue then this could give the nod to the traditional algorithm. However, the number of storage registers is the same for both algorithms (assuming that the temporary register can be reused for other things also) and the new algorithm gives better estimates of the standard deviation (although not always as good as those obtained by the two-pass method). It appears that this 'new' algorithm could certainly have been used on all but perhaps the very earliest calculators that had ROM and speed limitations. Regards, Eamonn.
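The recovery relations Eamonn relies on can be spot-checked numerically: with m the mean and s the sum of squared deviations from the mean, Sigma_x = n*m and Sigma_x^2 = s + n*m^2. A small sketch with made-up data:

```python
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # arbitrary sample
n = len(data)
m = sum(data) / n                      # mean (mx)
s = sum((x - m) ** 2 for x in data)    # sum of squared deviations (the s register)

print(n * m == sum(data))                         # True: Sigma_x recovered
print(s + n * m * m == sum(x * x for x in data))  # True: Sigma_x^2 recovered
```

(The equalities are exact here because this particular data set involves no rounding; in general they hold up to floating-point error.)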
07-28-2004, 10:39 AM
Hi Norris, The beautiful algorithm presented by Hugh Steers can be modified so that the variable t is eliminated. Only n, m, and s are required. See http://www.hpmuseum.org/cgisys/cgiwrap/hpmuseum/archv014.cgi?read=54153 Cheers, Tom ▼
07-28-2004, 11:24 AM
Thanks Tom and Eamonn! I suppose it would not be very difficult to program this algorithm into a 32SII, 33S, or similar model, to provide an alternative set of statistics functions. You could even use the six "official" statistics registers. A calculator so equipped would outperform Excel 2002!
07-28-2004, 12:07 AM
Quote: I got 6.20484 as the sample standard deviation on a 17BII, which stores each datum; I got 0.00 on the 42S, which does not. Obviously, roundoff error for lack of enough digits explains the 42S result, as Norris has precisely explained, now and earlier. However, I can't quite see why Excel 97 gave 228.9733609 as the sample standard deviation, even if it does use the shortcut "summation method". That STDEV is far greater than even the largest deviation (100 = 10^2). You and Norris are right in saying that Excel shouldn't ever have used the summation method. There's certainly enough RAM and computational horsepower on a PC for doing it rigorously.
07-28-2004, 02:52 PM
Teaches you right for using MicroSoft products! Try interpolating one more, or one less, zero into your numbers (i.e. a string of 6 or 8 zeros in the first value), and Excel (at least the 1997 version) will give zero for the standard deviation! Take out two of the zeroes, and the result is close, and with 4 zeros (1500010) the result is correct (enough). The overall number of significant figures appears to be about 9. My old (! the executable is dated 1989) version of QuattroPro (version 2 or 3, I think  for DOS) gives almost the correct answer for these numbers, but fails rather badly for larger numbers. So, it is (was  long before these versions of Excel) using the correct algorithm, and with a total of about 11 significant figures. And, WordPerfect is FAR superior to Word! (IMHO) ▼
07-29-2004, 01:25 AM
I have Quattro Pro 10 (copyright 2001). The online help doesn't indicate how the @STD function is calculated, but it seems to behave the same way as Excel 97 (i.e., not very well with large, closely spaced numbers). Edited: 29 July 2004, 1:26 a.m.
07-29-2004, 01:49 AM
So you don't have any faith in Microsoft? Well, try calculating standard deviations with the Microsoft Calculator! You know, the one in the "Accessories" menu that you never use. Yes, you can do statistical calculations with it (though you'll probably have to check the help file to figure out how it's done) Version 5.1, copyright 2001, appears to calculate standard deviations correctly given consecutive integers of magnitude 10^30
07-29-2004, 08:58 AM
i is the imaginary unit. i^2 = -1 (usually). In Xcel, IMSQRT(-1) = 6.12303176911189E-017+i. I believe this to be due to Xcel carrying out complex operations in polar mode. cosine(pi/2) produces a rounding error in many calculators.
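The mechanism is easy to reproduce (a sketch in Python; the exact tiny real part depends on the value of pi the software carries, so it need not match Xcel's digit-for-digit):

```python
import math
import cmath

# pi/2 is not exactly representable, so cos() of the stored value is not 0:
print(math.cos(math.pi / 2))   # 6.123233995736766e-17

# Taking sqrt(-1) the polar way, exp(0.5 * log(-1)), leaks that error into
# the real part, much as IMSQRT appears to:
z = cmath.exp(0.5 * cmath.log(-1 + 0j))
print(z)                       # (6.123233995736766e-17+1j)

# A rectangular-form algorithm avoids it: Python's cmath.sqrt returns exactly 1j
print(cmath.sqrt(-1 + 0j))     # 1j
```

So the stray 6.12e-17 is not random noise; it is cos of the best double-precision approximation to pi/2.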
07-27-2004, 10:19 AM
This same bug was found several months ago (just after the public release) and was reported in a reply to another post. In that case the author had used a program to compute square roots of negative numbers (like the ones posted around the same time; he had posted a listing) that returned a complex number on the stack in the standard 32sII/33s format. If you took sqrt(-4) using this method (resulting in x: 0, y: 2) and then did a polar conversion, it returned an angle of 270 degrees, clearly outside the range of the function. I tried this on my calc with the same result. This is the same bug that is described by Steve. I guess this means that the 0 in the x-register is really a -0.
07-27-2004, 12:49 PM
This bug is very disappointing -- especially for its effect on complex number operations.

In the 32sii, each register "costs" you memory. If you store "0" into a register, there is no change in memory usage. So, "-0" becomes "0". In the 33S, registers are FREE -- no memory is consumed to store into them! So, they all must be part of one big block. I don't know enough about the hardware architecture, but could this be the case? If so, then why is "-0" treated differently from "6" etc.?
