Free42 matrix-matrix division bug



#12

Hi all,

An hour ago, I was reading an HP Journal article about the HP-15C. It mentions that the "divide" key on that machine divides matrices by computing inv(X) * Y, and points out how convenient that is for solving systems of linear equations.

I got a nasty feeling and checked the behavior of the HP-42S. Sure enough, Y / X yields inv(X) * Y. Which means that the Free42 version, which computes Y * inv(X), is wrong.
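Since matrix multiplication does not commute, the two orders generally give different answers. A minimal plain-Python sketch (the helper functions here are made up for this illustration, not Free42 internals):

```python
# Illustration that inv(X)*Y and Y*inv(X) generally differ, so the operand
# order of matrix "division" matters. 2x2 matrices as nested lists.

def mat_mul(a, b):
    """Multiply two 2x2 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(m):
    """Invert a 2x2 matrix via the adjugate formula."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[ m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det,  m[0][0] / det]]

X = [[1.0, 2.0], [3.0, 4.0]]
Y = [[5.0, 6.0], [7.0, 8.0]]

left  = mat_mul(mat_inv(X), Y)   # mathematically what the 42S returns for Y / X
right = mat_mul(Y, mat_inv(X))   # what Free42 pre-1.0.6 computed

print(left == right)             # False: the two orders disagree
```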

So, we get to version 1.0.6, which fixes this rather serious bug, and one not-so-serious menu glitch. As always, the goods are here.

- Thomas


#13

The 15C had some clever way to deal with ill-conditioned matrices that were close to singular. Does anyone know exactly what it did? Add noise?


#14

According to the HP Journal article (May 1983, pages 33-34):

if a calculated diagonal element of U, which we call a pivot, is found to be zero during the LU decomposition, rather than aborting the matrix calculation and reporting the input matrix to be singular, the HP-15C replaces the zero pivot by a small positive number and continues with the calculation. This number is usually small compared to the rounding errors involved in the calculations. Specifically, it will be about 10^-10 times the largest absolute value of any element in that column of the original matrix.

The article also points out why testing for singularity by looking for zero pivots is unreliable, and why continuing despite finding zero pivots is useful in certain applications.

When I copied the LU decomposition code from Numerical Recipes in Fortran, I noticed something similar; I replaced it with a fatal error because I didn't like what the NR code did: it uses an arbitrary small value, which is not in any way guaranteed to have the right scale, and there's no explanation why this is done, either.

I think I'll use the HP-15C approach instead, now. I'll have to study the issue a bit more but when I make this change to lu_decomp_r() and lu_decomp_c(), that would probably also be a good moment to add proper overflow checking and reporting to those routines.
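The replacement being described might look something like the following sketch. This is plain Python over nested lists, not Free42's actual lu_decomp_r()/lu_decomp_c() routines; it assumes partial pivoting is omitted for brevity, and it does not handle the all-zero-column case:

```python
# Sketch of the HP-15C zero-pivot trick: during Gaussian elimination, a zero
# pivot is replaced by about 1e-10 times the largest absolute value in that
# column of the ORIGINAL matrix, instead of aborting with "Singular Matrix".

def lu_decomp(a):
    """In-place LU decomposition (Doolittle, no pivoting) of a square
    matrix stored as nested lists, patching zero pivots HP-15C style."""
    n = len(a)
    # column maxima of the original matrix, recorded before any elimination
    col_max = [max(abs(a[i][j]) for i in range(n)) for j in range(n)]
    for j in range(n):
        if a[j][j] == 0.0 and col_max[j] > 0.0:
            a[j][j] = 1e-10 * col_max[j]   # the HP-15C replacement pivot
        for i in range(j + 1, n):
            f = a[i][j] / a[j][j]
            a[i][j] = f                    # store the L factor below the diagonal
            for k in range(j + 1, n):
                a[i][k] -= f * a[j][k]
    return a

# A matrix that plain elimination would reject as singular at the first pivot:
m = lu_decomp([[0.0, 1.0], [1.0, 1.0]])
```

With the patched pivot, the decomposition completes and later back-substitution can proceed, at the cost of a huge multiplier appearing in L.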

UPDATE -- I just checked, and the 15C and 42S have another interesting property: they won't report that a matrix is singular even if it contains nothing but zeroes. This is a different case from the one mentioned in the NR book and the HPJ article. Apparently, a zero column in the original matrix simply results in a zero column with a large number (1e99 on the 15C, 9.999...e499 on the 42S) in the diagonal position. Just wondering: does anybody know what that is for?


Edited: 10 Nov 2004, 11:19 a.m.


#15

Kahan must have been absent when that particular decision was taken. Thankfully, from the 48 on (or even the 28? I don't know), a singularity causes an error, as it should.
Replacing a zero pivot with a relatively small number and continuing just returns a meaningless result - and the result is not flagged as such.


#16

For now, I've changed the behavior of Free42 to match that of the 42S. Later on, when I add a proper configuration panel, I'll add options to make zero pivots cause the "Singular Matrix" error again, and also to check flag 24 when overflows occur. (The 42S never signals overflows in matrix-matrix multiplication and division -- I don't know if that's another Kahan-wasn't-there type thing or if it's intentional, but as long as I don't fully understand why these decisions were taken, I'll hedge my bets.)

#17

That is to say - it solves X*A = Y, and returns A. Not altogether equivalent (in a numerical sense) to INV(X)*Y.

Werner


#18

Quote:
That is to say - it solves X*A = Y, and returns A. Not altogether equivalent (in a numerical sense) to INV(X)*Y.

I stand corrected. The Free42 pre-1.0.6 version of matrix-matrix division solved AX=Y and returned A; from 1.0.6 onwards, it solves XA=Y, like the 15C and 42S.


When I was talking about inv(X)*Y vs. Y*inv(X), I was merely trying to express the nature of the bug in the briefest possible terms, but you're right that it is not the same. The computation that the 15C/42S/Free42 *actually* perform is faster and more accurate.


#19

Hi, all;

Forgive me asking for an obvious answer, but I didn't get these last comments. May I?

From Werner, we have:

Quote:
That is to say - it solves X*A = Y, and returns A. Not altogether equivalent (in a numerical sense) to INV(X)*Y.
I see it like this:
X*A = Y   so:
inv(X)*X*A = inv(X)*Y
I*A = inv(X)*Y   (where I is the identity matrix)
A = inv(X)*Y

Thinking of matrix algebra: if Y is an [n×1] matrix (the constants vector) and X is a square [n×n] matrix (the coefficient matrix, the one that must not be singular), then inv(X)×Y is possible and Y×inv(X) is not, unless you change Y's dimension from [n×1] to [1×n] and transpose X. I remember fighting against this idea for some time before accepting the actual order of the operands in the register stack. And the LU decomposition proposed for the HP-15C solution (explained in some detail in the HP-15C Advanced Functions Handbook) leads us to conclude that, in fact, no inversion occurs; in case the coefficient matrix is singular, elements whose values come out zero on the LU diagonal have their values slightly changed (as explained by Thomas here) in order to allow the variables in the linear system to be computed.

If my considerations are proven wrong, please set me on the correct reasoning; I'd gladly accept, and would actually like to read, any corrections to my post.

Cheers.

Luiz (Brazil)


Edited: 11 Nov 2004, 5:10 a.m.


#20

Mathematically speaking, solving for A in XA=Y is the same as computing inv(X)*Y. However, numerically speaking, it's different: solving for A is done by computing the LU decomposition of X, and then performing back-substitution on each of the columns of Y; computing inv(X)*Y is done by computing the LU decomposition of X, then performing back-substitution on the columns of an identity matrix to construct the inverse of X, and finally multiplying the inverse of X with Y. The latter takes a lot more steps, which means it's slower and there's more opportunity for numerical errors to creep in.
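For illustration, here is a rough Python sketch of the two routes for a single right-hand-side column. The helper names are made up for this example (not Free42 code), and a one-shot augmented-matrix elimination stands in for an explicit LU factorization, but the operation-count argument is the same: the "solve" route does one back-substitution, while the "inverse" route does n of them plus a matrix-vector multiply.

```python
# Two ways to get a from X*a = y: solve directly, or build inv(X) and multiply.

def solve(x, y):
    """Solve x*a = y by Gaussian elimination with back-substitution.
    x is an n*n nested list, y a length-n list; both inputs are copied."""
    n = len(x)
    m = [row[:] + [y[i]] for i, row in enumerate(x)]  # augmented matrix
    for j in range(n):                      # forward elimination
        p = max(range(j, n), key=lambda r: abs(m[r][j]))
        m[j], m[p] = m[p], m[j]             # partial pivoting
        for i in range(j + 1, n):
            f = m[i][j] / m[j][j]
            for k in range(j, n + 1):
                m[i][k] -= f * m[j][k]
    a = [0.0] * n
    for i in range(n - 1, -1, -1):          # back-substitution
        s = m[i][n] - sum(m[i][k] * a[k] for k in range(i + 1, n))
        a[i] = s / m[i][i]
    return a

def inv_times(x, y):
    """The slower route: build inv(x) column by column (n solves against
    the identity matrix), then multiply inv(x) by y."""
    n = len(x)
    e = [[1.0 if i == j else 0.0 for i in range(n)] for j in range(n)]
    inv_cols = [solve(x, e[j]) for j in range(n)]     # columns of inv(x)
    return [sum(inv_cols[j][i] * y[j] for j in range(n)) for i in range(n)]

X = [[2.0, 1.0], [1.0, 3.0]]
y = [5.0, 10.0]
a_solve = solve(X, y)        # one elimination, one back-substitution
a_inv = inv_times(X, y)      # n+1 substitutions plus a multiply
# both approximate a = [1.0, 3.0], but the inverse route does far more work
```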


#21

Hi, Thomas;

I forgot to congratulate you on the excellent emulator. Building it from scratch is a great achievement! In fact, I should have added these congratulations before commenting about matrix division. Sorry! What you achieved with the HP-42S emulator is far more important; bugs are part of any design...

Best regards.

Luiz (Brazil)


#22

Thanks for your comments, Luiz!


Actually, I had designed Free42 to be bug-free, but I guess I screwed up somewhere. ;-)


Sigh... There's just so MUCH to test... Whenever I wrote something this size in the past, I had a team of full-time testers, but with a hobby project, unfortunately it's the users who wind up doing much of that task...

Edited: 12 Nov 2004, 7:16 p.m.

