Hi, all:

The original thread was already quite deep, so I'd like you to consider this. In a nutshell, **Palmer O. Hanson, Jr.** was using the Nth-order Hilbert matrix to test the respective accuracies of assorted brands and models of calculators and computers, by computing its determinant for orders 7 to 10, then analyzing the results and drawing bold conclusions from them.

And then, after you posted a number of results and conclusions, **Rodger Rosenbaum** posted this:

*"The calculator couldn't get that result even if it could do **perfect** floating point arithmetic, because the matrix it's starting with **isn't** the Hilbert matrix."*

and I absolutely agree with him because he's absolutely correct: you're __*not*__ computing the determinant of a Hilbert matrix to begin with, but of an approximation to said matrix, because terms such as 1/3 and 1/7 are represented internally with different accuracy (i.e., as different values) in different calculators.

As the *initial* matrix being used is *not* the same, and as Hilbert matrices are precisely *extremely ill-conditioned*, meaning that the *smallest* change in the input brings about a *large* change in the output, it's fairly obvious that __the results can't be compared__: because the initial matrix is extremely ill-conditioned, you would get *different* results even __in the very same machine and using the very same program__ if you were to start with terms such as 1/3 initially stored as 0.3333333333, and then as 0.333333333333. Try it.
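You can try exactly that experiment on any computer with a few lines of code. The sketch below (a generic Gaussian-elimination determinant in Python, not any particular calculator's algorithm) builds the 7x7 Hilbert matrix twice, with the entries rounded to 10 and to 12 decimal digits respectively, and shows that the two "Hilbert" matrices yield different determinants even though the same program runs on the same machine:

```python
def det(m):
    """Determinant via Gaussian elimination with partial pivoting."""
    m = [row[:] for row in m]          # work on a copy
    n = len(m)
    d = 1.0
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(m[i][k]))   # pivot row
        if m[p][k] == 0:
            return 0.0
        if p != k:
            m[k], m[p] = m[p], m[k]
            d = -d                      # row swap flips the sign
        d *= m[k][k]
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            for j in range(k, n):
                m[i][j] -= f * m[k][j]
    return d

n = 7
# The same Hilbert matrix, "stored" with 10 vs. 12 decimal digits:
# two *different* starting matrices, hence two different determinants.
h10 = [[round(1.0 / (i + j + 1), 10) for j in range(n)] for i in range(n)]
h12 = [[round(1.0 / (i + j + 1), 12) for j in range(n)] for i in range(n)]
print(det(h10))
print(det(h12))
```

Both results are tiny (the true determinant of the 7x7 Hilbert matrix is about 4.8e-25), yet they disagree noticeably, which is the whole point: the ill-conditioning amplifies the difference in the stored inputs.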

On the other hand, Palmer's original idea of using some suitably large, difficult (read "ill-conditioned") matrix is inherently a good accuracy test, as it requires a great many arithmetic operations, many of them carried out near the limits of internal accuracy, where errors are usually greatly amplified by the combined effects of the finite accuracy of the initial values' internal representations *and* the choice of basic arithmetic algorithms.

What to do? The best of both worlds, namely:

- Let's try and use a suitably not-too-large, difficult matrix ...
- ... but let this initial matrix be __exactly representable__, so that the very same matrix is actually processed in all machines, and so that the results are indeed comparable and really shed some valid light on the respective accuracies.

To that effect, I propose we repeat all tests using the **"Albillo's Matrix (tm)"** ( :-) ) that I've carefully crafted for this thread, i.e.:

58 71 67 36 35 19 60
50 71 71 56 45 20 52
64 40 84 50 51 43 69
31 28 41 54 31 18 33
45 23 46 38 50 43 50
41 10 28 17 33 41 46
66 72 71 38 40 27 69

which is a *random* 7x7 matrix (so that even the HP-15C can find its determinant; 8x8 would be too large) with quite *small* elements, yet suitably difficult indeed, as you'll see.

Of course, we could have used the Hilbert matrices, multiplying all elements by some large value in order to make sure they were integers to begin with. But as you can easily check, this results in a matrix which has both very large elements and very small ones at the same time, i.e., one that is *very unbalanced*, which is not a fair test either. On the other hand, my 7x7 "Albillo's Matrix" consists entirely of small, two-digit integer values, *perfectly balanced*: __all the elements are of the same order of magnitude__.
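To see the imbalance for yourself, here is a quick Python sketch of the integer-scaled 7x7 Hilbert matrix. Scaling by the least common multiple of the denominators 1 through 13 (the smallest factor that makes every entry an integer) gives entries ranging over more than an order of magnitude:

```python
from math import lcm

n = 7
# Smallest integer scale: lcm of all denominators 1 .. 2n-1 of the
# Hilbert entries 1/(i+j+1).
scale = lcm(*range(1, 2 * n))
m = [[scale // (i + j + 1) for j in range(n)] for i in range(n)]

flat = [x for row in m for x in row]
print(scale)              # 360360
print(min(flat), max(flat))   # 27720 vs. 360360: a 13x spread
```

Compare that 27720-to-360360 spread (and far worse for orders 8 to 10) with a matrix whose entries all have two digits.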

Try it with all the machines you can. In exact arithmetic, its determinant should come out as **1**. What do you get instead? What's the relative error? You might be surprised.
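For readers who want a reference point before reaching for a calculator, here is a small Python sketch (again generic Gaussian elimination, not any machine's built-in routine) that evaluates the determinant twice: once in exact rational arithmetic with `fractions.Fraction`, confirming the determinant of 1, and once in IEEE double precision, so you can see how much error even 16-digit binary arithmetic accumulates:

```python
from fractions import Fraction

A = [[58, 71, 67, 36, 35, 19, 60],
     [50, 71, 71, 56, 45, 20, 52],
     [64, 40, 84, 50, 51, 43, 69],
     [31, 28, 41, 54, 31, 18, 33],
     [45, 23, 46, 38, 50, 43, 50],
     [41, 10, 28, 17, 33, 41, 46],
     [66, 72, 71, 38, 40, 27, 69]]

def det(m, cast):
    """Determinant via Gaussian elimination; cast chooses the arithmetic."""
    m = [[cast(x) for x in row] for row in m]
    n = len(m)
    d = cast(1)
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(m[i][k]))   # partial pivoting
        if p != k:
            m[k], m[p] = m[p], m[k]
            d = -d
        d *= m[k][k]
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            for j in range(k, n):
                m[i][j] -= f * m[k][j]
    return d

exact = det(A, Fraction)    # exact rational arithmetic: should be 1
approx = det(A, float)      # IEEE double precision
print(exact)
print(approx, abs(approx - 1))
```

The exact computation returns exactly 1; the double-precision result lands close to 1 but not on it, and a 10- or 12-digit calculator will stray much further still.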

Best regards from V.

*Edited: 26 Apr 2005, 10:48 a.m. *