Valentin Albillo gave an example a while ago of a very ill-conditioned linear system:

"If in doubt, you may want to consider this example I've set up, where the solution of some engineering problem requires solving this small 7x7

system of linear equations, where the coefficients are the result of some measurement, say Volts, with just one decimal of precision:

1.3 x1 + 7.2 x2 + 5.7 x3 + 9.4 x4 + 9.0 x5 + 9.2 x6 + 3.5 x7 = 45.3

4.0 x1 + 9.3 x2 + 9.0 x3 + 9.9 x4 + 0.1 x5 + 9.5 x6 + 6.6 x7 = 48.4

4.8 x1 + 9.1 x2 + 7.1 x3 + 4.8 x4 + 9.3 x5 + 3.2 x6 + 6.7 x7 = 45.0

0.7 x1 + 9.3 x2 + 2.9 x3 + 0.2 x4 + 2.4 x5 + 2.4 x6 + 0.7 x7 = 18.6

4.1 x1 + 8.4 x2 + 4.4 x3 + 4.0 x4 + 8.2 x5 + 2.7 x6 + 4.9 x7 = 36.7

0.3 x1 + 7.2 x2 + 0.6 x3 + 3.3 x4 + 9.7 x5 + 3.4 x6 + 0.4 x7 = 24.9

4.3 x1 + 8.2 x2 + 6.6 x3 + 4.3 x4 + 8.3 x5 + 2.9 x6 + 6.1 x7 = 40.7

which has the quite obvious, unique solution:

x1 = x2 = x3 = x4 = x5 = x6 = x7 = 1.0 (Volts)"

His point is well made, but I haven't seen much posted here about what one should do in a case like this. The HP48G can solve his example system and gets the right answer. But let's perturb the first element of the right-hand side so its value is 45.4 rather than 45.3; now the exact solution is:

Transpose[71083, -8, 63379, 63741, 45, -63738, -133356]

It doesn't seem reasonable that such a small perturbation should cause such a large change in the solution, but that is what ill-conditioning does. What can we do about it? (I'll be rounding numerical results in what follows.)
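For anyone who wants to reproduce this away from a calculator, here is a quick sketch in NumPy (assuming Python/NumPy rather than anything HP-specific):

```python
import numpy as np

# Valentin's 7x7 system
A = np.array([
    [1.3, 7.2, 5.7, 9.4, 9.0, 9.2, 3.5],
    [4.0, 9.3, 9.0, 9.9, 0.1, 9.5, 6.6],
    [4.8, 9.1, 7.1, 4.8, 9.3, 3.2, 6.7],
    [0.7, 9.3, 2.9, 0.2, 2.4, 2.4, 0.7],
    [4.1, 8.4, 4.4, 4.0, 8.2, 2.7, 4.9],
    [0.3, 7.2, 0.6, 3.3, 9.7, 3.4, 0.4],
    [4.3, 8.2, 6.6, 4.3, 8.3, 2.9, 6.1],
])
b = np.array([45.3, 48.4, 45.0, 18.6, 36.7, 24.9, 40.7])

x = np.linalg.solve(A, b)      # close to the all-ones solution
print(np.round(x, 3))

b2 = b.copy()
b2[0] = 45.4                   # perturb the first RHS element by .1
x2 = np.linalg.solve(A, b2)    # wildly different solution
print(np.round(x2))

print(np.linalg.cond(A))       # 2-norm condition number, on the order of 1E12
```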

The statisticians have developed methods for dealing with this problem. Let the system to be solved be Ax = b, where A is the design matrix (Valentin's 7x7 matrix) and b is the right-hand-side column vector. The first thing to do is to calculate the correlation matrix of the columns of A; it is:

[  1       .481    .77     .274   -.00652  -.033    .951  ]
[  .481   1        .474   -.166   -.698    -.131    .391  ]
[  .77     .474   1        .676   -.339     .526    .909  ]
[  .274   -.166    .676   1       -.125     .924    .533  ]
[ -.00652 -.698   -.339   -.125   1        -.347   -.0519 ]
[ -.033   -.131    .526    .924   -.347    1        .252  ]
[  .951    .391    .909    .533   -.0519    .252   1      ]

We see that columns 3 and 7 are highly correlated, as are rows 3 and 7 of A itself. I suspect this is a result of Valentin's method of construction.
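If your calculator doesn't have this built in, the correlation matrix above can be reproduced with a one-liner (a NumPy sketch; `rowvar=False` makes each column of A a variable):

```python
import numpy as np

A = np.array([
    [1.3, 7.2, 5.7, 9.4, 9.0, 9.2, 3.5],
    [4.0, 9.3, 9.0, 9.9, 0.1, 9.5, 6.6],
    [4.8, 9.1, 7.1, 4.8, 9.3, 3.2, 6.7],
    [0.7, 9.3, 2.9, 0.2, 2.4, 2.4, 0.7],
    [4.1, 8.4, 4.4, 4.0, 8.2, 2.7, 4.9],
    [0.3, 7.2, 0.6, 3.3, 9.7, 3.4, 0.4],
    [4.3, 8.2, 6.6, 4.3, 8.3, 2.9, 6.1],
])

# Correlation matrix of the columns of A
C = np.corrcoef(A, rowvar=False)
print(np.round(C, 3))
```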

One of the things a statistician would do is delete a row or column that is highly correlated with another and solve the reduced system. In this case, if we drop row 7 (which is highly correlated with row 3) from A and b, the remaining 6x7 system can be solved as an underdetermined system (can your calculator do this easily?). The result is:

Transpose[ 1.09, .961, 1.01, .908, 1.02, 1.11, .964 ]

This result is much more reasonable, and the reduced system is actually not too difficult to solve numerically, because dropping the 7th row from A leaves a matrix with a condition number of 189 (using the 2-norm) instead of the 3.17E12 condition number of the original A matrix.
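In NumPy the underdetermined solve is `lstsq`, which returns the minimum-norm solution; I'm assuming here that this is essentially what the HP48G's least-squares solver does, so the digits may differ somewhat from those quoted above:

```python
import numpy as np

A = np.array([
    [1.3, 7.2, 5.7, 9.4, 9.0, 9.2, 3.5],
    [4.0, 9.3, 9.0, 9.9, 0.1, 9.5, 6.6],
    [4.8, 9.1, 7.1, 4.8, 9.3, 3.2, 6.7],
    [0.7, 9.3, 2.9, 0.2, 2.4, 2.4, 0.7],
    [4.1, 8.4, 4.4, 4.0, 8.2, 2.7, 4.9],
    [0.3, 7.2, 0.6, 3.3, 9.7, 3.4, 0.4],
    [4.3, 8.2, 6.6, 4.3, 8.3, 2.9, 6.1],
])
b = np.array([45.3, 48.4, 45.0, 18.6, 36.7, 24.9, 40.7])

A6, b6 = A[:6], b[:6]                       # drop row 7 from A and b

# Minimum-norm solution of the underdetermined 6x7 system
x, *_ = np.linalg.lstsq(A6, b6, rcond=None)
print(np.round(x, 3))                       # close to the all-ones vector

print(np.linalg.cond(A6))                   # far smaller than cond(A)
```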

Now for a small challenge. It is possible to perturb only two elements of the original A matrix, adding .1 to one and subtracting .1 from the other, giving a matrix which is only a distance of .1414 (in R^(7x7) hyperspace; subtract the two matrices and compute the norm of the difference) away from the original. The perturbed matrix has a condition number of 399. Can you (and your calculator) find the perturbed matrix? If we solve the system with this matrix, we get a result:

Transpose[ 1.6, .961, 1.47, 1.37, 1.02, .651, -.0000167 ]

This time we solved a full-rank system, not an underdetermined one. The result is more reasonable than the original system's, but we can do better.
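The search space for this challenge is small enough to brute-force: 49 choices for the element that gets +.1 times 48 for the element that gets -.1. A sketch (a couple of thousand 7x7 SVDs, which is nothing on a PC, though rather more painful on a calculator):

```python
import numpy as np
from itertools import permutations

A = np.array([
    [1.3, 7.2, 5.7, 9.4, 9.0, 9.2, 3.5],
    [4.0, 9.3, 9.0, 9.9, 0.1, 9.5, 6.6],
    [4.8, 9.1, 7.1, 4.8, 9.3, 3.2, 6.7],
    [0.7, 9.3, 2.9, 0.2, 2.4, 2.4, 0.7],
    [4.1, 8.4, 4.4, 4.0, 8.2, 2.7, 4.9],
    [0.3, 7.2, 0.6, 3.3, 9.7, 3.4, 0.4],
    [4.3, 8.2, 6.6, 4.3, 8.3, 2.9, 6.1],
])

best_cond, best_pair = np.inf, None
for i, j in permutations(range(49), 2):  # ordered pairs of flat indices
    A2 = A.copy()
    A2.flat[i] += 0.1                    # add .1 to one element
    A2.flat[j] -= 0.1                    # subtract .1 from another
    c = np.linalg.cond(A2)
    if c < best_cond:
        best_cond, best_pair = c, (i, j)

print(best_cond)
print(np.unravel_index(best_pair[0], (7, 7)),
      np.unravel_index(best_pair[1], (7, 7)))
print(np.sqrt(2) * 0.1)                  # distance from A, about .1414
```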

Let's find a matrix which is a distance of .1 (in hyperspace) from the original, this time perturbing ALL the elements of the original A matrix (carry at least 6 digits in the perturbations, up to 12 perhaps). The perturbation matrix should have the minimum size (norm) needed to generate a new matrix just about .1 (in hyperspace) from the original. This perturbed matrix should have a condition number of about 399, and it gives a full-rank system which can be easily solved to give:

Transpose[ 1.09, .961, 1.01, .908, 1.02, 1.11, .964 ]

the same result we got when we deleted row 7 and solved the underdetermined system above. Can you and your calculator find the required perturbation matrix?
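One way to construct such a minimal perturbation (a sketch of what I take the idea to be, not necessarily the exact construction intended above) is a rank-one SVD update: take A = U S V^T and add .1 times the outer product of the last singular vectors. That pushes the tiny singular value up to about .1 while moving the matrix exactly .1 in Frobenius norm:

```python
import numpy as np

A = np.array([
    [1.3, 7.2, 5.7, 9.4, 9.0, 9.2, 3.5],
    [4.0, 9.3, 9.0, 9.9, 0.1, 9.5, 6.6],
    [4.8, 9.1, 7.1, 4.8, 9.3, 3.2, 6.7],
    [0.7, 9.3, 2.9, 0.2, 2.4, 2.4, 0.7],
    [4.1, 8.4, 4.4, 4.0, 8.2, 2.7, 4.9],
    [0.3, 7.2, 0.6, 3.3, 9.7, 3.4, 0.4],
    [4.3, 8.2, 6.6, 4.3, 8.3, 2.9, 6.1],
])
b = np.array([45.3, 48.4, 45.0, 18.6, 36.7, 24.9, 40.7])

U, s, Vt = np.linalg.svd(A)
dA = 0.1 * np.outer(U[:, -1], Vt[-1])  # rank one, Frobenius norm exactly .1
A2 = A + dA                            # only the smallest singular value changes

print(np.linalg.norm(dA))              # .1
print(np.linalg.cond(A2))              # a few hundred instead of 3.17E12

x2 = np.linalg.solve(A2, b)
print(np.round(x2, 3))                 # close to the all-ones solution
```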

If we perturb elements of Valentin's original A matrix and solve the system, we get wildly varying results; that is the ill-conditioning at work. And this happens even if the perturbations are .5 LSD (that is, .05) or less, which is within the measurement error of the given numbers.

But if we use the matrix which is .1 away from the original to represent the original, we can perturb the elements by .5 LSD and still get results that are reasonably close to the unperturbed solution. It is the backward error analysis championed by Wilkinson that leads us to this technique. A lot of this is explained in the HP-15C Advanced Functions Handbook.
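To see the difference concretely, perturb a single coefficient by .5 LSD in both the original matrix and a regularized version (here I use the rank-one SVD update as a stand-in for the "matrix .1 away" — an assumption on my part) and compare the effect on the solutions:

```python
import numpy as np

A = np.array([
    [1.3, 7.2, 5.7, 9.4, 9.0, 9.2, 3.5],
    [4.0, 9.3, 9.0, 9.9, 0.1, 9.5, 6.6],
    [4.8, 9.1, 7.1, 4.8, 9.3, 3.2, 6.7],
    [0.7, 9.3, 2.9, 0.2, 2.4, 2.4, 0.7],
    [4.1, 8.4, 4.4, 4.0, 8.2, 2.7, 4.9],
    [0.3, 7.2, 0.6, 3.3, 9.7, 3.4, 0.4],
    [4.3, 8.2, 6.6, 4.3, 8.3, 2.9, 6.1],
])
b = np.array([45.3, 48.4, 45.0, 18.6, 36.7, 24.9, 40.7])

E = np.zeros((7, 7))
E[0, 0] = 0.05                             # half a unit in the last given digit

x_bad = np.linalg.solve(A + E, b)          # far from the all-ones solution

U, s, Vt = np.linalg.svd(A)
A2 = A + 0.1 * np.outer(U[:, -1], Vt[-1])  # well-conditioned stand-in for A
x_ref = np.linalg.solve(A2, b)
x_reg = np.linalg.solve(A2 + E, b)         # same perturbation, mild effect

print(np.max(np.abs(x_bad - 1)))           # large
print(np.max(np.abs(x_reg - x_ref)))       # much smaller
```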

How easy is this to do on your calculator? Can you guess which manufacturer's calculators make it easy? No, it's not Radio Shack.