Using the SVD to solve linear systems



#2

In this post, I'm going to show how to use the SVD to solve ill-conditioned systems. I'll be using the HP49G+ to do the computations. I have a special directory for trying out various SVD-related schemes. Valentin's original ill-conditioned system will be solved. I created several variables in this directory, namely AA, BB, A, B, U, V and S. I use AA and BB to hold backups of Valentin's A and B matrices. The A and B variables hold the linear system under investigation, and U, V and S hold the SVD of the A matrix. Sometimes A and B will hold AA and BB with small changes, so having AA and BB saves one from having to type in Valentin's system again.

Some programs used are as follows (sometimes I leave results on the stack, sometimes not, with no particular organizational scheme in mind!):

->SV (this program computes the Singular Value Decomposition of a matrix on level 1 of the stack, and leaves the results, U V S, on the stack)

<< SVD >>

SV-> (this program expects the same objects on the stack that SVD would leave there, namely U V S. It then makes a suitable diagonal matrix from S and multiplies the 3 objects, re-creating the matrix whose SVD was in U, V and S.)

<< 3 PICK SIZE OBJ-> DROP2 3 PICK SIZE OBJ-> DROP2 2 ->LIST DIAG-> SWAP * * >>

PSU (The product TRANSPOSE(V) * DIAG(1/s) * TRANSPOSE(U) is computed. This program expects an integer N on the stack. It then recalls U, V, and S from the variables and computes a pseudoinverse for the matrix whose SVD is stored in those variables. This pseudoinverse can be rank reduced by means of the integer initially on the stack when PSU is invoked. What it does is to zero N of the reciprocals of the singular values starting with the smallest and working toward the largest. If the matrix SVD stored in the variables is of a square matrix, and if N is 0, then the ordinary inverse is returned.)

<< -> N << U TRN V TRN S OBJ-> OBJ-> DROP ->LIST 1E-499 ADD INV OBJ-> 1 SWAP 2 ->LIST
->ARRY DUP SIZE OBJ-> DROP N - 2 ->LIST { 1 1 } SWAP SUB 3 PICK SIZE OBJ-> DROP2 3
PICK SIZE OBJ-> DROP2 SWAP 2 ->LIST DIAG-> ROT * * >>
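The reciprocal-zeroing step is the heart of PSU. Here is a minimal sketch of just that step in plain Python (not RPL), assuming, as on the HP49G+, that the singular values arrive sorted largest to smallest; the function name is my own, not an HP command:

```python
def truncated_reciprocals(s, n):
    """s: singular values, largest first (the order SVD/SVL return them in).
    Return the reciprocals 1/s, with the reciprocals of the n smallest
    singular values replaced by zero, as PSU does for DIAG(1/s)."""
    keep = len(s) - n
    return [1.0 / x if i < keep else 0.0 for i, x in enumerate(s)]

# the huge 1e11 reciprocal of the smallest singular value is zeroed out
print(truncated_reciprocals([100.0, 1.0, 1e-11], 1))  # [0.01, 1.0, 0.0]
```

With n = 0 and a square matrix this reduces to the ordinary inverse recipe, exactly as described above.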

Cond (this program computes the condition number of a matrix on the stack as the ratio of the largest to the smallest singular values. Be sure to spell the name with a capital "C" and lower case "ond" so as not to conflict with the built-in COND command.)

<< SVL DUP 1 GET SWAP DUP SIZE 1 GET GET / >>
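To make the ratio concrete, here is the same computation in plain Python for a toy diagonal matrix, whose singular values are simply the absolute values of its diagonal entries (so no SVD routine is needed for this sketch; the names are mine):

```python
def cond_from_singular_values(s):
    """Condition number: largest singular value over smallest,
    which is exactly what Cond computes from SVL's output."""
    return max(s) / min(s)

# diag(8, 2, 0.5) has singular values 8, 2, 0.5
print(cond_from_singular_values([8.0, 2.0, 0.5]))  # 16.0
```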

MADJ (This program mean adjusts the columns of a matrix on the stack. It computes the mean of each column and subtracts that value from all the elements of the corresponding column.)

<< DUP TRN DUP SIZE OBJ-> DROP -> c r << r 1 2 ->LIST 1 CON * r / c DIAG-> c r 2 ->LIST
1 CON * TRN - >> >>
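For reference, here is the same operation sketched in plain Python, with the matrix held as a list of rows (my own names, not HP commands):

```python
def mean_adjust_columns(m):
    """Subtract each column's mean from every element of that column,
    as MADJ does. m is a list of rows."""
    rows, cols = len(m), len(m[0])
    means = [sum(m[r][c] for r in range(rows)) / rows for c in range(cols)]
    return [[m[r][c] - means[c] for c in range(cols)] for r in range(rows)]

# each column of the result sums to zero
print(mean_adjust_columns([[1.0, 10.0], [3.0, 20.0]]))  # [[-1.0, -5.0], [1.0, 5.0]]
```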

CORM (This program computes the column-wise correlation matrix of a matrix on the stack. It uses MADJ. It only works right for a square matrix, but it doesn't check for this.)

<< MADJ DUP TRN SWAP * DUP ->DIAG OBJ-> OBJ-> DROP ->LIST SQRT INV OBJ-> 1 ->LIST ->ARRY
DUP SIZE OBJ-> DROP DIAG-> SWAP OVER * * >>
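The same idea sketched in plain Python: mean-adjust the columns, form TRANSPOSE(M) * M, then normalize by the square roots of its diagonal so that element (i, j) is the correlation between columns i and j (my own names; nothing here is an HP command):

```python
import math

def correlation_matrix(m):
    """Column-wise correlation matrix of m (a list of rows)."""
    rows, cols = len(m), len(m[0])
    means = [sum(m[r][c] for r in range(rows)) / rows for c in range(cols)]
    a = [[m[r][c] - means[c] for c in range(cols)] for r in range(rows)]
    # gram[i][j] = dot product of mean-adjusted columns i and j
    gram = [[sum(a[r][i] * a[r][j] for r in range(rows)) for j in range(cols)]
            for i in range(cols)]
    d = [math.sqrt(gram[i][i]) for i in range(cols)]
    return [[gram[i][j] / (d[i] * d[j]) for j in range(cols)] for i in range(cols)]

# two perfectly correlated columns give an off-diagonal entry of 1
c = correlation_matrix([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
print(c[0][1])  # 1.0, up to rounding
```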

Three other programs I gave in another post, IREF, SRREF and TST0, should be in the directory, too.

First, type Valentin's system into AA and BB: the 7 x 7 design matrix into AA and the 7 x 1 column matrix into BB; then recall AA and BB and store them into the A and B variables. Recall A and press Cond and see 3.17E12; this is a very ill-conditioned system, and the usual rule of thumb says that you will lose about LOG10(3.17E12) ~= 12 digits in your solution. Fortunately, the HP49G+ uses 15 digits internally to solve the system with the B/A method, so we actually get a reasonable solution. Clear the stack.
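The rule of thumb just mentioned is easy to check; a quick sketch in plain Python rather than on the calculator:

```python
# Rule of thumb: digits lost in the solution is roughly LOG10 of
# the condition number of the system.
import math

digits_lost = math.log10(3.17e12)
print(round(digits_lost, 1))  # 12.5
```

With 15 internal digits, that leaves only two or three trustworthy digits in the B/A solution.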

Type A ->SV and when it's done (takes a while), use the blue left arrow key as a short-cut for store, and store the 3 objects on the stack into the variables S, V, and U in that order. Type U V S SV-> and see the re-constituted matrix A. Recall A and type -. You can see some tiny residual errors from all the number crunching that has occurred. Clear the stack.

Now type 0 PSU. You will see the inverse of A on the stack. (This assumes that the SVD of A is stored in U, V, and S. If not, then you will see the (pseudo) inverse of whatever matrix had its SVD stored in those variables.) Then recall B to the stack and press *. You have just calculated INVERSE(A) * B, which is a traditional way to solve the system. You won't get the same solution as if you calculate the inverse of A with the 1/x key, because the A matrix is so ill-conditioned that numerical errors become overwhelming and cause differences between the two methods of computing the inverse. Recall the variable S and look at the last element of that vector (the smallest singular value of the matrix A); it is 1.257E-11. This is the distance to the nearest singular matrix; the small size of this number is an indication of severe ill-conditioning. Clear the stack.

When PSU computes the reciprocals of the singular values to create DIAG(1/s), the last one is very large, namely, 7.955E10. This is many orders of magnitude larger than any of the other singular value reciprocals, and its size is the reason the x = INVERSE(A) * B method has difficulties. If the solution vector x computed by this method is used with a perturbed A matrix to compute A * x = B' and the residual (B'-B) is computed, it can be shown that the perturbations can be magnified by a factor about equal to that very large last element in DIAG(1/s).

We can eliminate this difficulty by putting the number 1 on the stack and then executing PSU; then recall B and press *; this replaces that very large reciprocal of the last singular value by zero. We now have a solution for the system that has avoided the ill-conditioning caused by the nearness of the matrix A to a singular matrix. This solution has avoided the magnification of small perturbations (errors) in the A matrix at the expense of the size of the residual. If the solution is computed with the B/A method, the residual is 0. As we compute successive solutions with N PSU B * (N starting at 0 and increasing 1 at a time), the error magnification decreases because the largest singular value reciprocal in DIAG(1/s) is smaller, but the residual size increases. The residual size obtained with 1 PSU B * B - ABS is 2E-9. We can go further; compute 2 PSU B * B - ABS, dropping the last two singular value reciprocals from the solution, and we get a residual size of 7.6E-2. We can go further still, but the size of the residual should not be much larger than the expected error in the elements of the A matrix. Since the A matrix was a series of measurements with only two significant digits of accuracy, the error in each element should be taken to be ±.05 (1/2 LSD), so the solution given by 2 PSU B * is probably acceptable.
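The trade-off between error magnification and residual size shows up even in a toy example. For a diagonal matrix the SVD is trivial (U and V are identity matrices), so a rank-reduced solve amounts to zeroing reciprocals, just as PSU does. This is a plain-Python sketch with made-up numbers, not Valentin's system:

```python
def solve_diagonal(s, b, n_drop):
    """Solve diag(s) * x = b via the SVD recipe x = DIAG(1/s) * b,
    zeroing the reciprocals of the n_drop smallest singular values
    (s is sorted largest first)."""
    keep = len(s) - n_drop
    return [b[i] / s[i] if i < keep else 0.0 for i in range(len(s))]

s = [1.0, 1e-11]                 # condition number 1e11: severely ill-conditioned
b = [1.0, 1.0]

full = solve_diagonal(s, b, 0)   # second component is huge, about 1e11
trunc = solve_diagonal(s, b, 1)  # [1.0, 0.0] -- a modest-sized solution
residual = [s[i] * trunc[i] - b[i] for i in range(len(s))]
print(residual)                  # [0.0, -1.0]: a nonzero residual is the price paid
```

The full solve reproduces b exactly but magnifies any perturbation of the data by about 1e11; the truncated solve stays well-behaved at the cost of a residual of size 1.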

To see the power of the SVD method of solution, modify the A matrix (in the variable A) so that column 7 is identical to column 3. Now the condition number of the system is infinite and it cannot be solved by the B/A method, or the INVERSE(A)*B method, no matter how many significant digits your calculator or computer has. But type A ->SV and then store the U V S objects off the stack into the variables U, V and S. Now type 1 PSU B * and see a solution with a residual size of about 3.7E-9, a quite respectable result.

Try modifying the A and B variables as I suggested in earlier posts and experiment with solutions you get with the SVD method vs. the traditional methods.

You can use the CORM command to see which of the columns of a matrix are highly correlated. If you want to see how well the rows are correlated, transpose the matrix first. Recall the AA matrix and transpose it; then type CORM. You will see that the {7 3} element of the correlation matrix is .999035, indicating that rows 3 and 7 are closely correlated. This is a clue as to how Valentin created this matrix. Normally, a random matrix won't have such a high correlation between any 2 rows. Test this by executing {7 7} RANM CORM repeatedly and examining the off-diagonal elements; you won't see .999 often.


#3

Rodger,

Thank you so much for the two posts. I am one of those folks who is familiar with SVD but not too much. I have translated an SVD implementation from C++ to Visual BASIC and C#, but the heart of the algorithm is more complex than that of LU.

It's going to take me a while to look at the two posts. If you have more on SVD or QR, then please please please share them. I am very interested. All kinds of source code are welcome, be it RPL or a higher-level programming language.

Again, thank you so much for taking the time for these two wonderful posts. I value sharing algorithms (this site and other sites). I welcome you to look at some of the algorithms I posted on my web site.

Namir


#4

Namir said:



Thank you so much for the two posts. I am one of those folks who is familiar with SVD but not too much. I have translated an SVD implementation from C++ to Visual BASIC and C#, but the heart of the algorithm is more complex than that of LU.

It's going to take me a while to look at the two posts. If you have more on SVD or QR, then please please please share them. I am very interested. All kinds of source code are welcome, be it RPL or a higher-level programming language.

There is a very good book with a lot of numerical analysis routines, "Numerical Recipes in Fortran". There is also a version with the programs in C, but I find the Fortran routines easier to translate into Basic. I translated that book's code for the SVD into Basic on the HP71 years ago, but the HP48G and HP49G+ are so handy to use for playing around that I use them nowadays.

Another very good place to get numerical analysis code is:

http://www.netlib.org/liblist.html

Look at the lapack and eispack libraries.

I've looked at your web site before, and I suspect that because of your obvious interest in numerical analysis, you probably already know about the references I just mentioned, but others may not know yet, especially about the netlib site.

Again, thank you so much for taking the time for these two wonderful posts. I value sharing algorithms (this site and other sites). I welcome you to look at some of the algorithms I posted on my web site.

You're welcome.


#5

I am a big fan of Numerical Recipes. In fact, if I am allowed to keep just one numerical analysis book on my shelves (and I do collect them), I keep Numerical Recipes.

I have seen the other web site.

I am a fan of numerical analysis. I like iterative methods that lend themselves to modifications and improvement.

Thanks for the references!

Namir


#6

Hi Rodger,

Can you please relist the programs IREF, SRREF and TST0 or give me a link to them? I'd like to include them in the article so it will be self-sufficient.

Thanks,

Namir

Edited: 2 Apr 2006, 8:51 a.m.

