Posts: 79
Threads: 9
Joined: Jan 1970
Would someone be interested in sharing some knowledge about the Marquardt-Levenberg method of optimization? A source for reading, or source code for the 48GX or a successor, would be welcome. I was probably among the first to write RPN Newton-Raphson routines for the 48 and the earlier TI-59, etc., so I understand the subject reasonably well; I just haven't studied the techniques involved here.
I use Mathcad version 13 on my desktop, which has the method embedded in machine language. So does Excel, for that matter.
Anyway, I will appreciate any information you care to share. As far as I can tell from using it, the method is suited to the simultaneous solution of complex nonlinear models.
Ron
Posts: 1,089
Threads: 32
Joined: Dec 2005
Here is everything you need:
http://www.library.cornell.edu/nr/bookcpdf.html
LM should be covered in section 15.5, if I remember correctly.
Good luck,
Thomas
Posts: 1,368
Threads: 212
Joined: Dec 2006
Here is a routine I put together some years ago in Maple (an earlier version), with some inspiration from the NR routine and some further assistance and inspiration from a similar routine by a fellow named Dave Holmgren.
If you can "read" Maple you may find it informative. It seems to work, though it is slow. There is faster curve-fitting software out there!
Les
LevMarq := proc(f::name=algebraic, indvar::name, firstguess::list(name=numeric),
                xlist::list(numeric), ylist::list(numeric), tol)
local x, params, reso, npar, npts, drvs, eqs, i, j, w, A, alpha, dy, beta,
      lambda, p, dynew, pnew, ssold, ssnew, niter, pnlist:
w := interface(warnlevel): interface(warnlevel=0): with(linalg): interface(warnlevel=w):
reso := 10^10;
x := indvar;
params := map(lhs, firstguess);
npar := nops(params);
npts := nops(xlist);
# partial derivatives of the model with respect to each parameter
drvs := grad(rhs(f), vector(params));
A := matrix(npts, npar, 0);
alpha := matrix(npar, npar, 0);
dy := vector(npts, 0);
dynew := vector(npts, 0);
beta := vector(npar, 0);
pnew := vector(npar, 0);
lambda := .001;
p := map(rhs, firstguess);
eqs := zip((s,t) -> s = t, params, p);

for niter from 1 to 200 do
  # Jacobian evaluated at the current parameter values
  for i from 1 to npts do
    for j from 1 to npar do
      A[i,j] := evalf(subs(eqs, subs(x = xlist[i], drvs[j])));
    od
  od;
  alpha := evalm(transpose(A) &* A);
  # Marquardt damping: inflate the diagonal by (1 + lambda)
  for j from 1 to npar do alpha[j,j] := (1 + lambda)*alpha[j,j] od;
  for i from 1 to npts do
    dy[i] := evalf(ylist[i] - subs(eqs, subs(x = xlist[i], rhs(f))))
  od:
  beta := evalm(transpose(A) &* dy);
  pnew := evalm(vector(p) + linsolve(alpha, beta)):
  pnlist := [seq(pnew[i], i = 1..npar)];
  eqs := zip((s,t) -> s = t, params, pnlist);
  for i from 1 to npts do
    dynew[i] := evalf(ylist[i] - subs(eqs, subs(x = xlist[i], rhs(f))))
  od;
  ssold := dotprod(dy, dy);
  ssnew := dotprod(dynew, dynew);
  # accept the step only if the sum of squares improved
  if ssold <= ssnew then
    lambda := 10*lambda;
  else
    lambda := .1*lambda;
    p := pnlist;
  fi;
  print(sprintf(`Iteration no. %a`, niter));
  print(`Old SSR` = ssold):
  print(`New SSR` = ssnew):
  eqs := zip((s,t) -> s = t, params, p);
  print(eqs);
  print(``);
  if abs(ssnew - reso) <= tol then break fi;
  reso := ssnew;
od:
lhs(f) = subs(eqs, rhs(f)):
end:
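[Editor's note] For anyone who doesn't "read" Maple, here is a rough pure-Python sketch of the same loop: build the Jacobian, damp the diagonal of the normal equations by (1 + lambda), take the step only if the sum of squares improves, and adjust lambda accordingly. This is my own illustration, not part of the routine above; the two-parameter exponential model and the data are made up for the example.

```python
import math

def levmarq(model, grad, p, xs, ys, tol=1e-12, max_iter=200):
    """Levenberg-Marquardt for a two-parameter model.

    model(p, x) -> predicted y; grad(p, x) -> [df/dp0, df/dp1].
    Solves the damped 2x2 normal equations directly."""
    lam = 1e-3
    ss_old = sum((y - model(p, x)) ** 2 for x, y in zip(xs, ys))
    for _ in range(max_iter):
        J = [grad(p, x) for x in xs]                   # Jacobian rows
        r = [y - model(p, x) for x, y in zip(xs, ys)]  # residuals
        # alpha = J^T J with its diagonal inflated by (1 + lambda)
        a11 = (1 + lam) * sum(g[0] * g[0] for g in J)
        a22 = (1 + lam) * sum(g[1] * g[1] for g in J)
        a12 = sum(g[0] * g[1] for g in J)
        # beta = J^T r
        b1 = sum(g[0] * e for g, e in zip(J, r))
        b2 = sum(g[1] * e for g, e in zip(J, r))
        det = a11 * a22 - a12 * a12
        trial = [p[0] + (a22 * b1 - a12 * b2) / det,   # Cramer's rule
                 p[1] + (a11 * b2 - a12 * b1) / det]
        ss_new = sum((y - model(trial, x)) ** 2 for x, y in zip(xs, ys))
        if ss_new < ss_old:
            p = trial
            lam *= 0.1           # good step: trust Gauss-Newton more
            done = ss_old - ss_new <= tol
            ss_old = ss_new
            if done:
                break
        else:
            lam *= 10.0          # bad step: lean toward steepest descent
    return p

# Fit y = a*exp(b*x) to exact data generated with a = 2, b = 0.5
model = lambda p, x: p[0] * math.exp(p[1] * x)
grad = lambda p, x: [math.exp(p[1] * x), p[0] * x * math.exp(p[1] * x)]
xs = [0.1 * i for i in range(10)]
ys = [2.0 * math.exp(0.5 * x) for x in xs]
a, b = levmarq(model, grad, [1.0, 1.0], xs, ys)
```

Since the data are exact, the fit should recover a ≈ 2 and b ≈ 0.5 from the starting guess [1, 1].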
Posts: 210
Threads: 45
Joined: Jan 1970
Thanks, Les.
I went with MathCad, which borrows much from Maple (almost a subset of it), but it was the difficulty of using Maple at the time (under DOS) that decided me.
I think it was around Maple 5 that I found it too cumbersome for my needs.
Thank you again for the code. I can learn a lot just attempting to decipher the code in a structured program. This one looks almost more complex than the problems to be solved, but I know that a lot of the extra code is for general application.
Ron
Posts: 1,368
Threads: 212
Joined: Dec 2006
I agree it looks complex, but fortunately a CAS like Maple is well suited to coding the algorithm, since one can use the built-in calculus functions like grad(). Linear algebra and matrix operations are built in and come naturally: things like matrix inversion, solution of a linear system, and vector products. I am not much of a programmer, so I was easily bewildered at first by the NR C and Pascal code. Maple removes the burden of creating matrix and vector handling routines from scratch.
I look forward to getting a look at Namir's stuff once I can get the link to work :)
Les
Posts: 2,247
Threads: 200
Joined: Jun 2005
Ron,
If you go to my web site you can look at various implementations of the Marquardt algorithm (MATLAB, True BASIC, VB .NET, and Visual C# .NET).
Namir
Posts: 11
Threads: 2
Joined: Jan 1970
I've written a Mathematica program for it too; it's easy with that. Numerical Recipes is good; I see it was already linked.
A similar method is the Newton method. I don't know of any published Levenberg-Marquardt or Newton calculator programs, but one gentleman published HP-41C and TI-58/59 versions of a nonlinear regression using Gauss-Newton methods. See the following:
"Nonlinear regression on a pocket calculator", Brian W. Clare, Chemical Engineering, August 23, 1982
It may be of some help? It's a neat article in any case.
Posts: 11
Threads: 2
Joined: Jan 1970
Incidentally this article is found in Volume II of "Calculator Programs for Chemical Engineers"
I may have found it at amazon.com
But I don't know if those sellers are selling volume I or II. You should ask first. Volume II is the yellow one.
http://www.amazon.com/exec/obidos/searchhandleurl/index=books&fieldtitleid=377541&vefield=none/qid=1151461944/sr=123/00264711006889649
Lots of HP calculator programs in these books!
Edited: 27 June 2006, 10:45 p.m.
Posts: 79
Threads: 9
Joined: Jan 1970
Thanks, Brian.
Yes, I am familiar with Newton-Raphson. So neat and compact, especially if it applies and you have a closed form for the first derivative. Not bad otherwise, even.
I use my laptop and MathCad when I can. One optimizing routine that MathCad offers is labeled "GenFit." This curve-fitting routine gives the user a choice of different methods, like least squares, M-L, etc. I have used a minimax routine in Excel to handle an entire spreadsheet, making the problem pretty complex.
Being naturally curious, I wanted to read up on the logic behind the method and answer questions like, "When it requires derivatives, etc., how does it use them?"
I get some help in the field with the 48/49 calculators performing what would have seemed like miracles in the '60s; just curious, I guess. There are so many functions and subroutines built in that we no longer need the fundamentals to produce results. Kind of sad in a way, when there are still a few of us able to calculate an accurate mortgage payment on a slide rule with adequate log scales.
Thanks again,
Ron
Posts: 11
Threads: 2
Joined: Jan 1970
This is not Newton-Raphson. This is Gauss-Newton. Newton-Raphson is a root-finding method. Gauss-Newton is, to quote Wikipedia, a nonlinear least-squares method. The Marquardt method is a combination of the steepest-descent and Gauss-Newton methods.
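[Editor's note] To make that combination concrete, here is a tiny Python illustration (the Jacobian and residual numbers are made up): the damping parameter lambda interpolates between the two regimes, giving the pure Gauss-Newton step as lambda goes to 0 and a small step along the gradient direction when lambda is large.

```python
def marquardt_step(J, r, lam):
    """Damped one-parameter step: (1 + lam) * (J^T J) * delta = J^T r."""
    JtJ = sum(j * j for j in J)
    Jtr = sum(j * e for j, e in zip(J, r))
    return Jtr / ((1.0 + lam) * JtJ)

J = [1.0, 2.0, 3.0]    # hypothetical Jacobian entries
r = [0.5, 0.2, -0.1]   # hypothetical residuals
gauss_newton = marquardt_step(J, r, 0.0)  # lam -> 0: pure Gauss-Newton step
damped = marquardt_step(J, r, 1e6)        # large lam: tiny gradient-direction step
```

Both steps point the same way here; the damped one is just much shorter, which is what makes the method robust far from the solution.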
Posts: 79
Threads: 9
Joined: Jan 1970
I do appreciate your interest, Bennyboy6, but I don't believe that I showed any confusion about Newton. One of my responses was directed to the person who made a comment about it as an alternative. I tried to set the record straight for all those who came behind the first to respond. Guess I missed something, for I am very familiar with the very compact Newton-Raphson, usually applicable to a single equation, as one of (as you pointed out) many root finders available to devices with few resources.
I wanted a description of M&L for personal reasons, and more than one kind person has answered, some with coded subroutines. I appreciate the willingness of many to help without the self-serving side comments we tend to suffer on less courteous forums. As a very qualified professor and mentor reminded me more than once, "There are few foolish questions, but many foolish answers."
Ron
Posts: 11
Threads: 2
Joined: Jan 1970
No problem Ron. If you live near a university library, check out the Optimization of Chemical Processes book I mentioned. It really does have a good explanation and algorithm, though the above mentioned Numerical Recipes is also a great reference.
Posts: 79
Threads: 9
Joined: Jan 1970
Thanks again for all your assistance, Bennyboy. I use Mathcad version 13, which has internal solver routines for various purposes, including some curve-fitting routines and differential equation solvers. They seem to work well, and I was wondering about the logic. You mentioned earlier that higher derivatives are used for steeper descent. The "GENFIT" function, for example, offers the user a choice of M&L or an alternative "Least Squares," raising my curiosity as to what advantages I might be passing up in accepting one over the other.
From your responses it sounds like the more unknowns and/or parameters involved, and/or the expectation of asymptotes in the vicinity of solutions, the more I should lean towards M&L as the choice. On the other hand, I should lean towards the other alternatives for less complex solutions, to save time.
Ron
Posts: 1,368
Threads: 212
Joined: Dec 2006
I would be very interested if you could share the Excel spreadsheet that does minimax fitting. The email given is a real one, since I quite foolishly like tempting the spambots. Besides, everyone in this forum is pretty harmless, it seems, and I like to make it easy to be reached :)
Les
Posts: 79
Threads: 9
Joined: Jan 1970
This is to Les.
I'm not sure what "spambot" translates to, but it sounds a little skeptical about Marquardt-Levenberg being contained within Excel. You have raised my own skepticism about some claims. That old spreadsheet is somewhere in yet-to-be-unpacked moving boxes. What I recall about it was that there was a statement naming M-L as the engine. The problem was an assigned one about minimizing student bus miles to and from school. Several criteria had to be met, such as a range of mixtures of minority students from some seven or eight neighborhoods. I also recall that the Solver had to be loaded to get this problem solved. My memory is a little fuzzy on this, but my recollection tells me that the M&L routine was part of the Solver add-in.
I have been recently reviewing old differential equations as they apply to business and economic models. A byproduct of this endeavor roused my curiosity about the descriptions of some of the ODE solvers in MathCad, coupled with some other interpolating routines, "GenFit" among them. As with other uses of different solvers, my experience has been limited to what is embedded in the solvers, not with source code or any discussion of the logic used by the method; that is what I was seeking. Several of you have been nice enough to provide references to study material; some have sent code. I appreciate all of that, and when I find that bussing problem spreadsheet I will post it for you.
As far as code goes, it is supposedly embedded in MathCad, so I shouldn't need to know the code to use it.
Thanks again for all of you and thanks for baiting me,
Ron
Posts: 1,368
Threads: 212
Joined: Dec 2006
I have been horribly misunderstood! I am so sorry!
By spambot I was referring to those mysterious automated processes throughout internetland that crawl about usenet groups and web fora in quest for anything looking like an email address to add to some database someplace to be given or sold to spammers. Nothing to do with you at all. No baiting intended. My goodness, I am so sorry to offend when absolutely nothing was intended of the sort.
I have actually come up with a basic approach to approximate minimax fitting in Excel: basically, one of the columns in the spreadsheet lists the absolute value of either the absolute or the fractional errors. The target cell for Solver contains the formula "=MAX(xx:yy)", where xx:yy is the range of cells containing such errors. The goal of Solver is to minimize said target cell by changing the desired parameters as indicated in the worksheet. I find that unless one has a pretty good initial guess, this doesn't seem to be the best way to replicate the established minimax fits in some of the numerical analysis books I have been using. I think Solver gets stuck in a local minimum somewhere in hyperspace, and settles on that one as the best solution, even if a better one exists elsewhere.
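[Editor's note] For what it's worth, the same minimize-the-MAX idea can be sketched outside Excel. Here is a minimal Python illustration with made-up data, using a one-parameter problem (fit a constant c by minimizing max|y − c|) so the answer, midway between the smallest and largest y, is easy to check:

```python
def minimax_objective(c, ys):
    """The spreadsheet's =MAX(error range): largest absolute error."""
    return max(abs(y - c) for y in ys)

def fit_constant_minimax(ys, lo, hi, iters=200):
    """Ternary search: max|y - c| is convex in c, so the search converges."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if minimax_objective(m1, ys) < minimax_objective(m2, ys):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

ys = [1.0, 2.5, 4.0, 7.0]            # made-up data
c = fit_constant_minimax(ys, min(ys), max(ys))
# the minimax constant lies midway between min and max: (1.0 + 7.0) / 2
```

With several free parameters the objective is no longer convex, which matches Les's experience of Solver getting stuck in local minima without a good starting guess.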
Another thing I have learned the hard way is to be sure that the "gold standard" comparator function has adequate precision. The trig and logarithmic functions in Excel seem to give good 15-digit precision, at least when I compare them with Maple for a goodly selection of values. However, functions of recent interest to me, such as the error function, the complementary error function, and the standard cumulative normal distribution function, in some places give no better than 7 or 8 digits, leading me to believe they internally rely on approximations less precise than what I am trying to use them to derive! Many thanks to Rodger Rosenbaum for advising me to import more reliable function values from something like SPSS TableCurve, Maple, or Mathematica.
I have no scepticism whatsoever about whether LM is used for nonlinear least-squares fits in Excel. I have no scepticism because I really don't know what the innards of Excel's Solver really do.
Thanks for the stimulating initial post. I will keep plugging away with my own explorations.
Les
Posts: 79
Threads: 9
Joined: Jan 1970
To Les,
PLEASE, no apologies! I have been overly sensitive. There remains no doubt from my viewpoint that you are more than qualified to dig around in these matters  in great detail. I, on the other hand, am limited to the viewpoint of user of these many features that have been programmed by the pioneers.
Again, from a user's perspective, it appears to me that L-M serves as a good assistant at the chalkboard when more than a couple of unknown variables are involved along with nonlinearity. The caveat is not to walk away from the modeling or solving, which can wander around the best fit at times. The original computers (the humans) are still required to exercise their judgment. We, or at least I, have come full circle to where I should know more about the sensitive aspects of the method.
I noticed the time stamp on your posting. I guess you must be in the U.K., or a night owl like me. You have been a great help to me, and by the range of topics your name appears on, I would have to say you help many of us.
Thank you,
Ron
Posts: 11
Threads: 2
Joined: Jan 1970
Also, there is a good discussion of the algorithm for Marquardt's method in "Optimization of Chemical Processes" by Edgar and Himmelblau, McGraw-Hill, 1988, on pages 214-219.
This book is on amazon at:
http://www.amazon.com/gp/product/0071189777/ref=sr_11_1/00264711006889649?ie=UTF8
though mine is an older edition, so the page numbers are likely different.