Marquardt-Levenberg method.


Would someone be interested in sharing some knowledge about the Marquardt-Levenberg method of optimization? A source for reading, or source code for the 48GX or a successor, would be welcome. I was probably among the first to write RPN Newton-Raphson routines for the 48 and the earlier TI-59, etc., so I understand the subject reasonably well; I just haven't studied the techniques involved here.

I use Mathcad version 13 on my desktop, which uses the method embedded in machine language; so does Excel, for that matter.

Anyway, I will appreciate any information you want to share. As far as I can tell from using it, the method is suited for simultaneous solutions of complex non-linear models.



Here is everything you need:

LM should be covered in section 15.5, if I remember correctly.

Good luck,



Here is a routine I put together some years ago in Maple (an earlier version), with some inspiration from the NR routine, and further help from a similar routine by a fellow named Dave Holmgren.

If you can "read" Maple you may find it informative. Seems to work, though it is slow. There is faster curve fitting software out there!


LevMarq := proc(f::name=algebraic, indvar::name, firstguess::list(name=numeric),
                xlist::list(numeric), ylist::list(numeric), tol)
local x, params, reso, npar, npts, drvs, eqs, i, j, w, A, alpha, dy, beta,
      lambda, p, dynew, pnew, ssold, ssnew, niter, pnlist:
  # load linalg quietly
  w := interface(warnlevel): interface(warnlevel=0): with(linalg): interface(warnlevel=w):
  reso := 10^10;
  x := indvar;
  params := map(lhs, firstguess);
  npar := nops(params);
  npts := nops(xlist);
  drvs := grad(rhs(f), vector(params));   # symbolic gradient w.r.t. the parameters
  A := matrix(npts, npar, 0);             # Jacobian of the model at the data points
  alpha := matrix(npar, npar, 0);
  dy := vector(npts, 0);
  dynew := vector(npts, 0);
  beta := vector(npar, 0);                # A^T . dy has npar components
  pnew := vector(npar, 0);
  lambda := .001;
  p := map(rhs, firstguess);
  eqs := zip((s,t) -> s=t, params, p);
  for niter from 1 to 200 do
    for i from 1 to npts do
      for j from 1 to npar do
        A[i,j] := evalf(subs(eqs, subs(x=xlist[i], drvs[j])))
      od
    od;
    alpha := evalm(transpose(A) &* A);
    # Marquardt damping: inflate the diagonal by (1+lambda)
    for j from 1 to npar do alpha[j,j] := (1+lambda)*alpha[j,j] od;
    for i from 1 to npts do
      dy[i] := evalf(ylist[i] - subs(eqs, subs(x=xlist[i], rhs(f))))
    od:
    beta := evalm(transpose(A) &* dy);
    pnew := evalm(vector(p) + linsolve(alpha, beta)):
    pnlist := [seq(pnew[i], i=1..npar)];
    eqs := zip((s,t) -> s=t, params, pnlist);
    for i from 1 to npts do
      dynew[i] := evalf(ylist[i] - subs(eqs, subs(x=xlist[i], rhs(f))))
    od;
    ssold := dotprod(dy, dy);
    ssnew := dotprod(dynew, dynew);
    if ssold <= ssnew then
      lambda := 10*lambda;      # step made things worse: damp harder
    else
      lambda := .1*lambda;      # step accepted: relax the damping
      p := pnlist;
    fi;
    print(sprintf(`Iteration no. %a`, niter));
    print(`Old SSR` = ssold):
    print(`New SSR` = ssnew):
    eqs := zip((s,t) -> s=t, params, p);
    print(eqs);
    print(``);
    if abs(ssnew - reso) <= tol then break fi;
    reso := ssnew;
  od:
  lhs(f) = subs(eqs, rhs(f)):
end:
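For anyone who reads Python more easily than Maple, here is a rough transcription of the same loop. It keeps the same structure (damped normal equations, lambda grown tenfold on a failed step and shrunk on success) but swaps the symbolic grad() for a forward-difference Jacobian. The model function, starting values, and data below are invented purely for illustration.

```python
import numpy as np

def lev_marq(f, p, xdata, ydata, tol=1e-10, max_iter=200):
    """Minimal Levenberg-Marquardt loop in the spirit of the Maple routine:
    damp the normal equations with (1+lambda) on the diagonal, grow lambda
    tenfold on a rejected step, shrink it tenfold on an accepted one."""
    lam = 1e-3
    p = np.asarray(p, dtype=float)
    ss_old = np.inf
    for _ in range(max_iter):
        r = ydata - f(xdata, p)                 # residuals at current parameters
        # forward-difference Jacobian (the Maple code uses symbolic grad())
        J = np.empty((len(xdata), len(p)))
        for j in range(len(p)):
            dp = np.zeros_like(p)
            dp[j] = 1e-7 * max(1.0, abs(p[j]))
            J[:, j] = (f(xdata, p + dp) - f(xdata, p)) / dp[j]
        alpha = J.T @ J
        alpha[np.diag_indices_from(alpha)] *= (1 + lam)   # Marquardt damping
        step = np.linalg.solve(alpha, J.T @ r)
        r_new = ydata - f(xdata, p + step)
        if r_new @ r_new < r @ r:               # accept step, relax damping
            p = p + step
            lam *= 0.1
        else:                                    # reject step, damp harder
            lam *= 10.0
        ss_new = r_new @ r_new
        if abs(ss_new - ss_old) <= tol:
            break
        ss_old = ss_new
    return p

# Example: fit y = a*exp(b*x) to noiseless synthetic data.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * x)
a, b = lev_marq(lambda x, p: p[0] * np.exp(p[1] * x), [1.0, -1.0], x, y)
```

On an easy problem like this the loop converges in a handful of iterations; production code would also guard against a singular damped matrix.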


Thanks, Les.

I went with MathCad, which borrows much from Maple (almost a subset), but what decided it was the difficulty of using Maple at the time (under DOS).
I think it was around Maple 5 that I found it too cumbersome for my needs.

Thank you again for the code. I can learn a lot just attempting to decipher the code in a structured program. This one looks almost more complex than the problems to be solved, but I know that a lot of the extra code is for general application.



I agree it looks complex but fortunately a CAS like Maple is well suited to coding the algorithm since one can use the built in calculus functions like grad(). Linear algebra and matrix things are built in and come naturally--things like matrix inversion, solution of a linear system, vector products. I am not much of a programmer, so I was easily bewildered at first by the NR C and Pascal code. Maple removes the burden of creating matrix and vector handling routines from scratch.

I look forward to getting a look at Namir's stuff once I can get the link to work :)




If you go to my web site you can look at various implementations of the Marquardt algorithm (MATLAB, True BASIC, VB .Net, and Visual C# .Net).



I've written a Mathematica program for it too; it's easy with that. Numerical Recipes is good; I see it was linked.

A similar method is the Newton method. I don't know of any published Levenberg-Marquardt or Newton calculator methods, but one gentleman published HP-41C and TI-58/59 versions of nonlinear regression using Gauss-Newton methods. See the following:

"Nonlinear regression on a pocket calculator", Brian W. Clare, Chemical Engineering, August 23, 1982

It may be of some help. It's a neat article in any case.


Incidentally this article is found in Volume II of "Calculator Programs for Chemical Engineers"

I may have found it at

But I don't know if those sellers are selling volume I or II. You should ask first. Volume II is the yellow one.

Lots of HP calculator programs in these books!

Edited: 27 June 2006, 10:45 p.m.


Thanks, Brian.

Yes, I am familiar with Newton-Raphson. So neat and compact, especially if it applies and you have a closed form for the first derivative. Not bad otherwise even.

I use my laptop and MathCad when I can. One optimizing routine that MathCad provides is labeled "GenFit." This curve-fitting routine gives the user a choice of different methods, like least squares, L-M, etc. I have used a minimax routine in Excel to handle an entire spreadsheet, making the problem pretty complex.

Being naturally curious, I wanted to read up on the logic behind the method and answer questions like, "When it requires derivatives, etc., how does it use them?"

I get some help in the field with the 48/49 calculators performing what would have seemed like miracles in the '60s; just curious, I guess. There are so many functions and subroutines built in that we no longer need the fundamentals to produce results. Kind of sad in a way, when there are still a few of us able to calculate an accurate mortgage payment on a slide rule with adequate log scales.

Thanks again,



This is not Newton-Raphson. This is Gauss-Newton. Newton-Raphson is a root-finding method. Gauss-Newton is, to quote Wikipedia, a nonlinear least squares method. The Marquardt method is a combination of the steepest-descent and Gauss-Newton methods.
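To see concretely how the Marquardt damping blends the two, here is a small numeric sketch. The damped normal equations reduce to the plain Gauss-Newton step as lambda goes to zero, and to a tiny step along the (component-scaled) steepest-descent direction as lambda grows. The Jacobian and residual vector below are toy numbers chosen arbitrarily.

```python
import numpy as np

# The Marquardt step solves (J^T J + lam * diag(J^T J)) * step = J^T r.
J = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # toy Jacobian
r = np.array([1.0, -1.0, 2.0])                        # toy residuals
JtJ, Jtr = J.T @ J, J.T @ r

def lm_step(lam):
    A = JtJ + lam * np.diag(np.diag(JtJ))
    return np.linalg.solve(A, Jtr)

gauss_newton = np.linalg.solve(JtJ, Jtr)  # undamped Gauss-Newton step
small = lm_step(1e-10)   # nearly identical to the Gauss-Newton step
big = lm_step(1e10)      # tiny step, each component sharing the gradient's sign
```

This is why the method behaves like cautious gradient descent far from the minimum (large lambda) and like fast Gauss-Newton near it (small lambda).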


I do appreciate your interest, Bennyboy6, but I don't believe that I showed any confusion about Newton. One of my responses was directed to the person who made a comment about it as an alternative. I tried to set the record straight for all those who came behind the first to respond. Guess I missed something, for I am very familiar with the very compact Newton-Raphson, usually applicable to a single equation as one of (as you pointed out) many root finders available to devices with few resources.

I wanted a description of M&L for personal reasons, and more than one kind person has answered, some with coded subroutines. I appreciate the willingness of many to help without the self-serving side comments we tend to suffer in less courteous forums. As a very qualified professor and mentor reminded me more than once, "There are few foolish questions, but many foolish answers."



No problem Ron. If you live near a university library, check out the Optimization of Chemical Processes book I mentioned. It really does have a good explanation and algorithm, though the above mentioned Numerical Recipes is also a great reference.


Thanks again for all your assistance, Bennyboy. I use Mathcad version 13, which has internal solver routines for various functions, including some curve-fitting routines and differential equations. They seem to work well, and I was wondering about the logic. You mentioned earlier that higher derivatives are used for steeper descent. The "GENFIT" function, for example, offers the user a choice of M&L or an alternative least-squares method, raising my curiosity as to what advantages I might be passing up in accepting one over the other.

From your responses it sounds like the more unknowns and/or parameters involved, and/or the expectation of asymptotes in the vicinity of solutions, the more I should lean towards M&L as the choice. On the other hand, I should lean towards other alternatives for less complex solutions to save time.



I would be very interested if you could share the Excel spreadsheet that does minimax fitting. The email given is a real one, since I quite foolishly like tempting the spambots. Besides, everyone in this forum is pretty harmless, it seems, and I like to make it easy to be reached :)



This is to Les.

I'm not sure what "spambot" translates to, but it sounds a little skeptical about Marquardt-Levenberg being contained within Excel. You have raised my own skepticism about some claims. That old spreadsheet is somewhere in yet-to-be-unpacked moving boxes. What I recall about it is that there was a statement describing M-L as the engine. The problem was an assigned one about minimizing student bus miles to and from school. Several criteria had to be met, such as a range of mixtures of minority students from some seven or eight neighborhoods. I also recall that the Solver had to be loaded to get this problem solved. My memory is a little fuzzy on this, but my recollection tells me that the M&L routine was part of the Solver add-in.

I have been recently reviewing old differential equations as they apply to business and economic models. A by-product of this endeavor roused my curiosity about the descriptions of some of the ODE solvers in MathCad, coupled with some other interpolating routines, "GenFit" among them. As with other uses of different solvers, my experience has been limited to what is embedded in the solvers, not with source code or any discussion of the logic used by the method; that is what I was seeking. Several of you have been nice enough to provide references to study material, and some have sent code. I appreciate all of that, and when I find that bussing problem on a spreadsheet I will post it for you.

As far as code goes, it is supposedly embedded in MathCad, so I shouldn't need to know the code to use it.

Thanks again for all of you and thanks for baiting me,



I have been horribly misunderstood! I am so sorry!

By spambot I was referring to those mysterious automated processes throughout internet-land that crawl about usenet groups and web fora in quest for anything looking like an email address to add to some database someplace to be given or sold to spammers. Nothing to do with you at all. No baiting intended. My goodness, I am so sorry to offend when absolutely nothing was intended of the sort.

I have actually come up with a basic approach to approximate minimax fitting in Excel: one of the columns in the spreadsheet lists the absolute value of either the absolute or fractional errors. The target cell for Solver contains the formula "=MAX(xx:yy)", where xx:yy is the range of cells containing such errors. The goal of Solver is to minimize said target cell by changing the desired parameters as indicated in the worksheet. I find that unless one has a pretty good initial guess, this doesn't seem to be the best way to replicate the established minimax fits in some of the numerical analysis books I have been using. I think the Solver gets stuck in a local minimum somewhere in hyperspace, and settles on that one as the best solution, even if a better one exists elsewhere.
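The same minimize-the-MAX idea can be sketched outside Excel. Below, a crude random-restart hill climb plays Solver's role, fitting a line c0 + c1*x to exp(x) on [0, 1] by minimizing the maximum absolute error over a point grid; the function, grid, and search parameters are all just illustrative choices, not anyone's published method.

```python
import math, random

# Objective mirrors the Solver setup: MAX of the absolute errors
# over the sample points, minimized over the fit parameters.
xs = [i / 50 for i in range(51)]

def max_abs_err(c0, c1):
    return max(abs(math.exp(x) - (c0 + c1 * x)) for x in xs)

random.seed(1)
best = (1.0, 1.0)                    # deliberately rough initial guess
best_err = max_abs_err(*best)
scale = 1.0                          # perturbation size, slowly shrunk
for _ in range(20000):
    c0 = best[0] + scale * random.uniform(-1, 1)
    c1 = best[1] + scale * random.uniform(-1, 1)
    err = max_abs_err(c0, c1)
    if err < best_err:               # keep only improvements
        best, best_err = (c0, c1), err
    scale = max(1e-6, scale * 0.9995)
```

For this particular problem the objective happens to be convex in the two parameters, so the search lands near the known minimax error for e^x on [0, 1] (about 0.106); with wigglier models one would see exactly the local-minimum trap described above.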

Another thing I have learned the hard way is to be sure that the "gold standard" comparator function has adequate precision. The trig and logarithmic functions in Excel seem to give good 15 digit precision, at least when I compare them with Maple for a goodly selection of values. However, for example, functions of recent interest to me such as the error function, complementary error function, and the standard cumulative normal distribution function in some places give no better than 7 or 8 digits, leading me to believe they internally rely on approximations less precise than what I am trying to use them to derive! Many thanks to Rodger Rosenbaum for advising me to import more reliable function values from something like SPSS TableCurve, Maple, or Mathematica.
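A quick way to gauge how many digits two implementations agree on, along the lines of checking a spreadsheet's erf against a better reference, is to take the relative difference and count digits. Here the reference is Python's math.erf and the "suspect" is a truncated Maclaurin series; both the function names and the series cutoff are just for illustration.

```python
import math

def erf_series(x, terms=30):
    # Maclaurin series: erf(x) = 2/sqrt(pi) * sum (-1)^n x^(2n+1) / (n! (2n+1))
    s, term = 0.0, x
    for n in range(terms):
        s += term / (2 * n + 1)
        term *= -x * x / (n + 1)   # advance (-1)^n x^(2n+1) / n!
    return 2.0 / math.sqrt(math.pi) * s

def digits_of_agreement(a, b):
    if a == b:
        return 16                  # agree to full double precision
    return -math.log10(abs(a - b) / max(abs(a), abs(b)))

d = digits_of_agreement(math.erf(0.5), erf_series(0.5))
```

If a library function only manages 7 or 8 digits of agreement against a trusted reference, it is unsafe as a gold standard for deriving higher-precision approximations.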

I have no scepticism whatsoever about whether LM is used for nonlinear least squares fits in Excel. I have no scepticism because I really don't know what the innards of Excel's Solver really do.

Thanks for the stimulating initial post. I will keep plugging away with my own explorations.



To Les,

PLEASE, no apologies! I have been overly sensitive. There remains no doubt from my viewpoint that you are more than qualified to dig around in these matters - in great detail. I, on the other hand, am limited to the viewpoint of user of these many features that have been programmed by the pioneers.

Again, from a user's perspective, it appears to me that L-M serves as a good assistant at the chalkboard when more than a couple of unknown variables are involved along with nonlinearity. The caveat is not to walk away from the modeling or solving, which can wander around the best fit at times. The original computers (humans) are still required to exercise their judgment. We, or at least I, have come full circle to where I should know more about the sensitive aspects of the method.

I noticed the time stamp on your posting. I guess you must be in the U.K., or a night owl like me. You have been a great help to me, and by the range of topics your name appears on, I would have to say you help many of us.

Thank you,



Also, there is a good algorithm discussion of Marquardt's method in "Optimization of Chemical Processes" by Edgar and Himmelblau, McGraw-Hill, 1988, on pages 214-219.

This book is on amazon at:

though mine is an older edition so page numbers are likely different.
