▼
Posts: 1,755
Threads: 112
Joined: Jan 2005
Hi all,
Just came back from (too short) summer holidays, and I've uploaded a new article (in PDF format) to my web site, namely:
HP-71B: Math ROM Baker's Dozen (Vol. 2)
8-page article, second part of a two-part article, featuring 13 assorted mini-topics discussing novel, unusual, or otherwise interesting aspects of using the extremely powerful and versatile Math plug-in ROM for the HP-71B.
This second part features the following advanced mini-topics:
- 8. Tight fitting
Fitting an Nth-degree polynomial to a set of arbitrary data points.
- 9. What about fitting complex data?
Same, for a set of complex data points.
- 10. Multiple integrals made easy
Double integrals, triple integrals and beyond.
- 11. And systems of simultaneous non-linear equations too!
Ditto.
- 12. Synergic eigenvalues
Computing all eigenvalues, real and/or complex, of arbitrary square matrices, as well as their characteristic polynomial.
- 13. Enter the MOB (Matrix Operations Benchmarks)
Timings for SYS, INV, +, -, *, TRN*, TRN, ZER, =(1), =(1,1), FNORM, CNORM, RNORM, and DET.
You can also find there the first part of this article (10 pages), as well as ten other HP-related articles by me, all in PDF format.
Best regards from V.
▼
Posts: 614
Threads: 66
Joined: Jul 2006
Hi Valentin,
Welcome back. Hope you had a great vacation.
Although I don't have the math rom, I am looking forward to reading your article and learning a little about how powerful a module it is.
Just to let you know, I finally put all your S&SMCs in one PDF. The notice has been put in the Archives, so you may have missed it. Following is the link:
All 11 SSMC's in One PDF File
Bill
▼
Posts: 1,755
Threads: 112
Joined: Jan 2005
Hi, Bill:
Bill posted:
"Welcome back. Hope you had a great vacation."
Thank you very much, same to you. In my case it wasn't that great but never mind.
"Although I don't have the math rom, I am looking forward to reading your article and learning a little about how powerful a module it is."
Incredibly powerful, and I mean it. As for not having the ROM, I insist once more that you (and everyone else wanting to try) can google for and download the superb, absolutely-free Emu71 emulator, which does include the Math ROM as well.
That would allow you to enter the examples and routines from my articles and S&SMCs and run them at 20x-70x the speed, with the convenience of large-window output for results instead of a 1-line display, not to mention a full-size keyboard.
That's one of the main reasons I write routines specifically for the HP-71B: anyone can try them out for free, and they're very easy to write, test and debug. That can't be said for most other HP models, even when a Windows emulator/simulator does exist, mostly because of the keyboard/menu emulation involved.
"Just to let you know, I finally put all your S&SMC in one PDF. The notice has been put in the Archives, so you may have missed it."
Why, thank you very much! :-) I'm very glad that kind persons like yourself find my humble attempts to enliven the forum from time to time worthwhile enough to dedicate their precious time to gathering them all in a single, convenient PDF file. This actually encourages me to go on with new installments.
Also, your magnificent work is quite useful to me, because I didn't collect most of those postings at the time, so they were lost to me (which explains some problems with the numbering, etc.).
Again, thank you very much, I certainly hope you'll enjoy my next S&SMC coming soon, as well as more articles I'll upload in the near future.
Best regards from V.
▼
Posts: 2,761
Threads: 100
Joined: Jul 2005
Quote:
As for not having the ROM, I insist once more that you (and everyone else wanting to try) can google for and download the superb, absolutely-free Emu71 emulator, which does include the Math ROM as well.
Hi, Valentin!
Thanks for the tip. I found it at www.hpcalc.org. For some unknown reason, I cannot access J-F Garnier's website; BTW, I cannot access any site in the membres.lycos.fr domain. Too bad, because I lost my copies of your superb "Long Live..." series articles.
Best regards,
Gerson.
▼
Posts: 1,755
Threads: 112
Joined: Jan 2005
Hi, Gerson:
Gerson posted:
"For an unknown reason, I cannot access J-F Garnier's website, BTW I cannot access any site in the membres.lycos.fr domain."
I just tried and no problems here ...
"Too bad because I lost my copies of your superb "Long Live... " series articles."
Contact me via this forum and I'll send you all of them as zipped e-mail attachments. Thanks for your interest and kind words, and
Best regards from V.
▼
Posts: 2,761
Threads: 100
Joined: Jul 2005
Quote:
Contact me via this forum and I'll send you all of them as zipped e-mail attachments.
Thanks, Valentin!
Most likely the trouble in accessing Lycos is due to some firewall settings, both at work and at home. I would appreciate it if you sent me the articles: just remove NSPM from my email and replace AT and DOT with '@' and '.', of course. I'll make a backup and a hardcopy this time!
Best regards,
Gerson.
Posts: 412
Threads: 40
Joined: Mar 2006
Hi, if you have trouble with Lycos, you can also get the Emu71 package from the hpcalc.org archive site:
http://www.hpcalc.org/hp48/pc/emulators/emu71.zip
J-F
▼
Posts: 2,761
Threads: 100
Joined: Jul 2005
Merci beaucoup, J-F!
J'avait déjà le trouvé là. Très bon travail! Peut-être une raison pour achéter un LX-200 parce que les 71B sont trop cher!
(Thank you very much. I had already found it there. Very good job! Maybe a reason for purchasing an LX-200, as the 71B's are too expensive! - Sorry for the bad middle-school French :-)
Best regards,
Gerson.
Posts: 901
Threads: 113
Joined: Jun 2007
The HP-15C Nth Degree Polynomial Fitting Program and the HP-71B Tight Fitting Program both fit an nth-degree polynomial to n + 1 data points. Many references suggest that this is a risky procedure; for example, Glen Kilpatrick's Polynomial Regression Program for the HP-42S, which is in the Museum's software library, includes the following cautionary note:
"... This program will yield a polynomial curve fit of arbitrary power. But this requires more discussion. It should be obvious that a linear regression requires a minimum of two points, a quadratic three, a cubic four, etc.
If that is the case, then why not always pick a power that is one less than the number of points. The curve generated will pass through every point in the set (note that I gloss over multivalued sets, a particular X must yield one and only one Y), and the fit will be "perfect". The problem with this is that inherent measurement noise will be magnified, the curve will gyrate wildly outside the points in the set, the "energy" will be nowhere near minimized, and nothing will have been made simple. ..."
So the obvious questions are:
When is it safe to use just one more data point than the degree of the polynomial to be fitted?
How does one tell?
▼
Posts: 776
Threads: 25
Joined: Jun 2007
"When is it safe to use just one more data point than the degree of the polynomial to be fitted?"
I'd say only when (and if you are sure) that you had a priori expectation that the polynomial degree was appropriate for this particular set of data.
For example, if you expected (or even knew by other means) that a linear relation was expected between two variables, then performaing a linear fit would be the proper thing to do, even if you had only three data pairs. You should also include, however, an estimate of the likely errors in your parameters, which requires of course some idea of the errors attached to each data point. Check any text on statistics for how to do this! (It's been a long time since I worried about such things!)
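For illustration, here is a minimal sketch of such a fit with parameter error estimates, in Python/NumPy rather than the HP-71B BASIC discussed in this thread, and with made-up data values; the 1-sigma uncertainties come from the covariance matrix of the fitted coefficients:

  import numpy as np

  # Made-up sample data, assumed to follow a roughly linear law
  x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
  y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

  # Degree-1 least-squares fit; cov=True also returns the covariance
  # matrix of the fitted coefficients
  coeffs, cov = np.polyfit(x, y, 1, cov=True)
  errors = np.sqrt(np.diag(cov))   # 1-sigma uncertainties

  print("slope     = %.3f +/- %.3f" % (coeffs[0], errors[0]))
  print("intercept = %.3f +/- %.3f" % (coeffs[1], errors[1]))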
Posts: 1,755
Threads: 112
Joined: Jan 2005
Hi, Palmer:
Palmer posted:
"Many references suggest that this is a risky procedure"
It may be a risky procedure for a number of reasons. Some depend on the kind of fitting you're up to (i.e., exact, mathematical data or experimental data), others depend on the size of the problem and the computing precision being used. Both can limit the accuracy and/or usefulness of the polynomial fit you get.
For high-degree exact polynomial fits to a set of data points, you're solving a large system of linear equations. The related matrix approaches a Hilbert matrix as the degree increases, and as has been mentioned a number of times (you can see a thorough discussion in my article "Mean Matrices", published in Datafile V24N4 (July/August 2005)), Hilbert matrices are extremely difficult to handle numerically, as they're almost singular. Thus, exactly fitting an Nth-degree polynomial to N+1 data points involves numerically dealing with a close approximation to an NxN Hilbert matrix, and for large N this will quickly degrade precision to the point of the result being unusable.
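To put a number on that ill-conditioning, here is a small sketch (Python/NumPy, not HP-71B BASIC; the interval [0, 1] and the degrees are arbitrary choices) printing the condition number of the interpolation (Vandermonde) matrix for an exact Nth-degree fit through N+1 equally spaced points; the closely related normal-equations matrix behaves even worse:

  import numpy as np

  # Condition number of the interpolation (Vandermonde) matrix for an
  # exact Nth-degree fit through N+1 equally spaced points on [0, 1]
  for n in (5, 10, 15, 20):
      x = np.linspace(0.0, 1.0, n + 1)
      v = np.vander(x)          # (N+1) x (N+1) matrix of powers of x
      print("degree %2d: cond = %.3e" % (n, np.linalg.cond(v)))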
"The curve generated will pass
through every point in the set [...] and the fit will
be "perfect". The problem with this is that inherent measurement noise will be magnified, the curve will gyrate wildly outside the
points in the set, the "energy" will be nowhere near minimized, and nothing will have been made simple. ..."
That's not so simple. It depends on the nature of the problem, i.e., whether we're dealing with approximate, empirical data obtained via some measurement or experiment, or just with a mathematical problem of fitting some polynomial to a given data set or mathematical function (say y=exp(x)) over some range.
For this last case, approximating a given function by a polynomial, it further depends on the function being approximated AND the data points used in the approximation, i.e., equally spaced or arbitrarily spaced.
For suitable functions (say y=sin(x)) and equally spaced data points, convergence does occur, and the polynomial thus fitted closely resembles the original function as the degree (i.e., the number of points used) increases. But in the general case and for many transcendental functions, this convergence does not hold and the resulting polynomial, though passing through all the data points used, oscillates wildly and diverges from the function everywhere except at the fit set itself.
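The classic illustration of this divergence is Runge's function, f(x) = 1/(1+25x^2). A small sketch (Python/NumPy; the degrees shown are arbitrary) measuring how the maximum error of exact fits through equally spaced points grows with the degree:

  import numpy as np

  # Runge's example: exact polynomial fits to f(x) = 1/(1+25x^2) at
  # equally spaced points oscillate ever more near the interval ends
  def f(x):
      return 1.0 / (1.0 + 25.0 * x * x)

  xs = np.linspace(-1.0, 1.0, 401)   # dense grid to measure the error on

  for n in (4, 8, 12, 16):
      xk = np.linspace(-1.0, 1.0, n + 1)   # n+1 equally spaced nodes
      p = np.polyfit(xk, f(xk), n)         # exact degree-n fit
      err = np.max(np.abs(np.polyval(p, xs) - f(xs)))
      print("degree %2d: max |error| = %.3f" % (n, err))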
"When is it safe to use just one more data point than the degree of the polynomial to be fitted? How does one tell? "
As stated, it's a difficult theoretical question. As a rule of thumb, I would advise the following:
- If the data are experimental and there are a large number of them, forget about an exact fit. Your best bet is to compute a number of polynomial least-squares regressions for degrees 1, 2, 3, ... and, for each degree, compute the RMS of the error. As long as the RMS is decreasing sharply, keep on increasing the degree. When the RMS stops changing significantly, halt (see the sketch after this list).
For instance, if for degrees 1 to 5 you get RMS errors 0.532, 0.183, 0.027, 0.019, 0.015, you should stick to degree 3 (RMS = 0.027). The 'extra' accuracy you seem to get with degrees 4 and 5 is just experimental noise in the data. Least-squares approximation is essentially a smoothing of the data, which will remove most of the noise. Increasing the degree is just a senseless attempt to fit the noise as well.
There's also the fact that, as in the case of an exact polynomial fit, the related matrix for least-squares regression is also nearly singular for large degrees, but this can be avoided by using orthogonal polynomials instead of dealing directly with said matrix. However, for experimental data, high-degree least-squares regressions are seldom justifiable; stick to the diminishing-RMS criterion to decide where to stop. The computed polynomial will often yield values more accurate than the data points themselves, as most experimental noise will be smoothed out by the regression.
- If the data set is a mathematical set of data points or you're trying to fit a suitable polynomial to a mathematical function (say the Gamma function or some Bessel function), there's no smoothing of experimental noise involved, and your best bet is not a least-squares polynomial fit but a minimax polynomial fit. This is a very complex subject and this isn't the place to discuss it; have a look at my Minimax Polynomial Fit article in Datafile V24N5, where it is discussed at length.
Using this kind of approximation will get you the absolute optimal fit, with guaranteed smallest maximum error over the whole specified interval, no oscillations or artifacts whatsoever.
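To make the diminishing-RMS rule from the first point above concrete, a small sketch (Python/NumPy; the cubic-plus-noise data are invented for the example). Note that NumPy's Polynomial.fit works internally in a shifted and scaled basis, which sidesteps the nearly singular matrix mentioned above:

  import numpy as np

  # Degree selection by the diminishing-RMS rule: fit degrees 1, 2, 3,
  # ... and stop when the RMS of the residuals levels off.
  rng = np.random.default_rng(1)
  x = np.linspace(0.0, 4.0, 40)
  y = 1.0 - 2.0 * x + 0.5 * x**3 + rng.normal(0.0, 0.2, x.size)

  for deg in range(1, 7):
      p = np.polynomial.Polynomial.fit(x, y, deg)   # scaled basis
      rms = np.sqrt(np.mean((p(x) - y) ** 2))
      print("degree %d: RMS = %.4f" % (deg, rms))

The RMS drops sharply up to degree 3 and then levels off near the noise level, so degree 3 is the one to keep.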
Best regards from V.
Edited: 26 Aug 2005, 7:53 a.m.
Posts: 727
Threads: 43
Joined: Jul 2005
The problem with fitting an nth-degree polynomial to n+1 data points is that it is an ill-conditioned procedure (the results can fluctuate wildly for relatively small variations in the input data).
If you have a good reason (e.g., some plausible physical model) to assume that fitting an nth-degree polynomial is the right thing to do, try to collect more data points and perform polynomial regression, or, even better, use Chebyshev polynomials to get a fit with limited local error.
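A minimal sketch of that last suggestion (Python/NumPy rather than a calculator program; y=exp(x) and degree 6 are arbitrary choices), using the well-conditioned Chebyshev basis for the least-squares fit:

  import numpy as np

  # Least-squares fit in the Chebyshev basis, as suggested above
  x = np.linspace(-1.0, 1.0, 101)
  y = np.exp(x)                                 # function to approximate

  c = np.polynomial.chebyshev.chebfit(x, y, 6)  # degree-6 Chebyshev fit
  err = np.max(np.abs(np.polynomial.chebyshev.chebval(x, c) - y))
  print("max |error| on the sample grid: %.2e" % err)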
▼
Posts: 901
Threads: 113
Joined: Jun 2007
Thanks to all the contributors, and particularly to Valentin, for the thorough discussions of which method to use. In my old notes I found the following items, which may be of interest:
1. On selecting which degree to use: The least-squares polynomial regression program that I used on the Honeywell Computer Network in the late 1960s didn't calculate the RMS of the residual errors. Rather, it calculated the unbiased standard error, defined as the square root of the sum of the squares of the residual errors divided by the number of data points minus the degree of the polynomial minus one (a small sketch follows these notes).
2. On the difficulty of the matrices involved: We recognized that a least-squares polynomial regression solution is algebraically equal to a linear (exact) solution when the number of data points is equal to the order of the matrix (degree plus one). The solutions in a computer program are not numerically equal, as the least-squares polynomial regression solution is typically degraded by the larger matrix elements associated with the regression.
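A small sketch of the unbiased standard error from item 1 (Python/NumPy; the sample data are invented). Note that the denominator n - d - 1 vanishes for an exact fit (n = d + 1), where the statistic is undefined:

  import numpy as np

  # Unbiased standard error of a degree-d polynomial regression, as
  # defined above: sqrt(SSE / (n - d - 1)), with n data points
  def unbiased_standard_error(x, y, d):
      p = np.polyfit(x, y, d)
      sse = np.sum((np.polyval(p, x) - y) ** 2)  # sum of sq. residuals
      return np.sqrt(sse / (len(x) - d - 1))

  x = np.linspace(0.0, 1.0, 10)                  # made-up sample data
  y = 2.0 * x + 0.3 * x**2 + 0.01 * np.sin(20 * x)
  print("%.5f" % unbiased_standard_error(x, y, 2))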