▼
Posts: 325
Threads: 18
Joined: Jul 2006
Could someone with a 35S and five spare minutes use that machine to integrate FRAC(x) from 0.0 to 6.4 in FIX 2 and FIX 4 and post the results here, along with approximate run times? For the timing, an order of magnitude will be sufficient (i.e. "less than a second", "ten seconds or so", "about a minute", etc).
This isn't a trick question. The integrand is periodic and singular. It has a simple analytic solution. I'm curious to see what that machine does on a simple, naive approach to this problem.
▼
Posts: 107
Threads: 16
Joined: Jul 2007
Mine spits out 1.28 after about 2 seconds.
(No difference in result or runtime in fix 2 or fix 4.)
Edited: 21 Oct 2008, 3:38 p.m.
▼
Posts: 325
Threads: 18
Joined: Jul 2006
Thanks. That's about what I expected, given the results on other machines I've tried this on.
Unfortunately, the analytical solution gives 3.08 .
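For anyone following along, the analytic value falls straight out of the geometry: FRAC(x) traces a unit right triangle of area 1/2 over each full period, plus a partial triangle over the leftover piece. A minimal sketch in Python (my own check, not from any calculator):

```python
def frac_integral(b):
    # Integral of FRAC(x) on [0, b] for b >= 0: each full period contributes
    # a right triangle of area 1/2; the leftover piece of base r = b - floor(b)
    # is another right triangle of area r**2 / 2.
    full, r = divmod(b, 1.0)
    return full * 0.5 + r * r / 2.0

print(frac_integral(6.4))  # ~ 3.08
```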
▼
Posts: 1,792
Threads: 62
Joined: Jan 2005
Kiyoshi 
Welcome back!
My first challenge was finding the FP (Fractional Part) function on the HP35s. It's under the "INTG" menu.
I was somewhat surprised that the terrible answers never got any better, and the execution time hardly increased, no matter how many decimal digits were selected. The HP32S, HP15C, and HP42S gave identical results.
The problem is the discontinuities (vertical drop-offs at a single point) in the sawtoothed integrand function, of which there are six. These are not identified at all by the sampling over the entire range from 0.0 to 6.4, which includes a "partial period" from 6.0 to 6.4. However, the sampling falls much closer to those singular points if the integration is performed between 0.0 and 6.0; the integration then takes considerably longer and is accurate in percentage terms, but not quite within the estimated error.
Of course, the best way to deal with this situation is to subdivide the integral such that the endpoints are at the discontinuities, and use Sigma+ to aggregate the results.
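Off-calculator, that subdivide-and-accumulate approach is easy to demonstrate. A sketch in Python (names mine): split at every integer, apply a simple rule to each piece, and sum the pieces just as Sigma+ would; since FRAC is linear between its breaks, even the midpoint rule is exact on each piece.

```python
import math

def frac(x):
    return x - math.floor(x)

def integrate_piecewise(a, b):
    # Split [a, b] at every integer (the discontinuities of FRAC), then
    # apply the midpoint rule on each piece. FRAC is linear between breaks,
    # so the midpoint rule is exact there; the running sum plays the role
    # of accumulating partial integrals with Sigma+.
    breaks = [a] + [k for k in range(math.ceil(a), math.floor(b) + 1)
                    if a < k < b] + [b]
    total = 0.0
    for lo, hi in zip(breaks, breaks[1:]):
        total += (hi - lo) * frac((lo + hi) / 2.0)
    return total

print(integrate_piecewise(0.0, 6.4))  # ~ 3.08
```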
All of this is discussed in detail within the HP34C and HP15C handbooks, available on the MoHPC DVD/CD sets. Names of these references are listed in my one (and only) Forum Article #556:
HP SOLVE/INTEG on all RPN-based models
The methods of numerical integration on the HP35s are no different from those of its predecessor models.
 KS
Edited: 22 Oct 2008, 5:26 a.m.
▼
Posts: 325
Threads: 18
Joined: Jul 2006
I'd just hoped that HP had made some improvements in the integration routine in the nearly thirty years between the 34C and the 35S :(
I originally stumbled across this when I tried integrating FRAC(x) from 0 to 6.532151583 with various integrators, in the hope of getting something approximating pi, and got 1.738 with the built-in integrator instead. Not a very good approximation, is it?
Nyquist tells us we need to sample at least thirteen points. However, the first three approximations, using seven points, agree to ten digits, and there's no way to force the built-in integrator to use more points.
Incidentally, the TI-84+SE produces a reasonable answer.
▼
Posts: 260
Threads: 21
Joined: Nov 2006
Kiyoshi,
Up to this point, I have more experience with Palm math software than HP calcs, so I hope this post helps with your integration research.
First, EasyCalc has two built-in integration functions. It can approximate integrals using Simpson's Rule:
fsimps(min:max:func{:error{:params}})
fsimps(0:6.532151583:"fpart(x)":1E-6)
ans: 3.152648745845178
time: 1s (Palm TX)
, or using Romberg Integration:
fromberg(min:max:func{:degree{:params}})
(note: degree is an optional integer parameter that specifies the degree of the polynomial used to approximate the function. I experimented with various values and report the best below.)
fromberg(0:6.532151583:"fpart(x)":11)
ans: 3.141452262422343
time: 1s
...
Separately, another Palm program, PowerOne Graph, uses the Gauss-Kronrod method, which gave the best results.
fnInt ("expression"; "variable"; lower; upper)
fnInt("fPart(x)";"x";0;6.532151583)
ans: 3.141592653645
time: ~15 seconds.
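For comparison on a desktop, SciPy's quad wraps QUADPACK's adaptive Gauss-Kronrod routines, and its points argument lets you declare the known discontinuities up front. A sketch (SciPy is my choice here, not anything from the thread):

```python
import math
from scipy.integrate import quad

def frac(x):
    return x - math.floor(x)

b = 6.532151583
# Adaptive Gauss-Kronrod (QUADPACK). The `points` argument lists known
# trouble spots; the integrator then places subinterval boundaries there.
result, est_err = quad(frac, 0.0, b, points=list(range(1, 7)))
print(result)  # close to pi, by construction of the upper limit
```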
Best,
PG
Edited: 22 Oct 2008, 4:50 p.m.
▼
Posts: 325
Threads: 18
Joined: Jul 2006
I believe the TI calculators use Gauss-Kronrod.
Personally, I use the Gauss-Bond method, which is generally superior to Gauss-Kronrod, giving better results for the same amount of work (or the same results with less work). However, there is no such thing as the best method, and I know that.
As I said earlier, I am disappointed that HP hasn't made any improvements in the nearly thirty years between the 34C and the 35S.
Posts: 1,619
Threads: 147
Joined: May 2006
Why did you select 6.532151583?
What is the significance of 0.532151583, other than that it's (2*(Pi-3))^.5?
Thanks.
▼
Posts: 1,792
Threads: 62
Joined: Jan 2005
Quote:
Why did you select 6.532151583?
What is the significance of 0.532151583, other than that it's (2*(Pi-3))^.5?
Egan 
I think that was the point. The value of the integral is the sum of the areas of the right triangles, each of which has an area of (1/2)*(base)^{2}.
So, the first six triangles have a total area of 3. If the base of the last "partial" triangle is sqrt(2*(Pi-3)), then its area is Pi-3, making the total area equal to Pi, which was Kiyoshi's original intent that Pal was addressing.
 KS
Edited: 22 Oct 2008, 11:48 p.m.
▼
Posts: 1,619
Threads: 147
Joined: May 2006
I figured as much, but was curious if 6.532151583 had any other meaning.
Posts: 260
Threads: 21
Joined: Nov 2006
Karl or Egan,
Seeing Egan's conversion of the original 0.532151583 to (2*(pi-3))^0.5 reminded me of EasyCalc's "GuessIt" function, which tries to return a rational form of the last answer. However, EasyCalc fails to return a number in the above case.
Some searching through HP docs revealed the 50g has a few functions, such as >Q, >Qpi, and XQ, and at hpcalc.org I found the libs QPI and QXRACINE, all of which, I presume, attempt to do the same. However, none of these returned (2*(pi-3))^0.5 (otherwise I would not be typing this). The best I saw was 10237/19237.
My question is: did Egan find (2*(pi-3))^0.5 with textbook methods, or with software?
Thanks,
PG
Edited: 27 Oct 2008, 8:40 a.m.
Posts: 1,792
Threads: 62
Joined: Jan 2005
Kiyoshi 
I, too, am surprised that HP's Romberg method sometimes returns such a wrong answer for this problem using an approach you correctly call "naive". The results depend on the limits of integration: 6.4 fails, but 6.39 seems to work OK, if slowly and without pinpoint accuracy.
I also found that the 1993 TI-82 (using the Gauss-Kronrod method) handled this problem well:
fnInt(fPart(X),X,0,6.4)
yielded 3.080.
Between the discontinuities, this integrand is easy: it's a straight line with a slope of +1. Specifying limits that don't span a discontinuity will give perfect answers quickly.
A "smart" approach, upon encountering an ill-behaved portion of the integrand, would employ dense sampling there. HP's method, to my understanding, will double the number of samples over the entire integrand if more work needs to be done. That approach is logical, because if there is one problematic area, there just might be more of them.
Transformation of the integrand function to something "smoother" (if possible) and intelligent subdivision will help to give accurate answers that can be trusted.
 KS
Posts: 785
Threads: 13
Joined: Jan 2005
Exactly how did you do it?
▼
Posts: 107
Threads: 16
Joined: Jul 2007
Quote:
Exactly how did you do it?
Set fix 2 or 4, put the limits of integration on the stack (0 to 6.4), went into equation mode and entered FP(X), then told it to integrate with respect to X.
(I had to look up the frac function in the book; I couldn't find it on the calc!)
Did I miss something?
0 to 6.39, fix 2 takes significantly longer (~15s) to integrate and returns 3.06.
0 to 6.399, fix 3 returns 3.082 in about 2 minutes.
▼
Posts: 785
Threads: 13
Joined: Jan 2005
▼
Posts: 107
Threads: 16
Joined: Jul 2007
Quote:
??
select FN= X
OK, if you want to integrate a program:
LBL B
INPUT X
FP
RTN
Select program B as the function to be integrated: FN = B
Enter the limits of integration: 0, enter, 6.4
Integrate, with respect to X
▼
Posts: 785
Threads: 13
Joined: Jan 2005
Thanks
INPUT X was missing in my trial
&)=p8!%!%#¤%&* >:(
I even translated the 35s Quick Guide to Finnish
AND
I still have problems
maybe because during the years
for my integrating and solving I have used these:
15C, 18C, 19B/BII, 28C/S, 42s, 41C/CV/CX+Advantage+..
48s/SX+Solve, 48G/GX, 48gII/49G/g+/50G, 71B+Math+..,
BUT
the 32S/SII, 33s, 35s are slightly unfamiliar to me
Posts: 1,619
Threads: 147
Joined: May 2006
Quote:
0 to 6.39, fix 2 takes significantly longer (~15s) to integrate and returns 3.06.
0 to 6.399, fix 3 returns 3.082 in about 2 minutes.
My supercharged 15C results:
0 to 6.39, fix 2 returns 3.01 in 0 seconds.
0 to 6.399, fix 3 returns 3.103 in 2 seconds.
0 to 6.3999, fix 4 returns 3.0854 in 7 seconds.
0 to 6.39999, fix 5 returns 3.08004 in 10 < min < 20 (I forgot to watch)
Looks like the 35s has improved over the 15C in this area.
▼
Posts: 107
Threads: 16
Joined: Jul 2007
Quote:
My supercharged 15C results:
0 to 6.39, fix 2 returns 3.01 in 0 seconds.
0 to 6.399, fix 3 returns 3.103 in 2 seconds.
0 to 6.3999, fix 4 returns 3.0854 in 7 seconds.
0 to 6.39999, fix 5 returns 3.08004 in 10 < min < 20 (I forgot to watch)
Looks like the 35s has improved over the 15C in this area.
I just tried 0 to 6.39999, fix 5: the 35S failed at about 9 minutes with a "nonexistent" error.
▼
Posts: 1,619
Threads: 147
Joined: May 2006
Quote:
I just tried 0 to 6.39999, fix 5: the 35S failed at about 9 minutes with a "nonexistent" error.
A hardcoded timeout perhaps?
Posts: 260
Threads: 0
Joined: Oct 2008
Hi,
Interesting problem, due to the discontinuities that produce unexpected results in the integration algorithm.
Apparently, the "new" integration algorithm has the same difficulties on RPL calculators!
I just tried the suggested values on my HP28S and got the following results, which are close to the observations made on the RPN ones.
3: \<< FP \>>
2: { 0 6.4 } 2: 1.2800
1: .0001 \.S> 1: 0.0001 in less than 1s.
A bad (but fast) estimate: 1.28 instead of the expected 3.08!
But worse is yet to come:
3: \<< FP \>>
2: { 0 6.39 } 2: 3.0763
1: .0001 \.S> 1: 0.0003 in nearly 7 min.
A better estimate, but a bit long (nearly 7 min!).
3: \<< FP \>>
2: { 0 6.399 } 2: 3.0792
1: .0001 \.S> 1: 0.0003 in more than 32 min.
The best estimate, but far longer...
I am not convinced that the integration routine on RPL calculators is any different from the previous RPN version...
It took so much time that I had time to type in a Simpson's method program on my old SHARP PC-1211 (Approximate definite integral by Simpson's rule, P4A11, pp. 25-26, SHARP Pocket Computer PC-1211 Applications Manual, SHARP CORPORATION, Osaka, Japan, 1F10T(1010)).
By this method, on my old pocket computer, I get a reasonable answer, 3.072, in less than 3 min (100 divisions).
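The PC-1211's result is easy to reproduce: a composite Simpson's rule over [0, 6.4] with 100 subdivisions lands on 3.072. A sketch in Python (my own, not the SHARP program itself):

```python
import math

def frac(x):
    return x - math.floor(x)

def simpson(f, a, b, n):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

print(simpson(frac, 0.0, 6.4, 100))  # ~ 3.072
```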
Posts: 1,368
Threads: 212
Joined: Dec 2006
I have a little Mathematica program that does Romberg Integration the straight-up textbook way: repeated applications of the Trapezoid Rule followed by Richardson Extrapolation.
The routine spits out 1.28 for your problem!
The Romberg routine encoded in our HP calcs for the past few decades just can't cope with this problem... I wonder where it breaks down?
Les
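Les's observation is reproducible in a dozen lines. A minimal textbook Romberg (trapezoid doubling plus Richardson extrapolation, stopping when successive diagonal entries agree) quits immediately on this integrand; this is my own sketch, presumably similar in spirit to the Mathematica version:

```python
import math

def frac(x):
    return x - math.floor(x)

def romberg(f, a, b, tol=1e-10, max_level=20):
    # R[k][k] is the Richardson-extrapolated diagonal; stop when two
    # successive diagonal entries agree to within tol.
    h = b - a
    prev_row = [h * (f(a) + f(b)) / 2.0]
    for k in range(1, max_level):
        h /= 2.0
        # Trapezoid with doubled sample count: halve old sum, add midpoints.
        mids = sum(f(a + (2 * i + 1) * h) for i in range(2 ** (k - 1)))
        row = [prev_row[0] / 2.0 + h * mids]
        for j in range(1, k + 1):
            row.append(row[j - 1] + (row[j - 1] - prev_row[j - 1]) / (4 ** j - 1))
        if abs(row[-1] - prev_row[-1]) < tol:
            return row[-1]
        prev_row = row
    return prev_row[-1]

# The first two estimates both come out 1.28, so the loop stops at once,
# "converged" on the wrong answer.
print(romberg(frac, 0.0, 6.4))  # ~ 1.28
```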
▼
Posts: 325
Threads: 18
Joined: Jul 2006
The Romberg algorithm breaks down in this case because of the spurious agreement of the early results. Taking the "straight-up textbook way":
F_{0} = FRAC(0.0) = 0.0
F_{1} = FRAC(1.6) = 0.6
F_{2} = FRAC(3.2) = 0.2
F_{3} = FRAC(4.8) = 0.8
F_{4} = FRAC(6.4) = 0.4
T_{1} = (F_{0}+F_{4})*(6.4/2) = 1.28
T_{2} = (F_{0}+2F_{2}+F_{4})*(6.4/4) = 1.28
Since these two values agree, the algorithm "decides" there's no need to continue. Had it continued, it would have found that:
T_{3} = (F_{0}+2(F_{1}+F_{2}+F_{3})+F_{4})*(6.4/8) = 2.88
which doesn't agree with the previous values, and it would thus have gone on, using more and more function evaluations, until the estimates (aided by Richardson Extrapolation) converged. This is the behavior seen when the upper limit is changed to 6.3, 6.39, 6.399, etc.
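The three trapezoid sums above can be checked mechanically; a quick sketch (here the estimates are the trapezoid rule with 1, 2, and 4 panels):

```python
import math

def frac(x):
    return x - math.floor(x)

def trapezoid(f, a, b, panels):
    # Composite trapezoid rule with the given number of equal panels.
    h = (b - a) / panels
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, panels)) + f(b) / 2)

for panels in (1, 2, 4):
    print(panels, trapezoid(frac, 0.0, 6.4, panels))
# 1 and 2 panels both give ~1.28; 4 panels jumps to ~2.88
```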
In the case of the built-in schemes of all HPs (those that have built-in schemes, of course), RPL and RPN alike, a nonlinear transformation is performed in a usually successful effort to avoid the effects of periodic functions. In this particular case it fails, as the third estimate also agrees with the first two, "convincing" the scheme that it has found the correct answer.
I didn't set out to find this. As I stated earlier, I was simply comparing various schemes on an integral that should have evaluated to approximately pi. It was only when the built-in scheme came out wildly off that I dug deeper.
Of course, the "correct" way to numerically evaluate this integral, assuming one had to do it numerically, is:
6*INTG(0,1,FRAC(x),x) + INTG(0,0.4,FRAC(x),x)
exploiting the periodicity and at the same time removing the singularities.
▼
Posts: 1,792
Threads: 62
Joined: Jan 2005
Kiyoshi 
Nice explanation. I'm sure that this particular example was not anticipated by the original designers.
Quote:
In the case of the built-in schemes of all HPs (...), RPL and RPN alike, ...
There's at least one HP model with factory-provided numerical integration as a standard feature in its ROM, but it's not RPN/RPL, and the feature is not microcoded, either.
The AOS HP20S offers root-finding and Simpson's Rule integration as loadable AOS keystroke routines. The results get close to the correct answer with a substantial number of samples (even numbers only), and improve as more samples are taken, but it just can't seem to settle on the exact answer of 3.08.
10 samples: 3.41
20 samples: 3.20
64 samples: 2.88
100 samples: 3.072
200 samples: 3.072
650 samples: 3.08512820513
2000 samples: 3.07413333333
The HP20S root-finding and integration capability is unintuitive and somewhat impractical. Say, for example, the user programs a function to evaluate it at a few points, then decides to root-solve or integrate that function. Loading the "root" or "int" program will clear program memory, wiping out the program containing the user's function in the process.
I've wondered whether other algorithms, such as the Gauss-Kronrod that TI uses, could be adapted by HP while incorporating its traditional refinements (generally not evaluating at the endpoints, irregular sampling).
 KS