Uncertainty and accuracy for numerical integration



#27

A recent thread involved a redux of my "Area between two curves" challenge for RPL. James Prange posted an HP-48 solution in which the uncertainty of the input function was specified by setting the display mode. Valentin Albillo questioned the clarity of this, as opposed to the syntax of "INTEG" on the HP-71B Math ROM:

Quote:
If you want to specify a particular precision, why, just do it, include the desired precision as a parameter to the integration process, which is just what my BASIC version does. Remember, we're talking about someone who's trying to fathom what the code does, and changing display modes instead of explicitly specifying a desired accuracy seems much more obscure, arbitrary, and less clean to me.

To which James responded,

Quote:
Now there's a point that I heartily agree with you on! Assuming any association between the current display mode and the desired accuracy is pure nonsense to me.

Interestingly, in the 28 series, the accuracy is a parameter for the integral function, and even the order of the parameters makes more sense to me. Whatever possessed them to change this for the 48 series is beyond me.


A bit of background: The method of setting the display mode to specify the uncertainty of the input function to be integrated was introduced in the original implementation of INTEG on the HP-34C, and carried over to other RPN implementations of the INTEG function: the HP-15C, the HP-41C Advantage Pac, and the HP-32S. Although not entirely logical, it was expedient, and it saved a stack argument or a setting function.

This prompted me to consult the respective manuals to see how the input-function uncertainty for INTEG is specified on various models:

34C, 15C, 41C* Advantage Pac, 32S, 32SII, 33S, 48*, 49*:

Use FIX to specify uncertainty in a particular decimal digit; use SCI or ENG to specify uncertainty in a particular significant digit.

28C, 28S:

Specify uncertainty in a particular decimal digit as a numerical parameter to the INTEG function.

42S, 71B Math ROM:

Specify a relative uncertainty as a per-unit fraction of the function-value magnitude, as a numerical parameter to the INTEG function.

The latter (42S, 71B) approach seems to be the ideal one: flexible and scalable, without the possible discontinuities inherent in using SCI for relative uncertainty. Calculation of the estimated error is a bit more complicated.

-----------------------------

Excellent and detailed technical documentation on this topic is found in the HP-34C Owner's Handbook (Section 9 and Appendix B). The same material is essentially repeated in the HP-15C Owner's Handbook (Section 14 and Appendix E).

The HP-15C Advanced Functions Handbook (Section 2) contains further in-depth discussions of pitfalls and techniques for numerical integration.

This material makes one appreciate the diligent effort and high quality of HP's product documentation from the late 1970's to the mid-1980's. The same topics are not covered as thoroughly, or written as well, in the HP-28C and HP-42S manuals that followed shortly thereafter.

-- KS

Edited: 28 Nov 2005, 4:14 a.m.


#28

Hi, Karl:

Karl posted:

"Although not entirely logical, it was expedient and saved a stack argument or setting function."

    Yes, but using a stack argument or setting function (i.e.: STO UNC, say, similar to STO RAN#) would've been preferable in the HP-15C (and absolutely mandatory in all RPL models) to the utter preposterousness of having to set a display mode to specify the uncertainty in your integration, with the side effect of arbitrarily changing the user's display setting, which perhaps you can't determine in advance in order to restore it later, thus changing it for good, whether the user wanted it or not.

    That's called a side effect, though I'd call it collateral damage instead. Also, some people might be tempted to excuse this unacceptable behavior by saying that, by doing this, you'll be seeing your results in the proper display setting to look only at truly significant digits, so that SCI 5 would have you looking at a result more or less exact to the 5 digits after the decimal point you're seeing displayed.

    But adding insult to injury the HP-15C, for example, allows you to specify a negative number of digits for SCI, down to and including -6 (!!), which will affect the uncertainty you get.
    Now try and argue how the "SCI -6" mode you've just set up affects the display of the results you're seeing, and how that correlates with the number of guaranteed correct digits in the result you're just looking at.
    How many correct digits do you get with SCI -1 ? And with SCI -6 ? How many correct digits do appear in the displayed results in said "modes" ?

    Nonsense. INTEGRAL( a, b, uncertainty, f(x) ) or the equivalent in RPN or RPL is the way it should've been from the start, and certainly this abomination should never have been carried on to much more capable models.

Best regards from V.

#29

it should be mentioned that the supplied uncertainty is an error measure and is not guaranteed. that’s assuming the quadrature is implemented as some form of sampling – which most numerical ones are.

in fact, there should really be two parameters, an error measure and a confidence eg. the relative error or number of digits and the number of standard deviations for which this error applies.

the excitement of numerical quadrature on the 34c led the manual to make this mistake in its claim of an upper error bound. this slip-up was later corrected in the 15c manual, which states the error bound is only highly probable.

Edited: 28 Nov 2005, 3:41 p.m.


#30

hugh stated,

Quote:
it should be mentioned that the supplied uncertainty is an error measure and is not guaranteed. that’s assuming the quadrature is implemented as some form of sampling – which most numerical ones are.

The ultimate purpose of the user-supplied uncertainty/accuracy/tolerance/error (call it what you will) is to provide a quantifiable measure that indicates to the calculator how extensively it should perform its quadrature. The more samples it takes, the more accurate the estimate of the integral can become. However, if the incremental change in the integral result after additional intelligent sampling is well within the estimated error based upon the "accuracy" of the input function, then there's not much purpose in continuing the quadrature process further.

On the 34C, 15C, 32S, 32SII, (and 33S), the input-function accuracy is indeed represented as an uncertainty. The user specifies the decimal digit (FIX) or significant digit (SCI/ENG) that is to be treated as uncertain. This means that the rounded value of the function displayed in that format is the closest possible to the "actual" value, but that the uncertain digit could actually be up to "half a digit" larger or smaller. An upper bound of the estimated error is calculated accordingly, as described on pp. 245-248 in the HP-15C manual, and pp. 244-247 in the HP-34C manual.

On the 42S and 71B Math ROM, the user specifies an error or tolerance of the input function as a per-unit fractional value. This is probably the ideal solution.

Quote:
in fact, there should really be two parameters, an error measure and a confidence eg. the relative error or number of digits and the number of standard deviations for which this error applies.

Looks like three "error" parameters to me! :-)

Well, remember that the function being integrated might be an exact mathematical function that has no inherent measurement error or uncertainty. Moreover, to introduce sophisticated statistical analysis into the INTEG function would seem like overkill to me (and probably to most users).

Quote:
the excitement of numerical quadrature on the 34c lead the manual to make this mistake in its claim of an upper error bound. this slip-up was later corrected in the 15c manual which states the error bound is only highly probable.

Could you specify the page numbers in the respective manuals that contain the verbiage in question? Most likely, it was a case of HP's correcting imprecise verbiage, rather than retreating from unfounded claims.

Only if the integral is calculated intelligently, with samples taken in the right places, can the estimate of the integral be accurate. There is a good example of this on pp. 257-267 of the HP-34C manual, treating f(x) = x*e^(-x), as well as a technique (subdivision of the integral) describing how to compute it properly. The same example is given in the HP-15C manual, starting on p. 250, but the "computing technique" was exported to the Advanced Functions Handbook (AFH).

My favorite example is that of integrating

f(x) = sqrt(x)/(x-1) - 1/ln(x)
between 0 and 1.

It was originally presented in the HP Journal article from 1980 describing the INTEG function on the HP-34C. The example was later presented in the HP-15C AFH, the HP-71B Math ROM manual, and probably others...

Best regards,

-- KS


Edited: 29 Nov 2005, 5:38 a.m.


#31

Hi Karl,

this is an interesting task!

Quote:
My favorite example is that of integrating
f(x) = sqrt(x)/(x-1) - 1/ln(x)
between 0 and 1.

I tried it on several calcs I own:

The Simvalley GRC-1000 is a Chinese calc which is obviously a Casio design (but not available from them). The GRC-1000 fails on the boundary values, so I had to restrict the area to [1E-6,1-1E-6]. The integration function accepts an input parameter N which subdivides the interval into 2^N steps. So it's up to the user to guess the accuracy. There is a default for N, but it is not given in the manual. "Accuracy" is awful (never better than 2 digits). Moving the boundaries closer to [0,1] quickly produces "MA Error".

The Sharp EL-9200 has similar problems: I had to reduce the boundaries to [1E-9,1-1E-9], and the accuracy, even with 512 intervals, is just 2 digits.

My TI-84 Plus performs better. It does not stumble over the boundary values and has a separate argument for the tolerance with a default of 1E-5. The manual does not tell anything about the algorithm and what the tolerance exactly means. The TI is fast, even with a tolerance of 1E-8!

The TI Voyage 200 (and its siblings) has no means of selecting the desired accuracy; the manual tells us that 6 significant digits are the target. The boundaries [0,1] pose no problems. Calculation time depends on whether nInt or the integration symbol is used (even in approximate mode). The TI falls back to numerical integration in either case.

Both the HP 33s and HP 49g+ in approximate mode perform in a similar fashion; they probably share some code. They have no trouble with the boundaries and return the same results. The 49g+ is much faster than the 33s but still slower than the TI. A display setting of SCI 5 seems to produce the same results as a tolerance of 1E-6 for the TI. If I change the display setting to SCI 8, the 49g+ takes minutes to find a result.

In exact mode the HP 49g+ returns minus infinity.

Marcus


#32

Hi, Marcus:

Nice results. Do any of the machines tested come up with the correct 12-digit solution, namely 0.0364899739786+ ?

By the way, though this particular integral is nice and such, Mr. Kahan did not try the hardest examples at all. One of the hardest I know of would be this one:

  Integral between 0 and 1 of f(x) = Cos(Ln(x)/x)/x

= 0.3233674316 7777876139 9370087952 1704466510+

if in doubt, just try it and see how many correct digits you get.

Best regards from V.


#33

Hi Valentin,

Quote:
Do any of the machines tested come up with the correct 12-digit solution, namely 0.0364899739786+

I had an answer prepared but must have managed to *not* post it correctly :-(

Here are some results:

Model            Tolerance    Result
----------------------------------------------------
TI-84 Plus       1E-13        0.03648997398
TI-86            1E-13        0.0364899739787
TI Voyage 200    N/A          0.036489973974202
HP 48G / 49g+    SCI 12       0.0364899739831
HP 33s           SCI 9        0.03648997401
----------------------------------------------------

Italic digits are correct if rounding is taken into account.

The TIs are much, much faster than the HP models (about an hour for the 48G!). The manual for the Voyage 200 (which is essentially identical to the TI-89 and TI-92 Plus) claims a target accuracy of 6 digits. The results are better than that.

The Casio and Sharp models weren't better than 2 significant digits. They all failed on the boundary values. This holds for the Sharp PC-E500S with its "engineering software", too.

Marcus

Edited: 30 Nov 2005, 2:51 p.m.

#34

Marcus --

The ones that took 2^n or 512 steps (and failed at the endpoints) are probably implementing Simpson's Rule. That's what my old Casio fx-3600P does. Evaluation at endpoints is required by the rule, but is often problematic in practical applications.

I had benchmarked a few HP calculators for this problem, several years ago.

  • The less-advanced models (e.g., 15C, 34C, 32SII, 41C Advantage) give an answer outside the estimated solution error at FIX 5, but an answer within bounds for FIX 6.

  • The more-advanced models (42S, 28/48/49) give an answer within bounds for the equivalent or approximation of FIX 5, and a slightly more-precise answer for "FIX 6".

I did not have a 71B Math ROM at the time.

-- KS

#35

hi karl,


i am not interested in the debate of how the uncertainty is input to the machine, but if you think that setting FIX will be sure to give you the same number of accurate digits you are wrong. because it’s a sampling problem, and this is what i'm trying to point out in my comments.

i don’t like to use the word “certainty” or “uncertainty”. i prefer to refer to a measure of error and a measure of confidence. the problem with the word “certainty” is that it is sometimes interpreted as an error and sometimes as a confidence. really, there are three values at work here for the calculator's response: its approximation, its estimate of the error, and its confidence. i'm saying to have full control over the approximation, you need to have input both an error measure as a tolerance and also a confidence.

let me give you an example, consider the polynomial,

f(x) = [(256x-11)(256x-81)(256x-175)(256x-245)(32x-5)(32x-27)(2x-1)]^2

now, integral(f(x), 0, 1) = 762676910510443067+2224/4095

however, put this into an hp15c and select FIX 9 and integrate between 0 and 1, and you get 0.000000000 with an “error” of 0.000000001. this is clearly wrong. in fact it's completely wrong. what went wrong here?? the problem was that there is no means to input confidence and the calculator got itself pretty confident when it figured the function was zero at all sample points and stopped. remember i've already input the error tolerance as FIX 9. if i could enter a confidence as well, maybe i could persuade it to perform a few more samples, after which it would find some non-zeros and start the job proper.

also fyi, here are the references you asked for, the hp34c owners handbook, p209 (footnote)

Quote:
...the algorithm in your hp34c computes an “upper bound” on this difference, which is the uncertainty of the approximation. bla bla… This means that while we don’t know the exact difference between the actual integral and its approximation, we do know that the difference is no bigger than 0.0001

but later in the 15c owner’s handbook, p201 (similar footnote)

Quote:
… But the algorithm in the hp15c estimates an “upper bound” on this difference, which is the uncertainty of the approximation. bla bla... This means that while we don’t know the exact difference between the actual integral and its approximation, we do know that it is highly unlikely that the difference is bigger than 0.0001

happy hacking.


#36

hi hugh

do you have a shift key on your keyboard?? there's not enough effort in your typing, or formulation of your thoughts.

Quote:
i am not interested in the debate of how the uncertainty is input to the machine, but if you think that setting FIX will be sure to give you the same number of accurate digits you are wrong.

When did I say that (whatever it means)?

Quote:
i don’t like to use the word “certainty” or “uncertainty”. i prefer to refer to a measure of error and a measure of confidence. the problem with the word “certainty” is that it is sometimes interpreted as an error and sometimes as a confidence. really, there are three values at work here for the calculator's response: its approximation, its estimate of the error, and its confidence.

These sound like three output values.

Quote:
i'm saying to have full control over the approximation, you need to have input both an error measure as a tolerance and also a confidence.

If applied to the input function, it sounds like "fuzziness", e.g., "The actual value of the function will be within 0.7% of the calculated value, but I'm only 35% confident of that..." What is the calculator supposed to do with that?

Quote:

f(x) = [(256x-11)(256x-81)(256x-175)(256x-245)(32x-5)(32x-27)(2x-1)]^2
now, integral(f(x), 0, 1) = 762676910510443067+2224/4095

however, put this into an hp15c and select FIX 9 and integrate between 0 and 1, and you get 0.000000000 with an “error” of 0.000000001. this is clearly wrong. in fact it's completely wrong. what went wrong here??


Well, I evaluated the function, but didn't try to integrate it. The function values f(x) routinely exceed 10^10 (and even 10^12) over most of the region of integration, with roots f(x)=0 scattered at points in between. I couldn't predict what result would be obtained.

Quote:
the problem was that there is no means to input confidence and the calculator got itself pretty confident

I guess the calc came equipped with a lifetime supply of confidence, built right in! :-)

Quote:

...the algorithm in your hp34c computes an “upper bound” on this difference, ... we do know that the difference is no bigger than 0.0001

... the algorithm in the hp15c estimates an “upper bound” on this difference, ... we do know that it is highly unlikely that the difference is bigger than 0.0001


The statement in the 15C manual is more precise: it acknowledges that the upper-bound estimate of integral error is contingent upon the ability to evaluate the integral intelligently. They knew full well that poor selection of limits by the user could preclude that, as evidenced by the example of the integral of f(x) = x*e^(-x) from 0 to 1.

-- KS


#37

hello again,

perhaps i'm not being clear enough. i'm suggesting that any meaningful measure of the error in a sampling problem like numerical quadrature requires a measure of the confidence. to be clear, there are two numbers other than the resultant approximation. these are an estimate of the error and the measure of confidence.

now depending on your implementation, these two values can be either input, output or both. in the case of hp you have only an input of the error tolerance by indirect means and you couldn’t specify the confidence. consequently, this can lead to bogus results (eg. previous example).

alternatively, the confidence could be an output value. for example, my polynomial is zero at the first 7 sample points chosen by the hp algorithm. that’s not really much confidence in the function being zero everywhere. thus its statement of the error as 10^-9 is highly uncertain – but it doesn’t tell you this.

for my example, if you don't like the large numbers, you could simply scale the coefficients down by dividing them all by 100 or something like that to bring it into line.

i’d like to restate my main thesis for clarity:

the upper bound estimate given by the hp algorithm is not just contingent on the ability to evaluate the integrand but is MATHEMATICALLY MEANINGLESS without reference to a properly defined statistical confidence which is absent in their analysis.

in the hp qualification of their error estimate, what precisely does the phrase “highly unlikely” actually mean? in point of fact, you are never assured the accuracy of *any* of the resultant digits, irrespective of the given error estimate.

happy hacking.

Edited: 29 Nov 2005, 8:23 p.m.


#38

hello, hugh --

Quote:
perhaps i'm not being clear enough. i'm suggesting that any meaningful measure of the error in a sampling problem like numerical quadrature requires a measure of the confidence. to be clear, there are two numbers other than the resultant approximation. these are an estimate of the error and the measure of confidence.

now depending on your implementation, these two values can be either input, output or both.


Well, I think that you'd better choose whether confidence should be an input or an output. If it is an input, the user is virtually compelled to be a statistician just to compute a numerical definite integral. This would not be what H-P had in mind.

Quote:
in the case of hp you have only an input of the error tolerance by indirect means and you couldn’t specify the confidence.

The HP-42S and HP-71B Math ROM let the user specify a numerical value of calculated input-function tolerance as a per-unit fraction. The "actual" value of the input function is assumed with 100% confidence to be within this margin.

Quote:
my polynomial is zero at the first 7 sample points chosen by the hp algorithm. that’s not really much confidence in the function being zero everywhere. thus its statement of the error as 10^-9 is highly uncertain – but it doesn’t tell you this.

So that's where it came from! A contrived function that deliberately deceives the algorithm. Based on only seven samples of zero, what else would or could the algorithm tell you?

This illustrates why the Nyquist minimum-sampling rate for correct reconstruction of a waveform is more than twice the highest frequency component. What if a pure sinusoidal waveform were sampled exactly at each zero crossing? The calculated rms value and frequency content would be zero -- a clearly erroneous result.

Quote:
the upper bound estimate given by the hp algorithm is not just contingent on the ability to evaluate the integrand but is MATHEMATICALLY MEANINGLESS without reference to a properly defined statistical confidence which is absent in their analysis.

in the hp qualification of their error estimate, what precisely does the phrase “highly unlikely” actually mean? in point of fact, you are never assured the accuracy of *any* of the resultant digits, irrespective of the given error estimate.


Well, gosh. Is a correct answer -- which INTEG will give reliably -- generated by a reasonable and methodical process ever "mathematically meaningless"?

The HP-34C manual that you have contains much discussion about user responsibilities in posing the integration problem. If the function is flaky, the user should structure the problem so that the calculator can handle it properly.

What answer do you get if you integrate the function in sections, between each of its root values of zero? (Use SCI or ENG).

I got 2.80456*10^17 with estimated error 6.61428*10^11 for the first interval of 0 to 11/256 (= 0.0429688), using SCI 5.

That seems reasonable to me. Maybe you could do the other seven intervals...

-- KS


Edited: 30 Nov 2005, 3:11 a.m.


#39

hi karl,

you are right. the example is contrived to be exactly zero at the 7 sample points. if you break it up at all, even into two arbitrary halves, say, you will then get the correct answer from the calculator. this is what you have shown.

#40

Hello again, Valentin --

You stated,

Quote:
Yes, but using a stack argument or setting function (i.e.: STO UNC, say, similar to STO RAN#) would've been preferable in the HP-15C (and absolutely mandatory in all RPL models) to the utter preposterousness of having to set a display mode to specify the uncertainty in your integration, with the side effect of arbitrarily changing the user's display setting, which perhaps you can't determine in advance in order to restore it later, thus changing it for good, whether the user wanted it or not.

Certainly, from the standpoint of mathematical "directness", it is not ideal to equivalence a display mode to the uncertainty/tolerance/accuracy/error (call it what you will) of an input function. "Utter preposterousness", though, seems rather harsh.

Consider the practical implications of adding a third numerical input to a function on a calculator having a fixed 4-level stack and no menus. The argument would have to be placed and recalled from its own storage register, or on the stack, as you correctly stated.

  • If the argument were placed in its own storage register, a place on the keyboard would be needed for it. There just weren't any spare locations on the 34C or 15C keyboard. Moreover, what if the calculator did not prompt for the uncertainty value, or require it in order to complete the INTEG command? (Consider INTEG in a program.) The user would then run the risk of unwittingly running INTEG with a poorly-matched uncertainty value, thus obtaining inaccurate results, and being none the wiser.

  • If the argument were placed on the stack, that would make three input stack arguments, which would violate what might have been an HP 4-level RPN design principle -- "no function shall require more than two input arguments from the stack, or shall place more than two results onto the stack." Users may find it difficult to remember where the third argument should go -- in the z-level or x-level. Getting it wrong would cause incorrect results -- after waiting up to several minutes for them. Moreover, the third argument could not be preserved on the stack (as the two limits of integration are). FIX/SCI/ENG give a visible indication of input-function uncertainty, and the mode is static.

The "special storage register" approach would have one subsequently-developed analogy -- the "FN=" command on the 32S and 32SII. I admit to having solved or integrated (or attempted to) the wrong program after not having specified the correct one with "FN=". ("FN=" was necessary on the 32S/32SII when the identifier completing SOLVE or INTEG became the variable, not the program label, as on the 34C and 15C.)

I can think of only one exception to the "two stack inputs, two stack outputs max" rule on RPN models -- "STO g {matrix ID}" on the HP-15C, which has three stack inputs. I admit to having re-consulted the manual to use that one correctly...

Finally, if one numerical argument for input-function uncertainty were to be input, it ought to be the "ACC" value of the 42S, or the error tolerance of the 71B Math ROM. Since the 42S has a named and menu-displayed variable "ACC" for it, and the 71B uses argument-list checking, this input argument is handled well on those models.

The point of my short essay was that HP thought through the details and practical implications of doing things a certain way on their calculators, considering how the typical user would utilize them. They didn't make very many mistakes, and -- all things considered -- I don't think they made one here, either.

Quote:
But adding insult to injury the HP-15C, for example, allows you to specify a negative number of digits for SCI, down to and including -6 (!!), which will affect the uncertainty you get. Now try and argue how the "SCI -6" mode you've just set up affects the display of the results you're seeing, and how that correlates with the number of guaranteed correct digits in the result you're just looking at. How many correct digits do you get with SCI -1 ? And with SCI -6 ? How many correct digits do appear in the displayed results in said "modes" ?

I don't quite follow this. I see no mention of negative arguments to FIX/SCI/ENG in my HP-15C manuals. The user cannot successfully enter [SCI][CHS][6] or [SCI][6][CHS], and [SCI][I] with a negative value in the I-register seems to yield the equivalent of [SCI][0].

Quote:
...INTEGRAL( a, b, uncertainty, f(x) ) or the equivalent in RPN or RPL is the way it should've been from the start, ...

Well, I disagree, for the reasons stated above. (This applies only for the models without static menus or alphanumerics -- 34C, 15C, 32S, 32SII). Practical considerations must trump ultra-orthodox mathematical purity, in my book, as long as computational accuracy is not compromised. It's good to do things the "right and rigorous way" and to give users maximum flexibility wherever possible, but the tool must have the necessary capabilities in place first.

Quote:
...and certainly this abomination should never have been carried on to much more capable models.

On the 42S, it wasn't. But, the 42S had the named variables, static menus, and more computing power to incorporate "ACC" properly. I agree with you (and James Prange) that the 48/49 should have had "ACC" (or equivalent) as a stack argument for INTEG.

Best regards,

-- KS


Edited: 29 Nov 2005, 5:13 a.m.


#41

Hi, Karl:

Karl posted:

""Utter preposterousness", though, seems rather harsh."

    Not to me. Having a display mode control a parameter for numerical integration is utterly preposterous. Especially if you're using software not written by yourself that all of a sudden changes your display format for its own purposes and then doesn't restore it to the setting you preferred (because it can't). What a nuisance !
"If the argument were placed in its own storage register, a place on the keyboard would be needed for it. There just weren't any spare locations on the 34C or 15C keyboard.
    Yes, there were. In the case of the HP-15C, for instance, you could have STO INTEG, similar to STO #RAN, say. The combination STO INTEG wasn't used for anything, so it was free for this purpose. I'm sure this same sequence could be used for the HP-34C as well. Further, RCL INTEG would recall the uncertainty to the X-register. This would be even better when using INTEG programmatically, as you would be able to manipulate the uncertainty as a *value* with any arithmetic or mathematical functions, such as:
        X<=0?         enough accuracy ?
    GTO C yes, go on
    RCL INTEG no, recall uncertainty
    2
    / halve it
    STO INTEG specify new uncertainty
    GTO B recalculate the integral
    Just try to do the same with 'display modes', and then tell me which is the logical, natural way and which is the preposterous way.
"Moreover, what if the calculator did not prompt for the uncertainty value, or require it in order to complete the
INTEG command? (Consider INTEG in a program.) The user would then run the risk of unwittingly running INTEG with a poorly-matched
uncertainty value, thus obtaining inaccurate results, and being none the wiser."

    Karl, please, we're talking about *HP* calculators of the past, which were bought mainly by knowledgeable engineers and very bright people in general, what with those prices ! So, some brains on the part of the user were taken for granted. Your argument applies equally well to any and all functions requiring some inputs on the stack, and none of them prompted the user as far as I know. Say, for instance, rectangular <-> polar conversions. What if you forgot to include the theta value in Y ? Well, you wouldn't get your results, that's what, and you'd learn how to do it properly, and that's all.
    Frankly, I can't see what's difficult to remember about the stack sequence:
    T: uncertainty
    Y: b
    X: a
    being set up before pressing INTEG. Or any other order you deem more natural.
"If the argument were placed on the stack, that would make three input stack arguments, which would violate what might have been an HP 4-level
RPN design principle -- "no function shall require more than two input arguments from the stack, or shall place more than two results onto the
stack."

    Your research is incomplete or your memory is faulty; there are quite a number of exceptions, including the one you mention later. Want another very obvious one? That would be the results of SOLVE, if you care to have a look at the manuals: upon termination, SOLVE places the computed root in X, the previous estimate in Y, and the computed value of your function at the computed root in Z. That's *three* results for you.

    Also, what's the point in not using three levels for inputs to INTEG on the HP-34C or HP-15C ? Because none of the stack contents makes it alive to your f(x); as a matter of fact, INTEG and SOLVE fill up all 4 stack levels with the current value of X, for your f(x) to act upon it. So all previous contents are utterly lost. And it goes without saying that upon termination, the initial stack contents before calling SOLVE/INTEG can't be saved at all, not even one level, because your f(x) computation would be likely to use them all, right ?

    In other words, once you call INTEG/SOLVE, your original stack is absolutely obliterated, so you can use two registers for parameters, or three, or all four, it makes no difference, none will survive when your f(x) is called, not to mention when the process terminates.

"Users may find it difficult to remember where the third argument should go -- in the z-level or x-level. Getting it wrong would cause incorrect results"
    I can paraphrase your statement word for word using SOLVE instead, and a lot of other stack-using commands. In the world of stack-based RPN, there are a number of things you should remember, and you eventually do; we're talking engineers and power users here. And after all, why do you think HP included the "stack-syntax" of a number of commands (such as polar <-> rectangular) on the back labels ? Having the uncertainty in the stack would be consistent with this as well.
"They didn't make very many mistakes, and -- all things considered -- I don't think they made one
here, either."

    I fully disagree, they did. And they did carry this nonsense to INTEG in advanced RPL models ! Unbelievable !
" I don't quite follow this. I see no mention of negative arguments to FIX/SCI/ENG in my HP-15C manuals."
    Another example of either faulty memory or incomplete research. Have a look at the big footnote on the subject, which is at page 247 in the HP-15C's Owner's Handbook, at least it is in my Rev. G manual.
"It's good to
do things the "right and rigorous way" and to give users maximum flexibility wherever possible, but the tool must have the necessary capabilities in place
first."

    If by "the tool" you mean the HP-15C or HP-34C, they certainly had the necessary capabilities, namely a 4-level stack, which could easily accommodate three parameters for INTEG, and/or a prefix-capable keyboard, which could accommodate STO INTEG and RCL INTEG easily.
    And if you're referring to RPL models, with unlimited stack levels, it goes without saying that they could (and should!) too.
Best regards from V.

Edited: 29 Nov 2005, 6:20 a.m.


#42

Quote:
and then doesn't restore it to the setting you preferred (because it can't).

RCLF and STOF are here for that purpose. The following program should not alter your calc in any way.

<< RCLF 5 FIX STOF >>

Arnaud

#43

Hi, Arnaud:

"RCLF and STOF are here for that purpose. The following program should not alter your calc in any way. "

    You don't suppose for a moment that I don't know about them, do you ? The problem is that RCLF and STOF are particular functions present in a particular model or ROM, they can't be found on the HP-34C or HP-15C, which are the models where INTEG/SOLVE were pioneered, nor are there equivalent functions or capabilities.
Best regards from V.


#44

Sorry, I was confused and should have looked at the subject and the poster... I will delete the message if requested.

Arnaud


#45

Hi, Arnaud:

Arnaud posted:

"I will delete the message if requested."

    I think it would be better to leave it posted. Though not relevant for this topic, it might be the case that someone would find it useful to know that they can store and later recall some flags this way.
Best regards from V.
#46

Hi, Valentin --

I see your points, but remain unswayed in my convictions regarding this subject. I'll continue to discuss the specific points, but first, there's a more general point I'd like to make:

You and I both agree that the HP-15C was an utterly remarkable achievement of engineering design -- an extremely sophisticated tool that was nonetheless very accessible and straightforward to use. This was achieved, I believe, because the machine was so well organized, logical, and intuitive. You have emphasized the importance of intuitiveness in previous posts, and I heartily concur.

Even though the manual was extensive, once a user familiarized himself, it was seldom needed thereafter -- even for the advanced functions. The rare items that are not very intuitive on the HP-15C are the ones that get the most questions from users in this Forum, i.e.: "How do I get out of Complex Mode?" [g CF 8]; "How do I change the comma to a decimal point?" [ON/.]

That being said, there's always a clever way to do things, and sometimes these must be employed when the more-direct way isn't quite possible. However, doing things "cleverly" but not intuitively may lead users astray, and leave them frustrated or dismayed.

Keeping that theme in mind, here are my responses:

Quote:
Having a display mode control a parameter for numerical integration is utterly preposterous. ... What a nuisance !

That's a better term, I think: "nuisance", or "annoyance". Display mode is of interest for interactive work, and the user can reset it if needed. It's nice when this is a 3-keystroke combination (e.g., HP-15C) instead of 4 (HP-32SII) or 5 (HP-42S). I use FIX/SCI/ENG frequently.

Quote:
In the case of the HP-15C, for instance, you could have STO INTEG, similar to STO #RAN, say. The combination STO INTEG wasn't used for anything, so it was free for this purpose.

OK, here's where "intuitiveness versus cleverness" comes into play.

  • STO RAN # is intuitive: "Store x-register value as the seed for next random number".
  • STO RESULT is intuitive: "Store matrix defined by descriptor in x-register to the designated 'result' matrix."
  • STO n, STO I, STO (i), STO MATRIX {}, etc. are all intuitive.

Now, STO Syx ("INTEG") would be, what -- "Store the calculated integral"? (Probably wouldn't want to unwittingly use that stored value as the input-function error for the next integral.) What if the user mistakenly hit [STO][INTEG], then wondered why subsequent calculated integrals were so coarse?

Sure, STO INTEG would have been a clever way to provide a storage location for a parameter, but it would also provide a potential pitfall for users, even the knowledgeable and very bright ones.

How about placing the tolerance/accuracy on the stack? Well, some users may not be particular about specifying the value, preferring to use a reasonable default. That approach won't work if the value is taken from the stack, because its omission will almost certainly cause garbage results.

However, a FIX/SCI/ENG display setting is always in effect, and is plainly visible in interactive ("RUN") mode. Settings favored by most users will yield a reasonably accurate estimation of the integral, when interpreted as an input-function uncertainty. (For example, FIX 5 says that the uncertainty of f(x) is less than 0.000005 at all points.)

Quote:
Say, for instance, rectangular <-> polar conversions. What if you forgot to include the theta value in Y ? Well, you wouldn't get your results, that's what, and you'd learn how to do it properly, and that's all.

Again, intuitiveness: Any competent user of R->P or P->R conversions knows that they require two inputs and provide two outputs. A diagram and stack table is provided on the back plate to help users get the order correct. The tables also provide references for multi-output functions that are not entirely intuitive.

About those tables on the back plate: Wouldn't it make more sense for "L.R." to also return the correlation coefficient "r" (instead of "y-hat, r" doing so)? But that would return three results for a function that required no stack inputs, pushing more of the user's data off the stack. And, notice that no function requires three stack inputs or produces three stack outputs. A design principle, maybe?

Quote:
Frankly, I can't see what's difficult to remember about the stack sequence:

T: uncertainty
Y: b
X: a

...

Also, what's the point in not using three levels for inputs to INTEG on the HP-34C or HP-15C ? Because none of the stack contents makes it alive to your f(x); as a matter of fact, INTEG and SOLVE fill up all 4 stack levels with the current value of X, for your f(x) to act upon it. So all previous contents are utterly lost. And it goes without saying that upon termination, the initial stack contents before calling SOLVE/INTEG can't be saved at all, not even one level, because your f(x) computation would be likely to use them all, right ?


The above is not entirely correct. The intuitive user procedure that was implemented (and printed on the keyboard face as Syx) is as follows:

lower limit (a)

[ENTER]

upper limit (b)

[INTEG] {user-fcn label}

yielding stack contents of

level   input   output
  t     --      a
  z     --      b
  y     a       integ. error
  x     b       integral

Although the INTEG and SOLVE functions do overwrite a and b on the stack, INTEG restores them upon completion. This is handy for reference, and for proceeding to the next part of a subdivided integral. There's no room to retain the user-function accuracy on the stack.

Quote:
SOLVE places the computed root in X, the previous estimate in Y, and the computed value of your function at the computed root in Z. That's *three* results for you.

OK, there's an example I'd forgotten about. The "extra" values in Y and Z are useful for evaluating the quality of the solution in X. Since SOLVE is an advanced function that overwrites the input stack, it is not an issue to return three arguments, and it needs only two input arguments, in either order. "Two input/output arguments max" was an apparent principle, not a rule, and I'm sure there are other exceptions. I didn't scour the manuals to find them.

Quote:
Have a look at the big footnote on the subject (regarding negative arguments to SCI and ENG), which is at page 247 in the HP-15C's Owner's Handbook, at least it is in my Rev. G manual.

Ah-ha! I looked right over it earlier, but there it is on p. 247 in my Rev. C and Rev. G manuals. "Big footnote"? That's fine print, man! :-)

There may be unconventional applications for the capability of negative-valued indirect arguments to SCI and ENG for purposes that have no application for display format, but I think it's completely counterintuitive and minimally useful, and should have been omitted.

Quote:
If by "the tool" you mean the HP-15C or HP-34C, they certainly had the necessary capabilities, namely a 4-level stack, which could easily accommodate three parameters for INTEG, and/or a prefix-capable keyboard, which could accommodate STO INTEG and RCL INTEG easily.

I did, but I firmly believe in this colloquial expression, which is probably universal:

"Just because you can, doesn't mean you should."

Due to potential pitfalls of implementation that I described, for those models (as well as for the 32S and 32SII), the numerical "accuracy" parameter would not have been a sound idea.

Quote:
And if you're referring to RPL models, with unlimited stack levels, it goes without saying that they could (and should!)...

I agree on that point. Different story for the RPL-based models, the HP-71B Math ROM, and the HP-42S, which had better-suited operating paradigms for that purpose.

Best regards,

-- KS


Edited: 30 Nov 2005, 2:13 a.m.


#47

Hi again, Karl:

Just a few comments, as it seems we agree to (politely) disagree:

Karl posted:

"I see your points, but remain unswayed in my convictions regarding this subject."

    The bad thing about personal convictions is that they aren't necessarily backed up by rational, objective facts and can't be used as arguments in any rational discussion.
"Even though the manual was extensive, once a user familiarized himself, it was seldom needed thereafter -- even for the advanced functions."
    I disagree. I find myself having to reach for it continuously to find out whether a particular matrix operation leaves its results in the result matrix or not and, if so, whether said result matrix can be one of the operands, and if so, whether the allowed operand is the matrix in X or the matrix in Y or both. Not to mention the effect of such convoluted matrix operations as the ones invoked by Cy,x and Py,x, which perform matrix manipulations intended to help when dealing with complex matrices. Also, STO and RCL by row/col are usually difficult to remember, etc, etc.

    Trust me on this, Karl, I can issue 15 questions about HP-15C operations right now, and you'd need to have a look at the manual to answer most of them, despite the 15C's alleged "intuitiveness".

"The rare items that are not very intuitive on the HP-15C are the ones that get the most questions from users in this Forum, i.e.: "How do I get out of
Complex Mode?" [g CF 8]; "How do I change the comma to a decimal point?" [ON/.]"

    Nonsense. These are the kind of questions that absolute newbies to the HP-15C (or Voyagers in general) would ask. The real difficult questions never arise, because these newbies can never think of them in the first place, and expert users actually do RTFM instead of asking them online.
"Now, STO Syx ("INTEG") would be, what -- "Store the calculated integral"?"
    I'm more in favor of placing the uncertainty in the stack than of specifying it with a STO INTEG command. I only mentioned this as a refutation of your statement that there were no free locations on the keyboard for such a function, remember ? You were wrong, right ?

    Nevertheless, a hypothetical STO INTEG command would be much more intuitive than, say, Cy,x or Py,x applied to matrices, to name a few non-intuitive matrix key sequences.

"Ah-ha! I looked right over it earlier, but there it is on p. 247 in my Rev. C and Rev. G manuals. "Big footnote"? That's fine print, man! :-) "
    Who's talking about font size ? A five-line footnote is a big footnote by every standard.
"There may be unconventional applications for the capability of negative-valued indirect arguments to SCI and ENG for purposes that have no
application for display format, but I think it's completely counterintuitive and minimally useful, and should have been omitted."

    But they're there, and it adds a good measure of counterintuitiveness, unexpected side effects, and sheer "WTF?" to the whole business of specifying uncertainty by setting display modes. I'm sorry if this spoils your point a little, but that's the way it is implemented, and there's no point trying to dismiss reality when it doesn't fit with one's convictions.
That said, it's been an interesting thread and I appreciate your point of view, of course.

Thanks and best regards from V.


#48

... but I have sound counterpoints/rebuttals for every one of your points in the last post! :-)

OK, as you wish -- no further discussion about it. I still support H-P's approach for input-function tolerance to INTEG in the 34C/15C/41C Advantage as a practical and adequate approach, given the limitations of those devices.

Best regards from Karl

#49

With Classic RPN's four-register stack, an extra parameter for the
integration command would be a problem, so checking the display
mode to set an error tolerance has some excuse.

But what about the RPL models? They can have a very deep stack, so
it seems to me that there should be no real problem with making
the error tolerance a parameter for the integration command. A
complication here is that with RPL, integration can be either
numeric or symbolic, and with symbolic integration, no error
tolerance is needed. But any RPL command needs the same number of
arguments, regardless of how it's being used, and the development
team chose to use just one command for both numeric and symbolic
integration.

I looked in Bill Wickes's Insights books for
some clues about why things are the way they are.

In the 28 series, for symbolic integration, the arguments are as
follows:

3: integrand         (an algebraic)
2: variable name     (a name)
1: degree            (a real integer)

But for numerical integration, the arguments are like this:

3: integrand         (a procedure: algebraic or program)
2: { name lower upper }  (a list)
1: accuracy factor   (a real number)

Or, as a variation of numeric integration:

3: integrand         (a program)
2: { lower upper }   (a list)
1: accuracy factor   (a real number)
In that last example, instead of looking for a name for the
variable of integration, the values are kept in stack level 1, so
it's faster.

Note that integration in the 28 series always takes three
arguments, although the arguments vary, depending on what you're
doing.

For the 48 (and 49) series, this was changed. The arguments are
always the same, and the integrand can only be an algebraic; it
can't be a program, and instead of supplying the accuracy factor
as a parameter in the arguments, the current display mode is
checked. The arguments are as follows:

4: lower             (a real number)
3: upper             (a real number)
2: integrand         (an algebraic)
1: variable name     (a name)

Personally, I prefer the way it's done on the 28 series.

Regarding the error tolerance, quoting Bill Wickes from HP 48 Insights Part II: Problem-Solving Resources:

Quote:
The numerical evaluation of [integral symbol] produces a series of
increasingly accurate estimates of the integral, derived from
sampling intervals that are halved at each iteration. The process
terminates when three successive iterations differ by an amount
less than an error tolerance that you specify, or after a maximum
of sixteen iterations have produced no apparent convergence (at
this point the integrand has been evaluated 65535 times). The
error tolerance is determined by the current real number display
setting: n FIX (or SCI or ENG) specifies an error
tolerance of [Epsilon] = 10^(-n). This
in turn relates to the probable error in the numerical integral:

error [less than or equal] [Epsilon] [integral] |f(x)| dx.


Quote:
If the integration results have not converged after sixteen
iterations, the number -1 is stored in IERR. The value returned to
the stack is the last estimation of the integral.

I presume that when numerical integration is successful, the value
stored in IERR is based on how the estimates of the integral are
converging, but I don't know the algorithm used for this.

I think it worth noting that having three successive iterations
differ by an amount less than the specified error tolerance
doesn't guarantee that the last estimate differs from the actual
value of the integral by less than the specified error tolerance.

By the way, scans of the Insights books, as well as HP 41/HP 48
Transitions, are available on the latest MoHPC CD set / DVD. For
anyone who wants a deeper look into why RPL is the way it is and how
to use it effectively, I highly recommend the Insights books. Having
never used a 41 series, I haven't read Transitions, but presumably
it's written for those accustomed to Classic RPN who want to learn
RPL.

Regards,
James


#50

James --

Thank you for the informative response regarding integration tolerance, and for clearing some things up about numerical integration on RPL-based machines. Your response would have been helpful for my first RPL "challenge" in late 2003, when I didn't know how to integrate a function defined as a program (rather than as an expression) on an HP-48G. I got only one uninformative reply.

You say that it is possible on the HP-28C/S, but not on the 48/49.

Quote:
With Classic RPN's four-register stack, an extra parameter for the integration command would be a problem, so checking the display mode to set an error tolerance has some excuse.

There is room on the classic 4-level RPN stack for an input accuracy argument, and it wouldn't matter even if all four stack levels were filled with input variables. This is because INTEG fills the stack with the present value of the input variable to the user-defined program, which might use all four stack levels.

However, I believe that to include the function accuracy on the RPN stack as an input would have been unsound, due to potential pitfalls to the user. (Valentin disagreed with me, but I am unconvinced. You may read the posts in this same thread for this discussion.)

The fundamental difference between the RPL stack and the RPN stack (besides depth) is that RPL stack objects have specific object types, which the calc can check. Thus, if the user omits the function tolerance on the lowest HP-28 stack level as input to INTEG, the calc knows to return an error (or, alternatively, to use a default tolerance). INTEG on the RPN-based 34/15/41/32/33 models, however, takes only floating-point inputs from the stack. If the tolerance is not placed on the stack, unrelated stack contents would be used as input arguments, and possibly the stack contents would be used for the wrong input variables.

I also prefer the HP-28 stack-argument syntax for INTEG; menus are better to use interactively on the 48/49 models.

-- KS


#51

Hi Karl,

Quote:
Thank you for the informative response regarding integration
tolerance, and for clearing some things up about numerical
integration on RPL-based machines.

You're welcome.
Quote:
Your response would have been helpful for my first RPL "challenge"
in late 2003, when I didn't know how to integrate a function defined
as a program (rather than as an expression) on an HP-48G. I got only
one uninformative reply.

I guess that I thought that Veli-Pekka's response was informative
and helpful; he was referring to "user-defined functions". Or
maybe I didn't notice your question, or maybe I noticed but didn't
find the time to reply.
Quote:
You say that it is possible on the HP-28C/S, but not on the 48/49.

Unfortunately (in my opinion), that seems to be the case, at least
for numerical integration. The 49 series adds a few more commands
for integration, but I think for symbolic integration only.
Although the integrand can't be a program on the 48/49, this can
be worked around. Let me review some things about UserRPL.

Things that an RPL model can do can be called "operations".
Examples of operations include disabling last stack saves, the
ROOT command, and the SIN function. Disabling last stack saves can
be done from menu 69 (so it is an operation), but can't be done
from within a UserRPL program, so it's not a "command" (or a
"function"). ROOT can be used (with postfix notation) in a
program, so it's a "command" (as well as an "operation"), but it
can't be used in an algebraic object, so it's not a "function".
SIN can be used in an algebraic with the prefix syntax SIN(X), so
it's a "function", and it can also be used in a program using the
postfix syntax X SIN , so it's also a "command" (as well as an
"operation"). All functions can also be used as postfix commands,
although the documentation calls them functions. Some functions
require the prefix syntax, such as SIN(X), when used in an
algebraic; others require an infix notation, such as A+B, when
used in an algebraic.

But what to do when an algebraic is wanted, or even required (such
as for the integrand argument for integration), but a command that
we'd like to use in the algebraic isn't a function? We can work
around the limitation by making our own UDF (user-defined
function) that we can use with prefix syntax, such as F(x,y,z),
within an algebraic, or for that matter, with postfix syntax such
as x y z F.

When used as a postfix command, a UDF takes its arguments from
the stack and binds the values as named local variables that can
be used in the "defining procedure", which is either an algebraic
or a program. When used within an algebraic, a UDF takes its
arguments from a comma-separated (or period-separated, if the
"fraction mark" is a comma) or semicolon-separated, parenthetical
argument list instead of from the stack.

The following assumes that the "fraction mark" is a period.

Suppose that I want the (3-dimensional) distance between points
[x1 y1 z1] and [x2 y2 z2]. I can write a UDF using an algebraic
procedure as follows:

%%HP: T(3)A(D)F(.);
\<<
\-> x1 y1 z1 x2 y2 z2
'\v/(SQ(x2-x1)+SQ(y2-y1)+SQ(z2-z1))'
\>>
'F3DISTA' STO

Now, either 'F3DISTA(0,0,0,3,4,12)' EVAL

or the sequence 0 0 0 3 4 12 F3DISTA

returns 13.

I could do much the same using a program for the defining
procedure:

%%HP: T(3)A(D)F(.);
\<<
\-> x1 y1 z1 x2 y2 z2
\<<
x2 x1 - SQ y2 y1 - SQ + z2 z1 - SQ + \v/
\>>
\>>
'F3DISTP' STO

Suppose that I want to sum the cubes of the numbers within a
range. I can write a UDF as follows:
%%HP: T(3)A(D)F(.);
\<<
\-> l h
\<<
0.
l h
FOR n
n 3. ^ +
NEXT
\>>
\>>
'FSCUBES' STO

Now 'FSCUBES(2,5)' EVAL returns 224. Of course the sequence 2 5
FSCUBES also returns 224.

Here's a UDF that takes the square root of a number less than 1,
and squares a number equal to or greater than 1, although I don't
know why anyone would want to do that.

%%HP: T(3)A(D)F(.);
\<<
\-> x
\<<
x
DUP 1.
IF
<
THEN
\v/
ELSE
SQ
END
\>>
\>>
'FSQSQR' STO

Now 'FSQSQR(.25)' EVAL returns .5, and 'FSQSQR(5)' EVAL returns 25.
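
Such a UDF can then be used where only an algebraic (not a
program) is accepted. For example (my own aside, using the 48's
algebraic integration syntax, with the display mode determining
the uncertainty as discussed earlier in this thread):

'\.S(0.,2.,FSQSQR(X),X)' \->NUM

should return approximately 3 (that is, 2/3 + 7/3).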

ROOT seems to be a particularly difficult case. The arguments for
ROOT are as follows:

3: procedure (program or algebraic)
2: global name
1: guess or guesses (one number, or a list of 1, 2, or 3 numbers)

At first glance, it looks as if we could simply define a
function, named say 'FROOT', that we could use in an algebraic
using the syntax FROOT(procedure,global name,guess). The first
complication is that a prefix function argument list can't include
a program, so the first argument will have to be an algebraic (we
may have to make another user-defined function for the first
argument). The next is that a prefix function argument list can't
include a quoted name, so we have to be sure that the name doesn't
already exist (otherwise the variable's contents, instead of its
name, would be used). Finally, the last argument can be 1, 2,
or 3 guesses, and list delimiters can't be used within a prefix
function argument list.

To work around that last argument, I'll make three separate UDFs:

%%HP: T(3)A(D)F(.);
\<<
\-> p n g
\<<
p n g
ROOT
\>>
\>>
'F1ROOT' STO

%%HP: T(3)A(D)F(.);
\<<
\-> p n g1 g2
\<<
p n
g1 g2 2. \->LIST
ROOT
\>>
\>>
'F2ROOT' STO

%%HP: T(3)A(D)F(.);
\<<
\-> p n g1 g2 g3
\<<
p n
g1 g2 g3 3. \->LIST
ROOT
\>>
\>>
'F3ROOT' STO

Now, suppose that I want to find the root near 3 of the function
SIN(X), in RAD mode. I can do any of the following:

RAD HOME 'X' PURGE 'F1ROOT(SIN(X),X,3)' EVAL

RAD HOME 'X' PURGE 'F2ROOT(SIN(X),X,3,4)' EVAL

RAD HOME 'X' PURGE 'F3ROOT(SIN(X),X,2,3,4)' EVAL

Any of the above will return 3.14159265359. Of course, if the
calculator is already in Radians mode, then RAD can be omitted,
and if I know that 'X' won't be found on the current path, I can
omit the HOME 'X' PURGE part of the sequence. Note that a new 'X'
will be stored in the home directory.

In the above, I stored the UDFs as global variables, but note that
I can also store them as local variables. For example:

%%HP: T(3)A(D)F(.);
\<<
RCLF PATH
RAD HOME
'X' PURGE
\<<
\-> p n g1 g2 g3
\<<
p n
g1 g2 g3 3. \->LIST
ROOT
\>>
\>>
\-> f3root
\<<
'f3root(SIN(X),X,2,3,4)' EVAL
\>>
3. ROLLD
EVAL
STOF
\>>
'F3LROOT' STO

Here, the UDF is f3root. F3LROOT returns 3.14159265359 and
overwrites any variable 'X' in the home directory, but it also
restores the original directory and angular mode (along with the
rest of the flags).

In the 49 series, at least with recent ROMs, the PUSH and POP
commands can be used to save and restore the current flags and
directory.
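
For example, the F3LROOT program above could be simplified on a
49 as follows (a sketch only: 'F3PROOT' is just a name that I
made up, and since PUSH and POP keep the saved settings
internally rather than on the data stack, the root is left
undisturbed):

%%HP: T(3)A(D)F(.);
\<<
PUSH
RAD HOME
'X' PURGE
\<<
\-> p n g1 g2 g3
\<<
p n
g1 g2 g3 3. \->LIST
ROOT
\>>
\>>
\-> f3root
\<<
'f3root(SIN(X),X,2,3,4)' EVAL
\>>
POP
\>>
'F3PROOT' STO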

Of course, I don't have to store all of the arguments to a command
in the UDF as local variables, although at least one argument has
to be a local variable for the program to be a UDF. For example:

%%HP: T(3)A(D)F(.);
\<<
\-> g
\<<
'SIN(X)' 'X' g
ROOT
\>>
\>>
'F1SINROOT' STO

Now, 'F1SINROOT(3)' EVAL returns 3.14159265359,

'F1SINROOT(6)' EVAL returns 6.28318530718,

'F1SINROOT(9)' EVAL returns 9.42477796077,

and so on. All of these overwrite any variable 'X' in the current
directory. Since 'X' is quoted inside the program rather than
passed in the argument list, ROOT receives the name itself, so
the HOME 'X' PURGE sequence isn't needed.

I hope that helps.

I wrote above that disabling last stack saves isn't a command, but
on the 49 series, the KEYEVAL command offers a way to make many
operations programmable. Exceptions are operations that require
holding down the ON key while pressing another key. I've sometimes
wished for KEYEVAL on the 28 and 48 series.
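
(For those unfamiliar with it: KEYEVAL takes a key location
number in the same row-column.plane form used for key
assignments with ASN, and evaluates that key's definition just
as if the key had been pressed.)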

Of course, SysRPL makes possible many things, and SysRPL entry
points can be called from UserRPL with the commands SYSEVAL,
LIBEVAL, and FLASHEVAL. But use these with care, as it's easy to
cause a memory clear by improper use.

Quote:
There is room on the classic 4-level RPN stack for an
input accuracy argument, and it wouldn't matter even if all four
stack levels were filled with input variables. This is because
INTEG fills the stack with the present value of the input variable
to the user-defined program, which might use all four stack
levels.

However, I believe that to include the function accuracy on the
RPN stack as an input would have been unsound, due to potential
pitfalls to the user. (Valentin disagreed with me, but I am
unconvinced. You may read the posts in this same thread for this
discussion.)


Okay; my 12C and 16C don't seem to include integration and I
really don't care to read all of the documentation for other
Classic RPN models, so I shouldn't've even commented on that
issue.
Quote:
The fundamental difference between the RPL stack and the RPN stack
(besides depth) is that the RPL stack objects have specific object
types, which the calc can check.

And in fact, every built-in UserRPL command that requires
arguments first checks that the required number of arguments is
available, then checks the types of those arguments and proceeds
appropriately. If arguments that it can use aren't available,
then it errors out.
Quote:
Thus, if the user omits the function tolerance on the lowest HP-28
stack level as input to INTEG, the calc knows to return an error
(or alternatively to use a default tolerance). INTEG on the
RPN-based 34/15/41/32/33 models, however, take only floating-point
inputs from the stack. If the tolerance is not placed on the
stack, unrelated stack contents would be used as input arguments,
and possibly the stack contents would be used for the wrong input
variables.

Which brings up another difference. With Classic RPN, all of the
stack registers are always available; even after clearing, they're
zero-filled. On an RPL model, after clearing the stack, the stack
levels simply don't exist, and attempting to use a non-existent
level will cause an error.
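
For example, after CLEAR, the sequence 1 + errors with "Too Few
Arguments" on an RPL model, whereas on a Classic RPN model, 1 +
would simply add the zero in the cleared Y register, returning 1.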

Another difference is that typically, more stack levels are
displayed on the RPL models. As far as I know, the Classic RPN
models display at most two registers, so what's in the other
registers isn't so obvious.

Regards,
James

Edited: 18 Dec 2005, 7:54 a.m.


#52

Hi, James --

Thank you for making the effort to post such a detailed response. I'll quite likely try a few of those ideas, for the sake of experimentation...

Regards,

-- KS

