# HP Forums

Full Version: Estimating Accumulated Rounding Errors

Hello all,

I am working on yet another Pi program. In the past, to eliminate accumulated rounding errors, I just added extra guard digits. However, I'd like to be able to precompute exactly how many I need. I can compute the number and type of operations with great certainty, and without much effort I can produce a table, at regular intervals, of the number of required guard digits. But I'd like to explore a purely mathematical approach.
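To make the effect concrete, here is a quick sketch (using Python's decimal module as a stand-in for a calculator's reals; this is an illustration, not the Pi program itself) showing that the accumulated error grows roughly linearly with the operation count, so the guard digits needed grow like log10 of the number of operations:

```python
# Illustration only: accumulated rounding error vs. operation count,
# using Python's decimal module as a stand-in for a calculator's reals.
from decimal import Decimal, getcontext
import math

def accumulated_error(ops, digits):
    """Sum 1/3 'ops' times at 'digits' precision; return |error| vs exact."""
    getcontext().prec = digits
    third = Decimal(1) / Decimal(3)      # already rounded to 'digits' digits
    total = Decimal(0)
    for _ in range(ops):
        total += third                   # each addition may round once more
    getcontext().prec = digits + 20      # evaluate the reference more finely
    exact = Decimal(ops) / Decimal(3)
    return abs(total - exact)

for n in (10, 100, 1000):
    # error ~ n ulps, so guard digits needed ~ ceil(log10(n))
    print(n, accumulated_error(n, 12), math.ceil(math.log10(n)))
```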

I have a few ideas, but I'd also like your thoughts as well.

Thanks.

Edited: 15 Aug 2012, 12:45 a.m.

- Pauli

You might find the error discussion in the appendix of the HP-15C Advanced Functions Handbook of some use. (And certainly of more use than my thoughts on the topic!)

don't estimate the errors - know them! or rather, know their upper bounds.

i suggest the field of validated computing, which i have been looking at for
some time. This is the use of interval arithmetic to bound your computations.

http://en.wikipedia.org/wiki/Interval_arithmetic

you have to define a new number class with the usual operators, which holds an
interval [Inf, Sup], where your true real value lies somewhere in Inf <= x <=
Sup. The operators maintain the error window whenever you perform
arithmetic. If you have, say, a Taylor series for arctan, you also have to
expand the window by the maximum error of your truncation, and so on.
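The skeleton might look something like this (a toy sketch in Python with exact rational endpoints, so directed rounding doesn't arise; not the code behind exact.exe, which has to round outward at every step):

```python
# Toy sketch of validated interval arithmetic (illustration only).
# Fraction endpoints are exact, so no outward rounding is needed here.
from fractions import Fraction

class Interval:
    def __init__(self, inf, sup):
        self.inf, self.sup = Fraction(inf), Fraction(sup)

    def __add__(self, other):
        return Interval(self.inf + other.inf, self.sup + other.sup)

    def __mul__(self, other):
        products = [a * b for a in (self.inf, self.sup)
                          for b in (other.inf, other.sup)]
        return Interval(min(products), max(products))

def arctan_enclosure(x, terms):
    """Truncated Taylor series for arctan, widened by the first dropped
    term, which bounds the truncation error for |x| <= 1 (alternating)."""
    s = Interval(0, 0)
    for k in range(terms):
        t = Fraction((-1) ** k) * x ** (2 * k + 1) / (2 * k + 1)
        s = s + Interval(t, t)
    err = abs(x) ** (2 * terms + 1) / (2 * terms + 1)
    return Interval(s.inf - err, s.sup + err)

# pi = 4*arctan(1): the true value is guaranteed to lie inside the bounds
enc = arctan_enclosure(Fraction(1), 50)
print(float(4 * enc.inf), "<= pi <=", float(4 * enc.sup))
```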

some years ago i successfully built a calculator around this idea.
(windows exe) www.voidware.com/exact/exact.exe

eg

```exact -acc 2000
pi()
increasing precision to 2012 =
3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798214808651328230664709384460955058223172535940812848111745028410270193852110555964462294895493038196442881097566593344612847564823378678316527120190914564856692346034861045432664821339360726024914127372458700660631558817488152092096282925409171536436789259036001133053054882046652138414695194151160943305727036575959195309218611738193261179310511854807446237996274956735188575272489122793818301194912983367336244065664308602139494639522473719070217986094370277053921717629317675238467481846766940513200056812714526356082778577134275778960917363717872146844090122495343014654958537105079227968925892354201995611212902196086403441815981362977477130996051870721134999999837297804995105973173281609631859502445945534690830264252230825334468503526193118817101000313783875288658753320838142061717766914730359825349042875546873115956286388235378759375195778185778053217122680661300192787661119590921642019893809525720106548586327886593615338182796823030195203530185296899577362259941389124972177528347913151557485724245415069595082953311686172785588907509838175463746493931925506040092770167113900984882401285836160356370766010471018194295559619894676783744944825537977472684710404753464620804668425906949129331367702898915210475216205696602405803815019351125338243003558764024749647326391419927260426992279678235478163600934172164121992458631503028618297455570674983850549458858692699569092721079750930295532116534498720275596023648066549911988183479775356636980742654252786255181841757467289097777279380008164706001614524919217321721477235014144197356854816136115735255213347574184946843852332390739414333454776241686251898356948556209921922218427255025425688767179049460165346680498862723279178608578438382796797668145410095388378636095068006422512520511739298489608412848862694560424196528502221066118630674427862203919494504712371378696095636437191728746776465757396241389086583264599581339047802759
```

which automatically increases precision until the error bound is met.

Back then i was planning to port this onto physical calculators like the
hp50g, but i decided i didn't need silly accuracy, only enough.

However, i've always been tantalised by the idea of applying the same approach
to solving, once and for all, the calculator problems of solve and
integrate. Classical algorithms can't bound the error, because they are all
based on sampling.

A rigorous numerical approach works for numerical quadrature by dynamically
creating a Taylor expansion of your function (numerically) and essentially
integrating term by term, whilst keeping check on the overall error
bounds. It's like mixing numeric with symbolic - kind of semi-symbolic, where
you only have to manipulate a series.
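As a toy version of the term-by-term idea (a sketch only, not the dynamic Taylor machinery): integrate exp(x) over [0,1] from its series, carrying a rigorous bound on the dropped tail so the true integral is guaranteed to lie between the two results:

```python
# Toy term-by-term quadrature with a rigorous tail bound (sketch only).
from fractions import Fraction

def integral_exp_enclosure(terms):
    """Enclose the integral of exp(x) over [0,1] (= e - 1) by integrating
    x^k/k! term by term; the dropped tail is below 3/(terms+1)!."""
    fact = 1                 # running k!
    s = Fraction(0)
    for k in range(terms):
        s += Fraction(1, fact * (k + 1))    # integral of x^k/k! on [0,1]
        fact *= k + 1
    tail = Fraction(3, fact * (terms + 1))  # 3/(terms+1)! bounds the rest
    return s, s + tail

lo, hi = integral_exp_enclosure(12)
print(float(lo), "<= e - 1 <=", float(hi))
```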

Anyhow, progress has been made in this area since i last looked. Just
recently, i've been reading "Validated Numerics" by Warwick Tucker, which is
quite a good introduction to the topic.

Edited: 15 Aug 2012, 8:28 p.m.

weird, everyone else's posts wrap except mine.

The editor is too polite to break your pi :)

I too have thought about interval arithmetic in a calculator :-)

I was specifically thinking of the mpfi library, which is based on the correctly rounded MPFR library. They are not decimal arithmetic, however.

- Pauli

Hi Paul,

Regarding interval arithmetic, i've been experimenting with C-XSC, but this is mainly for researching the subject.

For calculators, i'm thinking of extending my own number library. There's a neat way to encode an interval without much overhead:

intervals have a "mid" (m) and a "radius" (r), so [a,b] is the same as [m-r,m+r]. You store the mid value in your float format, then encode the radius as the last digit of the mantissa. Naturally, your implementation has to know not to use this last digit as part of the mid value. But the clever idea is that you don't really need much accuracy in the radius, which now has a limited mantissa (eg 1 digit), and its magnitude correctly lies just below that of the mid (since otherwise you'd be wasting mid mantissa that overlaps the error anyhow).

Actually, just 1 digit of error term is a bit poor; the consequence is wider error bounds and unnecessary extra calculation. However, you may recall that my decimal library works with "digits" of 4 decimal digits, ie base 10,000. I would dedicate one such digit to the radius and benefit from a 4-digit (decimal) error mantissa. I'm thinking this could be an unexpected bonus.
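A rough sketch of the mid/radius arithmetic (ordinary Python floats stand in for the base-10,000 format here, and round_radius_up is a hypothetical helper, just to show why a short, always-rounded-up radius mantissa keeps the enclosure valid):

```python
# Sketch of mid/radius interval arithmetic with a deliberately short
# radius mantissa (illustration; not the actual decimal library).
import math

def round_radius_up(r):
    """Keep ~4 significant digits of the radius, rounding UP so the
    enclosure can only widen, never shrink below the true error."""
    if r == 0:
        return 0.0
    e = math.floor(math.log10(r))
    scale = 10.0 ** (e - 3)
    return math.ceil(r / scale) * scale

def interval_add(a, b):
    (m1, r1), (m2, r2) = a, b
    return (m1 + m2, round_radius_up(r1 + r2))

def interval_mul(a, b):
    (m1, r1), (m2, r2) = a, b
    # |x*y - m1*m2| <= |m1|*r2 + |m2|*r1 + r1*r2
    return (m1 * m2, round_radius_up(abs(m1) * r2 + abs(m2) * r1 + r1 * r2))

x = (3.14159, 1e-5)     # (mid, radius)
y = (2.71828, 1e-5)
print(interval_mul(x, y))
```

(This sketch ignores rounding of the mid itself, which a real implementation would also have to fold into the radius.)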

Regarding multiple precision, there are two possibilities here: either make my library extended precision, which will cost me another "digit" for the length and therefore take up more space and time; or stay with my 25 digits of working precision, just allow the error to swallow what it takes, and hope to deliver enough validated payload for people's actual need (eg 10 digits, say). I think if it could deliver 10 validated digits of numerical quadrature, that would be a big result.

i'm also bearing in mind that, although calculators are much more powerful now than ever, the scheme can't be completely crazy computationally.

-- hugh.

If you enclose something in [pre] .... [/pre], the text will remain untouched; no automatic line break is done. You need to do the line feeds manually, as I've done below. As I didn't want to edit your post, you may replace your one long line with a copy of my reformatted lines if you wish, to prevent the entire thread from having overly long lines.

HTH
Günter

```exact -acc 2000
pi()
increasing precision to 2012 =
3.141592653589793238462643383279502884197169399375105820974944592307
81640628620899862803482534211706798214808651328230664709384460955058
22317253594081284811174502841027019385211055596446229489549303819644
28810975665933446128475648233786783165271201909145648566923460348610
45432664821339360726024914127372458700660631558817488152092096282925
40917153643678925903600113305305488204665213841469519415116094330572
70365759591953092186117381932611793105118548074462379962749567351885
75272489122793818301194912983367336244065664308602139494639522473719
07021798609437027705392171762931767523846748184676694051320005681271
45263560827785771342757789609173637178721468440901224953430146549585
37105079227968925892354201995611212902196086403441815981362977477130
99605187072113499999983729780499510597317328160963185950244594553469
08302642522308253344685035261931188171010003137838752886587533208381
42061717766914730359825349042875546873115956286388235378759375195778
18577805321712268066130019278766111959092164201989380952572010654858
63278865936153381827968230301952035301852968995773622599413891249721
77528347913151557485724245415069595082953311686172785588907509838175
46374649393192550604009277016711390098488240128583616035637076601047
10181942955596198946767837449448255379774726847104047534646208046684
25906949129331367702898915210475216205696602405803815019351125338243
00355876402474964732639141992726042699227967823547816360093417216412
19924586315030286182974555706749838505494588586926995690927210797509
30295532116534498720275596023648066549911988183479775356636980742654
25278625518184175746728909777727938000816470600161452491921732172147
72350141441973568548161361157352552133475741849468438523323907394143
33454776241686251898356948556209921922218427255025425688767179049460
16534668049886272327917860857843838279679766814541009538837863609506
80064225125205117392984896084128488626945604241965285022210661186306
74427862203919494504712371378696095636437191728746776465757396241389
086583264599581339047802759
```

Hugh, thanks. I think this is what I was looking for.

Pauli, thanks. I think I have about 1/3-1/2 of those books.

A couple of thoughts/questions.

Wouldn't your proposed scheme for m and r, using the lowest digit block, limit the width of the intervals? I'd be rather surprised if intervals didn't grow beyond any finite limit on the width. i.e. high accuracy isn't required in the width, but a large range is.

It also seems like there might be rounding issues in determining the bounds of the interval -- perhaps depending on how it were implemented. This could increase the effective width unnecessarily.

Still, interval arithmetic is an interesting idea and one I'd like to have included in the 34S :-)

Pauli

thanks.

i did [pre] .. [/pre]

always making this mistake...

You might be right for certain cases. This is something i have to look into. For example, sometimes there's a need for things like [x,Inf] or [-Inf,x] to represent upper or lower possible values. What i don't know is whether such cases come out in normal calculation AND lead on to useful results. If they only mean your result has gone wide, or are simply a means to encode an exception or a divide by zero, then perhaps we don't really need them.

my working idea is that the range, when it encodes uncertainty, stays within the magnitude of the value; otherwise your value is useless. So when x = 0.5 with uncertainty +/- 10, the 0.5 isn't really telling us anything.

if this idea is workable, i can sneak the uncertainty into the number representation with fairly little overhead. I admit it's not going to encode ranges greater than the number itself. Is that a problem? i don't know yet.

on the subject of rounding errors, the current fashion for implementation is to use a pair of IEEE doubles for your interval [a,b]: when `a' is calculated you set the floating point mode to round DOWN, and when `b' is calculated you set it to round UP.

you have to mess about with setting the float mode (which is usually not very portable), but it means you can use the built-in hardware floating point. This is quite a neat idea and you're sure to maintain a hard bound.
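The same trick can also be emulated without touching the FPU, eg with Python's decimal contexts (a sketch of the idea only, not a production interval library):

```python
# Sketch: directed rounding via two decimal contexts instead of the
# hardware float mode (the idea only, not a full interval library).
from decimal import Decimal, Context, ROUND_FLOOR, ROUND_CEILING

down = Context(prec=12, rounding=ROUND_FLOOR)    # lower-bound arithmetic
up = Context(prec=12, rounding=ROUND_CEILING)    # upper-bound arithmetic

def enclose_sum(values):
    """Return (lo, hi) guaranteed to contain the exact sum."""
    lo = hi = Decimal(0)
    for v in values:
        d = Decimal(v)
        lo = down.add(lo, d)   # every partial sum rounded toward -inf
        hi = up.add(hi, d)     # every partial sum rounded toward +inf
    return lo, hi

lo, hi = enclose_sum(["0.1"] * 7 + ["1e-13"] * 3)
print(lo, "<= exact sum <=", hi)
```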

incidentally, to take an example: when working on binary floats (ie almost always) with numbers like 0.1, what they do is input "[0.1,0.1]" so that they get the greatest representable float <= 0.1 and the least representable float >= 0.1. In other words, even some "constants" wind up being intervals of non-zero width.
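For instance, in Python (3.9+) that enclosure of 0.1 can be built with math.nextafter; stepping one representable double outward in each direction guarantees the true 1/10 lies strictly inside:

```python
# 0.1 is not representable in binary, so "[0.1, 0.1]" becomes a genuine
# interval: step one double outward on each side (sketch, Python 3.9+).
import math
from fractions import Fraction

x = 0.1                               # nearest double to 1/10 (not exact!)
lo = math.nextafter(x, -math.inf)     # one representable step down
hi = math.nextafter(x, math.inf)      # one representable step up
print(Fraction(lo) < Fraction(1, 10) < Fraction(hi))   # prints True
```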

-- hugh.