don't estimate the errors - know them! or rather their upper bounds.

i suggest the field of validated computing, which i have been looking at for some time. This is the use of interval arithmetic to bound your computations.

http://en.wikipedia.org/wiki/Interval_arithmetic

you have to define a new number class with the usual operators, which holds an interval [Inf, Sup], where your true real value lies somewhere in Inf <= x <= Sup. The operators maintain the error window whenever you perform arithmetic. If you have, say, a Taylor series of arctan, you also have to widen the window by the max error of your truncation, and so on.
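in python, a toy version of such a class might look like this (my illustration, not the code behind exact - note it ignores round-off, so a serious implementation would have to round the endpoints outward on every operation):

```python
# minimal interval class: every operation returns an interval that
# contains all possible results of the point operation
class Interval:
    def __init__(self, inf, sup=None):
        self.inf = inf
        self.sup = inf if sup is None else sup  # a point x becomes [x, x]

    def __add__(self, o):
        return Interval(self.inf + o.inf, self.sup + o.sup)

    def __sub__(self, o):
        return Interval(self.inf - o.sup, self.sup - o.inf)

    def __mul__(self, o):
        p = (self.inf * o.inf, self.inf * o.sup,
             self.sup * o.inf, self.sup * o.sup)
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.inf}, {self.sup}]"


def atan_interval(x, terms=10):
    """arctan of an interval 0 <= x <= 1 via its Taylor series, widened
    by the first omitted term (alternating series truncation bound)."""
    s = Interval(0.0)
    xp, x2 = x, x * x                 # xp tracks x^(2k+1)
    for k in range(terms):
        s = s + Interval((-1.0) ** k / (2 * k + 1)) * xp
        xp = xp * x2
    err = x.sup ** (2 * terms + 1) / (2 * terms + 1)
    return Interval(s.inf - err, s.sup + err)
```

atan_interval(Interval(0.5)) returns an interval a few times 1e-8 wide that is guaranteed (up to the round-off this toy ignores) to contain arctan(0.5); more terms narrow it.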

some years ago i successfully built a calculator around this idea (windows exe): www.voidware.com/exact/exact.exe

eg

exact -acc 2000
pi()

increasing precision to 2012 =

3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798214808651328230664709384460955058223172535940812848111745028410270193852110555964462294895493038196442881097566593344612847564823378678316527120190914564856692346034861045432664821339360726024914127372458700660631558817488152092096282925409171536436789259036001133053054882046652138414695194151160943305727036575959195309218611738193261179310511854807446237996274956735188575272489122793818301194912983367336244065664308602139494639522473719070217986094370277053921717629317675238467481846766940513200056812714526356082778577134275778960917363717872146844090122495343014654958537105079227968925892354201995611212902196086403441815981362977477130996051870721134999999837297804995105973173281609631859502445945534690830264252230825334468503526193118817101000313783875288658753320838142061717766914730359825349042875546873115956286388235378759375195778185778053217122680661300192787661119590921642019893809525720106548586327886593615338182796823030195203530185296899577362259941389124972177528347913151557485724245415069595082953311686172785588907509838175463746493931925506040092770167113900984882401285836160356370766010471018194295559619894676783744944825537977472684710404753464620804668425906949129331367702898915210475216205696602405803815019351125338243003558764024749647326391419927260426992279678235478163600934172164121992458631503028618297455570674983850549458858692699569092721079750930295532116534498720275596023648066549911988183479775356636980742654252786255181841757467289097777279380008164706001614524919217321721477235014144197356854816136115735255213347574184946843852332390739414333454776241686251898356948556209921922218427255025425688767179049460165346680498862723279178608578438382796797668145410095388378636095068006422512520511739298489608412848862694560424196528502221066118630674427862203919494504712371378696095636437191728746776465757396241389086583264599581339047802759

which automatically increases precision until the error bound is met.
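that outer loop is easy to sketch. here is a toy version using exact rational endpoints (Fraction) so round-off can't leak in - just to show the retry-with-more-terms idea, not how exact implements it:

```python
# enclose pi between rationals, doubling the series length until the
# enclosure is narrower than the requested tolerance
from fractions import Fraction

def atan_enclosure(x, terms):
    # alternating Taylor series; the first omitted term bounds the error
    s, xp, x2 = Fraction(0), x, x * x
    for k in range(terms):
        s += Fraction((-1) ** k, 2 * k + 1) * xp
        xp *= x2
    err = xp / (2 * terms + 1)        # xp is now x^(2*terms+1)
    return s - err, s + err

def pi_enclosure(digits):
    # Machin's formula: pi = 16*atan(1/5) - 4*atan(1/239)
    tol, terms = Fraction(1, 10 ** digits), 4
    while True:
        a_lo, a_hi = atan_enclosure(Fraction(1, 5), terms)
        b_lo, b_hi = atan_enclosure(Fraction(1, 239), terms)
        # subtracting swaps the endpoints of the second enclosure
        lo, hi = 16 * a_lo - 4 * b_hi, 16 * a_hi - 4 * b_lo
        if hi - lo < tol:
            return lo, hi
        terms *= 2                    # bound not met: more terms, retry
```

pi_enclosure(30) returns two rationals bracketing pi to better than 30 decimal places.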

Back then i was planning to port this onto physical calculators like the hp50g, but i decided i didn't need silly accuracy, only enough.

However, i've always been tantalised by the idea of applying the same approach to solving, once and for all, the calculator problems of solve and integrate. Bounding the error with the classical algorithms fails because they are all based on sampling - no finite set of sample points can certify what the function does between them.
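for solve, the interval version replaces point sampling with evaluation over whole subintervals: if the interval image of f over a box excludes zero, that box provably contains no root and is discarded. a toy branch-and-prune sketch (my illustration; the interval extension F is written by hand here, whereas real systems derive it automatically):

```python
def enclose_roots(f, F, lo, hi, tol=1e-9):
    """Keep only subintervals whose interval image can contain zero;
    a sign change across a tiny surviving box certifies a root."""
    pending, roots = [(lo, hi)], []
    while pending:
        a, b = pending.pop()
        f_lo, f_hi = F(a, b)          # guarantee: f([a,b]) lies in [f_lo, f_hi]
        if f_lo > 0 or f_hi < 0:
            continue                  # zero excluded: provably no root here
        if b - a < tol:
            if f(a) * f(b) <= 0:      # sign change certifies a root inside
                roots.append((a, b))
            continue
        m = (a + b) / 2
        pending += [(a, m), (m, b)]
    return roots

# example: x^2 - 2 on [0, 2]; x^2 is monotone there, so the exact range
# is easy to write down by hand
roots = enclose_roots(lambda x: x * x - 2,
                      lambda a, b: (a * a - 2, b * b - 2),
                      0.0, 2.0)
```

the single surviving box is a rigorous enclosure of sqrt(2), something no amount of sampling can promise.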

A rigorous numerical approach works for numerical quadrature by dynamically creating a Taylor expansion of your function (numerically) and essentially integrating it term by term, whilst keeping a check on the overall error bounds. it's like mixing numeric with symbolic - kind of semi-symbolic, where you only have to manipulate a series.
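a toy version of the term-by-term idea, hard-wired to one function (the real technique, e.g. Tucker's Taylor models, generates the series of an arbitrary f on the fly): enclose the integral of exp over [0,1] by integrating its Taylor series termwise and bounding the integrated Lagrange remainder, with rational endpoints so the bounds stay exact.

```python
# validated quadrature sketch: integrate the Taylor series of exp on
# [0,1] term by term, then add a rigorous bound on the tail
from fractions import Fraction

def integral_exp_01(terms):
    s, fact = Fraction(0), 1                   # fact holds k!
    for k in range(terms):
        s += Fraction(1, (k + 1) * fact)       # integral of x^k/k! over [0,1]
        fact *= k + 1
    # Lagrange remainder of exp on [0,1] is e^t * x^n / n! with
    # 1 <= e^t <= 3, so the integrated remainder lies in [1,3]/((n+1)*n!)
    r_lo = Fraction(1, (terms + 1) * fact)
    r_hi = Fraction(3, (terms + 1) * fact)
    return s + r_lo, s + r_hi
```

the returned pair is guaranteed to bracket the true value e - 1, and widening terms tightens it as fast as the factorial grows.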

Anyhow, progress has been made in this area since i last looked. Just recently, i've been reading "Validated Numerics" by Warwick Tucker, which is quite a good introduction to the topic.

*Edited: 15 Aug 2012, 8:28 p.m. *