High precision calculator in Windows 7


If you need a calculator with lots of digits, try out the calc in Windows 7 and set it to scientific mode.

Pi = 3.1415926535897932384626433832795
3248! = 1.9736342530860425312047080034031e+9997

Not sure if we do need it :)


$ bc -l
bc 1.06
Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.

If you must have RPN, then install calc:

$ calc -d1000 pi | fold -w68


Egan, this is cool! I wasn't aware of the existence of bc. I am assuming that a() is the arctangent function. Where does one find a listing of functions for bc?


Cancel that. Google is my friend:

bc manual


man bc
Also works.

One feature of bc that I like is ibase/obase. It makes work with bases very easy.

Edited: 21 Aug 2009, 10:43 a.m.


bc has a very limited repertoire of built-in math functions. One needs to know a few trig and inverse trig identities to compute arcsine, arccosine, or even tangent. The scripting language seems really straightforward and I don't think it is too hard to build routines to compute more complicated functions. I may work on something to compute Gamma, a longtime pet project of mine :)


bc has a very limited repertoire of built-in math functions

This is true even when you invoke 'bc -l'. dc, on the other hand, has no such set of functions at all (although with effort one could write them).

bc -l math functions:

s (x)    The sine of x, x is in radians.
c (x)    The cosine of x, x is in radians.
a (x)    The arctangent of x, arctangent returns radians.
l (x)    The natural logarithm of x.
e (x)    The exponential function of raising e to the value x.
j (n,x)  The Bessel function of integer order n of x.
I may work on something to compute Gamma, a longtime pet project of mine

I'd like to see that. Especially for any scale, e.g. gamma(pi) to 1000 digits :-)


The MPFR arbitrary-precision floating-point library (built on the GMP arbitrary-precision integer library) includes the gamma function, as well as ln(gamma(x)) and ln(abs(gamma(x))) functions.


I'd like to see that. Especially for any scale, e.g. gamma(pi) to 1000 digits :-)

Now THAT'S a challenge.

The MPFR library that Eric referred me to seems to do it by using the Stirling approximation and predetermining how many terms to go out to given the desired precision and the argument. It then calculates the Bernoulli numbers on the fly.

I have a little routine that does it using Stephen Moshier's qfloat library, but I use a fixed number of terms in the Stirling series for larger arguments, do Pochhammer shifting for smaller ones, and I precompute the Bernoulli coefficients and put them in an array. A qfloat is about 104 or 105 digits, and I get about 102 or 103 of those right. But that is fixed multiple precision, not arbitrary precision.

May look at the Spouge method--not quite as good as Lanczos, but the coefficients are much easier to compute, and one can predetermine the number of terms required thanks to a fairly straightforward error bound estimate.



Here is a simple approach that just came to me. This is NOT how Gamma is typically computed in software, because it is slower than the Stirling series and related approaches. However, it is easier to code, and does not require any advance fiddling to figure out how many terms one needs for a desired number of digits.

1. Take your desired argument a.

2. Compute the left sided incomplete gamma by its series for (a, a+1)

3. Compute the right sided incomplete gamma by continued fraction for (a, a+1)

4. Add 'em together.

The only thing one needs the code to do is derive on the fly a desired convergence epsilon given the desired precision (or scale, as it is called in bc).

Code for the incomplete gamma functions is given in Numerical Recipes, and indeed this is what I have ported (indeed, shamelessly plagiarized) in my various implementations of the incomplete gamma functions and their specialized offspring (like the cumulative normal distribution, error functions, and cumulative chi-square distributions). Some time ago when I was playing around with Stephen Moshier's high-precision qfloat library (about 105 decimal digits), this was my adaptation of the Numerical Recipes code:

#include <qfloat.h>

#define ITMAX 1000000

const qfloat fpmin = (qfloat)"1.0e-9800";
const qfloat one = (qfloat)"1.0";
const qfloat two = one + one;
const qfloat qeps = (qfloat)"3e-105";
const qfloat zero = (qfloat)"0.0";
// I have tried to converge to a tinier epsilon but some inputs make the routine belch!

qfloat gser(qfloat a, qfloat x)
{
    long n;
    qfloat sum, del, ap;

    ap = a;
    del = sum = one/a;
    for (n = 1; n <= ITMAX; n++) {
        ap += one;
        del *= x/ap;
        sum += del;
        if (fabs(del) < qeps*fabs(sum)) return exp(-x+a*log(x))*sum;
    }
    puts("a too large, ITMAX too small in routine gser");
    return zero;
}

qfloat gcf(qfloat a, qfloat x)
{
    long i;
    qfloat an, b, c, d, del, h;

    b = x + one - a;
    c = one/fpmin;
    d = one/b;
    h = d;
    for (i = 1; i <= ITMAX; i++) {
        an = -i*(i-a);
        b += two;
        d = an*d + b;
        if (fabs(d) < fpmin) d = fpmin;
        c = b + an/c;
        if (fabs(c) < fpmin) c = fpmin;
        d = one/d;
        del = d*c;
        h *= del;
        if (fabs(del-one) < qeps) return exp(-x+a*log(x))*h;
    }
    puts("a too large, ITMAX too small in gcf");
    return zero;
}

Really should be no trouble to port something this straightforward to a bc script.

Another little challenge!



My preliminary efforts go sluggishly.

In the gcf half of the code, I am finding I need to precompute an epsilon several orders of magnitude larger than what I would expect for a given scale in order to get convergence. I am also trying to figure out how, if I compute a result to an increased scale internally, I can output it at a lower scale so the extra guard digits I have kept are not displayed. As it stands now, a computed result shows all the extra digits, even if I want to reformat the output to show my final result with a slightly smaller scale.

For example, say my desired scale for display is 100. If my code temporarily increases this internally to 110, the output has a scale of 110, even if I set the scale back to the original 100 before returning the result. There must be some way in bc to truncate a result from a higher scale to a lower scale....

Also, the approach is slow for high precisions, as I thought it would be. There is a reason why Stirling or other approaches derived from or inspired by it are used to this day, and my little idea is not!



I may get more luck implementing [link:http://en.literateprograms.org/Gamma_function_with_Spouge's_formula_(Mathematica)]this[/link].

Spouge's formula is a good option since one can easily set a conservative error bound a priori. For 1000 digits (and a relative precision of 1e-1000), about 1260 terms in the series need to be computed. Thankfully, this can be done recursively--each coefficient is a multiplicative factor of the previous one.

Will give it a go and see what I come up with.


Edited: 24 Aug 2009, 11:18 p.m.


Interesting, this is exactly the approximation for (real and complex) Gamma I'm using in the 20b scientific firmware :-)

- Pauli


The Spouge series is quite simple to implement and I think easy to implement in arbitrary precision, but I think it can be quite slow compared to a well crafted and properly terminated Stirling series, which to this day seems to be the first choice in the software libraries. That said, I have used Nosy on the HP49G+ and Gamma there looks like either a polynomial or CF approximation of sorts.

I have coded the Spouge series in Mathematica just as I would in C++ or bc, but the bc port is giving me headaches since bc lacks a built-in exponentiation function for non-integer exponents, and the routine I have doesn't like it when the number to be raised to a power is 0 (which it often is with Spouge). I am not good at error trapping either!

That said, bc is a lot of fun BECAUSE it is so stripped down. A challenge. Like trying to program the HP-65 or 33C with so few steps available :)



BTW, for anyone who speaks Mathematica, here is the routine I am trying to port to BC:

gamma[zin_, n_] := Module[
  {a, c0, ci, sum, gammapos, i, z, zz, out},
  Block[{$MaxExtraPrecision = Infinity},
   z = Rationalize[Abs[zin]];
   a = Ceiling[n Log[10]/Log[2 Pi]];
   c0 = Exp[-a]*Sqrt[2 Pi];
   ci = Sqrt[a - 1]/Exp[1];
   sum = c0 + ci/z;
   For[i = 2, i < a, i++,
    ci *= (a - i)^(i - 1/2)*(a - i + 1)^(3/2 - i)/(1 - i)/Exp[1];
    sum += ci/(z - 1 + i)
   ];
   N[(z - 1 + a)^(z - 1/2) Exp[-z + 1] sum, n]
  ]
 ]

I coded it more like C and relied little on Mathematica's built-in capabilities (like summing over an index) to make porting easier later. It is several times slower than the built-in Gamma[] at a given argument and precision.


Well, having just ported my little Mathematica routine to C++ and Moshier's 105-digit qfloat library, I have just discovered the Achilles' heel of the Spouge series in a fixed-precision system, albeit one so extended.

Computing the coefficients on the fly with so many evaluations of exp() and log() seems to be contributing to digit loss, whereas my trusty Stirling routine, where I have the Bernoulli numerators and denominators stored in a table (I computed them with Maple and just pasted them over), manages to attain 103 or 104 digit accuracy on average.

I am perplexed as to the issue; Moshier's qfloat exp() and log() overloaded functions are well written and seem to give full 105-digit accuracy.

Must be something fishy in my code that loses sometimes as many as twenty digits for some arguments.

Still can't get the thing to work in bc either! No clue here....

In the 20b you may want to just precompute and store the coefficients rather than compute them on the fly each time. For 15 digits of internal precision, that is between 12 and 19 terms, depending on how conservative your error estimate is, etc.

In an arbitrary precision environment like Mathematica, spouge works great, but it is slower than the built in function.



Edited: 26 Aug 2009, 3:49 a.m.


In the 20b you may want to just precompute and store the coefficients rather than compute them on the fly each time. For 15 digits of internal precision, that is between 12 and 19 terms, depending on how conservative your error estimate is, etc.

I do precompute the coefficients. Precision on the stack is 16 digits, although internally I'm computing to 39. Not all functions are evaluated to the full 39 digits though.

- Pauli


See the rambling below.

I don't think that precomputing the coefficients will save you from the evils of digit loss due to subtracting large numbers that are so close together.

For 16 digits of stack precision, you will need every one of your internal digits for large arguments. But if Pi, log, exp, and your power function internally are full precision, your interim results should be okay.

Just keep in mind that digit loss is due to subtraction loss within the alternating series, not the imprecision of the functions used to compute the terms.




For 16 digits of stack precision, you will need every one of your internal digits for large arguments. But if Pi, log, exp, and your power function internally are full precision, your interim results should be okay.

Yes, these functions are full precision. I'm using IBM's decNumber library which is a very nice BCD floating point implementation including all the nice extras: infinities, NaNs, sub-normals, packed decimal etc.

However, I might revisit the implementation eventually, this approximation was quick and easy and I [i]knew[/i] it would be accurate enough over the entire domain.

I'm very much aware of the risks of loss of precision having done an amount of numerical analysis over the years.

- Pauli


Have you found (or written) any basic trig and inverse trig functions for use with decNumber?


Yes. The ill fated OpenRPN project had the first version of the code. The 20b repurposing site has later versions. I can email the current code if you PM your email address to me.

- Pauli


Found your brouhaha address and sent the code there.

- Pauli


Found that my complex gamma function wasn't working properly so I switched to a Lanczos approximation instead. Smaller code and likely a bit faster.

- Pauli


I've been very intrigued by this thread. I've been trying to code up a usable arbitrary precision gamma function for a long time.

I've been somewhat successful using the incomplete Gamma, extending the tail as far as needed until I get convergence, following an example by Victor Toth. I didn't, however, think about doing the two halves with different implementations to get better convergence. I'll have to try that!

Here's my attempt to simplify Spouge for C. It works well if you keep enough guard digits for values less than 10 (like pi). Note that the powers in the inner loop are all integer, with a square root added. The final computation requires non-integer powers, but that could be done with the exp() and log() functions. I hope this helps someone.

I compiled this with my multi-precision package of choice, MAPM. I just define double to be the MAPM extended precision type. I was able to get 100 digits of precision in less than a second. 1000 digits took about 6 minutes on my PC, so that seems too long. I have to define guard digits with 20% more places than my desired output.

I'm still not sure why it takes so many guard digits. Usually I can get away with 8 extra guard digits, but not here.

double gamma(double z, int decplaces) {
    int k;
    double a, den, sum;

    a = decplaces; // theory says 1.26*decplaces, but this works!
    den = 1;
    sum = sqrt(2*pi);
    for (k = 1; k < decplaces; k++) {
        if (k & 1) { // odd k, add
            sum += sqrt(pow(a-k, k+k-1)) * exp(a-k) / (den * (z + k));
        } else { // even k, subtract
            sum -= sqrt(pow(a-k, k+k-1)) * exp(a-k) / (den * (z + k));
        }
        den = den * k;
    }
    return sum * pow(a + z, z + 0.5) / exp(z + a);
}

Edited: 26 Aug 2009, 12:35 p.m.


Since the terms of the series alternate in sign I am assuming digit loss due to subtraction?

That is the only thing I can think of. In my C code, Moshier's qfloat exp, log, and sqrt give full 105 digits almost all the time in the ranges I am interested in. I also modified the qfloat.h header file to include an inline qfloat atn() function, which I use to compute pi to full 105 digit precision with 4*atn(1).

I like your vastly simplified code--looks like you took the time to reduce the coefficient formulae to an even simpler recursion than what I came up with. Why don't we try this: run two sums, sum1 and sum2, the first one for odd k, the second for even, and do only one big subtraction at the end? Also, in diagnostics you can return sum1 and sum2 as well as the final output to see if there is anything about the relative magnitude of the numbers that contribute to digit loss when subtracted?

When I find time I will port your code to a qfloat version and see what I get. I like your version a lot better so far.

Adding together the right and left incomplete gammas is pretty good, but for lots of digits it can be slow, and I am finding with bc it is a big challenge to define an epsilon for the continued fraction half that will provide the desired convergence for a given arbitrary precision. But with fixed multiple-precision qfloat I am pleased. Still, the approach seems inefficient, and I can see why most libraries stick with trusty Stirling, especially if they can efficiently generate the Bernoulli numbers on the fly for any given precision. I know that my own qfloat gamma, with the Bernoulli numerators and denominators in a table, does really decently.

Arbitrarily extending the series of the left-sided incomplete gamma works fine on fixed precision calculators, but in arbitrary precision settings I don't know how far out you need to go to get desired precision for a given argument. Also, I think if you go too far "noise" gets introduced and precision degrades.

You may want to look up Glen Pugh, the author of this excellent thesis. He is also approachable, and a couple of years ago was very helpful to me when I had questions about his approach to Lanczos, and he even sent me expanded coefficient and error tables (long ago lost when my inbox was corrupted a while back). Here is his webpage.

The 1.26*decplaces estimate of a really is overkill. The wiki page I referred to above uses it, but admits that this is a very generous overestimate, as a key multiplicative factor that can reduce the relative error by a few orders of magnitude is dropped from Spouge's original estimate so that the inequality is easier to solve for a given desired epsilon.

I don't have so much time to come back to this for a few days, but I would be interested to know what you find. This is a great little programming problem.

FWIW, Free42 uses Hugh Steers BCD20, which implements Lanczos with a table of precomputed coefficients.


PS: A question: I am not familiar with the "k & 1" construct. Is this a simple way to test for oddness or evenness? Also, it looks like your routine actually returns gamma(z+1) = z!, which is how Spouge originally wrote it. My routine rewrites the formulae taking the shifting into account at the beginning. This may overly complicate things in my code.


Steve, I ported your routine to Mathematica and set it up so that the argument was handled not as an exact number but as an arbitrary-precision decimal number with the desired number of digits. I worked with 100 digits just for the sake of speed.

Guess what? If we call "sum" the sum of all the positive terms and "sumneg" the sum of all the negative terms, these numbers happen to be almost identical, and with 100 digits of arbitrary precision the first 16 to 20 or so digits often look identical, depending on the argument.

For illustration, here is my Mathematica code. Even if you don't speak Mathematica, it is pretty clear:

gamma_steve[zin_, n_] := Module[
  {k, den, sum, sumneg, term, z},
  z = N[Rationalize[zin], n];
  a = n;
  den = 1;
  sum = Sqrt[2 Pi];
  sumneg = 0;
  For[k = 1, k < n, k++,
   term = Sqrt[(a - k)^(2 k - 1)] Exp[a - k]/(den (z + k));
   If[OddQ[k], sum += term, sumneg += term];
   den *= k
  ];
  {sum, sumneg, N[(sum - sumneg) (a + z)^(z + 1/2)/Exp[z + a], n]}
 ]

Here is output of sum, sumneg, and the final result for Pi and 100 digits. BTW, your code actually computes Gamma[z+1], hence the shift:

In[102]:= gamma_steve[Pi - 1, 100]


Mathematica has a built in function that estimates the precision of the final result:

In[103]:= Precision[%]
Out[103]= 86.0209

The greater the desired precision, the closer together sum and sumneg are, and the more leading digits get lost to subtraction. Hence the need for about 20% more guard digits.

I am thinking Spouge ain't all that good for this application unless one is on a fast machine and can pay the price for so many more guard digits.

FYI, for those who know Mathematica, if I change the definition of z to z=Rationalize[zin], and pass it to the routine as an exact number, Mathematica uses it as such and carries out the computations exactly before converting it to a 100 digit decimal number at the end, maintaining full precision. The N[] command converts it to an approximate 100 digit number, and a cost is paid for that.

Well, at least we now know we aren't going crazy!

Hope you find this helpful. If you have access to Mathematica please email and I will send the notebooks.



The bigger the argument, the worse the digit loss. For 100! and 200! I am seeing digit loss due to subtraction on the order of 30 or 40 percent when we take a = desired precision.


The digit loss caveat is actually covered in the pithy wikipedia entry on the approximation [link:http://en.wikipedia.org/wiki/Spouge's_approximation]here[/link].

Bugger :(


A very nice and helpful analysis!

Spouge's method is the fastest way I've found in my environment to calculate the gamma function to arbitrary precision, even though I have to keep more guard digits around to do it.

I haven't tackled the problem of evaluating the function for larger arguments, but they could be handled with range reduction.

I'm glad you realized my function was actually calculating the factorial, or Gamma(z+1). I forgot I subtracted 1 before calling it in my main routine.


PS: A question: I am not familiar with "k & 1" instruction. Is this a simple way to test for oddness or evenness?

This is a test for oddness. '&' is a bitwise AND operation.

- Pauli


MPFR gives results that match Wolfram Alpha. Here's the trivial main program:

#include <stdio.h>
#include <stdlib.h>
#include <mpfr.h>

#define DIGITS 1000
#define BITS ((DIGITS * 3322)/1000 + 32)

int main (int argc, char *argv [])
{
    mpfr_t pi;
    mpfr_t result;

    mpfr_init2 (pi, BITS);
    mpfr_init2 (result, BITS);

    mpfr_const_pi (pi, GMP_RNDN);

    mpfr_gamma (result, pi, GMP_RNDN);

    mpfr_out_str (stdout, 10, DIGITS, result, GMP_RNDN);
    printf ("\n");

    exit (0);
}

I'd like to see that. Especially for any scale, e.g. gamma(pi) to 1000 digits :-)

Go to Wolfram Alpha and see this.

- Pauli


Go to Wolfram Alpha and see this.

Yes, that is how we'll check the results. For Les, it's about the journey, not the destination.

If you must have RPN, then install calc:

Real men use dc.


(For that matter, even bc traditionally uses dc to perform the calculations.)

Edited: 21 Aug 2009, 12:20 p.m.


Both bc and dc have long histories. According to Wikipedia, "[dc] is one of the oldest Unix utilities, predating even the invention of the C programming language." bc was traditionally implemented as a front end to dc, wrapping the latter's terse syntax in something friendlier. The bc page at Wikipedia references a manual page for bc in the Unix v7 manual set. A POSIX draft standard specified bc in 1991. The current bc in most places is GNU bc, a superset of the POSIX bc.

I'm sure you were all dying to know that. :)



Real men use dc.

I guess real men do not need trig, log, and other transcendental functions either. :-)

I use calc over dc because I want more math functions with a CLI calculator.


Real men can compute transcendental functions using only addition and shifting. :-)

I'm just giving a presentation this afternoon to SVFIG on computing exponential and logarithmic functions without multiplication. It does require having a small table of precomputed constants, though. This is how HP calculators do it, though they use decimal while my presentation only covers binary.

The algorithm is close to one invented by Henry Briggs in 1624, only about 10 years after Napier invented logarithms. There's an explanation and an analysis of the HP-35 firmware on Jacques Laporte's site: http://www.jacques-laporte.org/digit_by_digit.htm


Real men can compute transcendental functions using only addition and shifting. :-)

I was waiting for someone to step up and state that. Yesterday I started writing trig functions for dc. I stopped after 15 minutes to ask myself, why? Perhaps I'm not a real man, just a lazy one, and I'm ok with that. :-)

In the 200LX set the calculator to the proper modes, that is, Radians and RPN, then

1 e-16 -
1 e16 *

-> 3.141592653589793


New record for PI


Don't be misled by the website address: it's only 2.5 trillion digits :-)


What's a discrepancy of only 22.5 trillion digits between friends?
