Posts: 1,368
Threads: 212
Joined: Dec 2006
It takes a seemingly long time but
F⁻¹(0.95, 5, 10)
eventually returns the correct result of roughly 3.3258.
Regrettably, the reciprocal problem of
F⁻¹(0.05, 10, 5)
should return the related result of roughly 1/3.3258 ~= 0.300676, but instead, after a few seconds, we get a Domain Error.
I played around with a few more smallish probabilities and I see the same thing.
This doesn't seem correct. I am guessing that the initial estimate of the quantile is negative, and when this is fed to the Newton refiner it fails because it tries to compute the F statistic with that negative estimated quantile and chokes.
Any clues? This is obviously an error that should be trapped.
Les
Posts: 3,229
Threads: 42
Joined: Jul 2006
Your guess is wrong. The initial estimate is 4.754601... which isn't very close but isn't negative. The Newton steps seem to be converging for the first couple (1.1886504 & 0.000011886) but then it diverges and errors out somewhere...
I'll have a look into this and see if I can figure out what is going awry.
At least it isn't giving an incorrect answer :)
- Pauli
Posts: 653
Threads: 26
Joined: Aug 2010
Quote:
Your guess is wrong. The initial estimate is 4.754601... which isn't very close but isn't negative.
Well, it is very close - if n1 and n2 get swapped. In this case the correct quantile is 4.735063... ;) For further details see my other posts in this thread.
Dieter
Edited: 15 Apr 2012, 12:21 p.m.
Posts: 653
Threads: 26
Joined: Aug 2010
For the record, here's the method I used for the initial guess. I cannot remember whether I did any tests on this, so use at your own risk. ;)
In VBA it looks like this. First, there's the known initial approximation for the normal quantile. The 0.2/u term in the longest line may be replaced by 1/u^2.
Function z_guess(p)
  Const root2pi = 2.506628274631
  If p < 0.5 Then q = p Else q = 1 - p
  If q > 0.2 Then
    ' central region
    u = root2pi * (0.5 - q)
    z = u + u ^ 3 / 6
  Else
    ' tail region
    u = -2 * Log(q)
    z = Sqr(-2 * Log(q * root2pi * Sqr(u - 1))) + 0.2 / u
  End If
  If p < 0.5 Then z = -z
  z_guess = z
End Function
The F guess then is evaluated as follows. Here, p can be used the way Les did.
Function f_guess(p, n1, n2)
  ' Initial estimate for the F quantile, built on z_guess
  r1 = 1 / (n1 - 1)
  r2 = 1 / (n2 - 1)
  h = 2 / (r1 + r2)
  x = z_guess(p)
  k = (x * x - 3) / 6
  w = x * Sqr(h + k) / h - (r1 - r2) * (k + 5 / 6 - 2 / 3 / h)
  f_guess = Exp(2 * w)
End Function
Here, f_guess(0.95, 5, 10) returns 3.303908... and f_guess(0.05, 10, 5) gives 0.3026718... In both cases, the correct 16-digit result is obtained after just four iterations.
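For cross-checking, here is a straightforward Python transcription of both routines (my own port of the VBA listing, not Dieter's code; the constants and formulas are taken verbatim from it):

```python
import math

def z_guess(p):
    # Initial approximation to the standard normal quantile.
    root2pi = 2.506628274631
    q = p if p < 0.5 else 1.0 - p
    if q > 0.2:
        # Central region: short series in u
        u = root2pi * (0.5 - q)
        z = u + u ** 3 / 6.0
    else:
        # Tail region, with the 0.2/u correction term
        u = -2.0 * math.log(q)
        z = math.sqrt(-2.0 * math.log(q * root2pi * math.sqrt(u - 1.0))) + 0.2 / u
    return -z if p < 0.5 else z

def f_guess(p, n1, n2):
    # Initial approximation to the F quantile, built on z_guess.
    r1 = 1.0 / (n1 - 1.0)
    r2 = 1.0 / (n2 - 1.0)
    h = 2.0 / (r1 + r2)
    x = z_guess(p)
    k = (x * x - 3.0) / 6.0
    w = x * math.sqrt(h + k) / h - (r1 - r2) * (k + 5.0 / 6.0 - 2.0 / 3.0 / h)
    return math.exp(2.0 * w)

print(f_guess(0.95, 5, 10))  # about 3.30391 (Dieter quotes 3.303908...)
print(f_guess(0.05, 10, 5))  # about 0.302672, the reciprocal of the above
```

Note that by construction f_guess(1-p, n2, n1) is exactly the reciprocal of f_guess(p, n1, n2): z_guess flips sign, r1 and r2 swap, so w is exactly negated.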
Dieter
Posts: 3,229
Threads: 42
Joined: Jul 2006
Yes, the initial approximation is off. On the "w =" line the sign of x was wrong, resulting in much badness. Fixing this produces the same initial estimates as you're getting, which are much closer to the actual result, so this distribution should speed up now.
I'm going to leave the newly added bisection guard in place; it won't cost any noticeable time if it isn't needed, and it should help prevent any bad cases from wandering around aimlessly.
I already had a step size guard in place, and this was obscuring the issue a bit: the jump back up was really a jump out towards infinity, but it was being curtailed to something more reasonable.
- Pauli
Posts: 3,229
Threads: 42
Joined: Jul 2006
I know what is happening now. The Newton step is oscillating between a pair of values and the solver is giving up because the estimate gets worse, not better.
Now to figure out how to fix this....
- Pauli
Posts: 3,229
Threads: 42
Joined: Jul 2006
Figured it out. I've modified the Newton solver to try bisection steps instead of Newton steps if things go badly. This seems to address the reported problem and will hopefully address many similar cases.
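The technique Pauli describes, falling back to bisection when a Newton step misbehaves, can be sketched like this (a generic illustration of the idea, not the wp34s source):

```python
def safeguarded_newton(f, df, lo, hi, tol=1e-12, max_iter=200):
    """Root of f in [lo, hi]; f(lo) and f(hi) must have opposite signs.

    A Newton step is accepted only if it stays inside the current
    bracket; otherwise a bisection step is taken instead, so the
    search can never wander off or oscillate indefinitely.
    """
    flo, fhi = f(lo), f(hi)
    if flo * fhi > 0.0:
        raise ValueError("root is not bracketed")
    x = 0.5 * (lo + hi)
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:
            return x
        # Keep the bracket tight around the sign change.
        if flo * fx < 0.0:
            hi, fhi = x, fx
        else:
            lo, flo = x, fx
        d = df(x)
        if d == 0.0:
            x_new = 0.5 * (lo + hi)   # derivative vanished: bisect
        else:
            x_new = x - fx / d
            if not (lo < x_new < hi):
                x_new = 0.5 * (lo + hi)  # Newton left the bracket: bisect
        if abs(x_new - x) <= tol * max(1.0, abs(x_new)):
            return x_new
        x = x_new
    return x

# Example: locate sqrt(2) as the root of x^2 - 2 in [0, 2]
r = safeguarded_newton(lambda t: t * t - 2.0, lambda t: 2.0 * t, 0.0, 2.0)
```

The bracket shrinks on every iteration, so even when plain Newton would cycle between two values (as in the reported case), the fallback forces progress.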
A Domain Error will still be raised if the search fails but I'd hope this is a lot less likely than before.
You'll have to wait for Marcus or Pascal to build new images/emulators.
- Pauli
Posts: 653
Threads: 26
Joined: Aug 2010
Pauli,
I am not sure how the F quantile function finally was implemented, and for this distribution I only found some tests for an initial guess. However, my results are different. Please note that the probability here is given by q = 1-p instead of p. This is also important because the F guess may include the guess for the normal quantile, which has to carry the correct sign here!
Here are my results:
q = 0.05, n1 = 5, n2 = 10 => f_guess = 3.30
q = 0.95, n1 = 10, n2 = 5 => f_guess = 0.30 (= 1/3.30)
That's quite close to the exact value, isn't it? ;-)
Regarding the divergent Newton steps: you may want to check if the CDF returns results with sufficient accuracy. Then, I remember we had a discussion on a similar problem with another distribution. The simple and effective solution was limiting the correction to a maximum possible amount. Which here could be the correction applied in the previous iteration.
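The step-limiting idea can be sketched as follows (an illustration of the technique Dieter describes, not code from the 34s): each Newton correction is clamped so its magnitude never exceeds that of the previous one.

```python
import math

def clamped_newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton iteration where each correction is limited to the
    magnitude of the previous one, as a guard against oscillating
    or diverging steps."""
    x = x0
    prev = float("inf")  # no limit on the very first step
    for _ in range(max_iter):
        step = f(x) / df(x)
        if abs(step) > abs(prev):
            # Clamp: never correct by more than the last step did.
            step = math.copysign(abs(prev), step)
        x -= step
        if abs(step) <= tol * max(1.0, abs(x)):
            return x
        prev = step
    return x

# Example: sqrt(2) as the root of x^2 - 2, starting from 1.0
r = clamped_newton(lambda t: t * t - 2.0, lambda t: 2.0 * t, 1.0)
```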
Addendum: I wrote a short program for the 34s emulator and tried the given example (0.05; 10; 5) with the initial guess based on the wrong probability 1-p (iteration starts at 4.7 or 3.3 instead of 0.2 or 0.3). The program threw exactly the same Domain Error as it tried to evaluate the PDF for a large negative x. ;)
Then I tried the example with the correct initial guess 0.30 and the iteration converged to 34 digits (DBLON) within 5 or 6 Newton steps. :)
Maybe we have tracked down the error. If we did, the whole F quantile should even get evaluated faster than before. Pauli, please check how the current implementation behaves if you replace 0.95 by 0.05 and vice versa, and/or if n1 and n2 are swapped. Also, the guess for the normal quantile has to carry the correct sign.
Dieter
Edited: 15 Apr 2012, 12:26 p.m.
Posts: 1,368
Threads: 212
Joined: Dec 2006
Dieter, this could explain why F⁻¹(p) on the calc seems so slow when it doesn't throw an error: there is a poor initial estimate of the quantile due to a parameter mix-up, and the Newton refiner has to run several more times to find the result, if it can find it at all.
It looks to me that the F distribution is implemented as a special case of the incomplete beta function, which is computed via its continued fraction. Even so, F(3.3,5,10) is returned in under a second, and the corresponding PDF is even faster than that. If the inverse CDF requires an initial estimate that takes at most a couple of seconds, plus at most 4 Newton refinements that each compute an F CDF and PDF, the result should take 10 to 15 seconds at most, not the painful 30 seconds or so that I was seeing. I experimented a lot with the inverse t and chi-square routines too, and though they took their time now and then, they tended to return a result quickly. So the inverse F stood out.
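For reference, the relationship Les describes can be sketched in Python: the F CDF is the regularized incomplete beta I_x(d1/2, d2/2) at x = d1·t/(d1·t + d2), with the continued fraction evaluated by the modified Lentz method. This is only an illustration of the approach, not the wp34s source:

```python
import math

def betacf(a, b, x, eps=3e-14, max_iter=300):
    # Continued fraction for the incomplete beta (modified Lentz method).
    tiny = 1e-300
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c = 1.0
    d = 1.0 - qab * x / qap
    if abs(d) < tiny:
        d = tiny
    d = 1.0 / d
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        # Even-numbered coefficient
        aa = m * (b - m) * x / ((qam + m2) * (a + m2))
        d = 1.0 + aa * d
        c = 1.0 + aa / c
        if abs(d) < tiny: d = tiny
        if abs(c) < tiny: c = tiny
        d = 1.0 / d
        h *= d * c
        # Odd-numbered coefficient
        aa = -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))
        d = 1.0 + aa * d
        c = 1.0 + aa / c
        if abs(d) < tiny: d = tiny
        if abs(c) < tiny: c = tiny
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < eps:
            return h
    raise RuntimeError("continued fraction did not converge")

def betai(a, b, x):
    # Regularized incomplete beta I_x(a, b).
    if x <= 0.0: return 0.0
    if x >= 1.0: return 1.0
    ln_front = (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                + a * math.log(x) + b * math.log(1.0 - x))
    front = math.exp(ln_front)
    # Use the expansion on whichever side converges quickly.
    if x < (a + 1.0) / (a + b + 2.0):
        return front * betacf(a, b, x) / a
    return 1.0 - front * betacf(b, a, 1.0 - x) / b

def f_cdf(t, d1, d2):
    # P(X <= t) for X ~ F(d1, d2)
    return betai(0.5 * d1, 0.5 * d2, d1 * t / (d1 * t + d2))
```

With this sketch, f_cdf(3.3258, 5, 10) lands at about 0.95 and f_cdf(0.3007, 10, 5) at about 0.05, matching the examples in this thread.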
I am glad my decade-long obsession with the special functions and their statistical relatives let me catch this.
Les
Posts: 3,229
Threads: 42
Joined: Jul 2006
Yeah, these functions, and especially their inverses, are quite slow. The Newton solver hides inefficiencies like a poor initial estimate all too well.
Hopefully better now or at least once the next build gets done.
- Pauli
Posts: 1,368
Threads: 212
Joined: Dec 2006
Dieter, Pauli, Walter, etc.,
Just deleted a longish post where I declared that the F CDF was not of full accuracy because the incomplete beta on which it relies was itself not of full accuracy.
I was misreading the extended display of my calc, and on double checking my results the wp34s 34-digit output for F(3.3,5,10) is off only 1 ULP vis-à-vis the Mathematica result. F(0.3,10,5) is off by 5 ULP in the 34th digit, but that still isn't too bad given the several places in the code where full internal 39-digit accuracy can be compromised. I would expect that this accuracy is plenty good enough for the Newton solver if the convergence criterion is something reasonable - say, a relative error less than 1e-32 as we see in some of the other code.
If anyone saw my misleading original post, please disregard it.
Les
Posts: 3,229
Threads: 42
Joined: Jul 2006
This level of accuracy isn't guaranteed.
The Newton step solver terminates the search once the relative error between the previous and current terms is less than 1e-24. The extra digits you are seeing are a result of the rapid convergence once it locks on to the solution.
I'll note that we make absolutely no guarantees about the accuracy of anything in double precision mode, beyond "at least as good as single precision". In single precision, on the other hand, I want everything to be within 1 ULP, and correctly rounded where possible, over the sixteen digits available.
If people see sufficient need, I can make the relative error threshold depend on the current mode of operation, rather than being fixed. Is there such a requirement? Does anyone really need these functions carrying thirty or more accurate digits? We're well beyond any reasonable requirement of any statistical analysis I've ever been part of or even seen.
The incomplete beta loops until convergence in all digits is achieved, so it should be fine. The normal distribution uses 1e-32 for its convergence threshold.
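As an illustration of such a mode-dependent stopping rule (thresholds as Pauli states them; Python's decimal module stands in here for the 39-digit decimal arithmetic of the 34s):

```python
from decimal import Decimal, getcontext

getcontext().prec = 39  # the wp34s carries 39 internal digits

def converged(prev, curr, double_precision):
    # Terminate once successive iterates agree to the mode's threshold:
    # 1e-32 relative error in double precision, 1e-24 otherwise.
    threshold = Decimal("1e-32") if double_precision else Decimal("1e-24")
    return abs(curr - prev) <= threshold * abs(curr)
```

Such thresholds are meaningless in ordinary binary64 floats (machine epsilon is about 2.2e-16), which is why the check is shown in 39-digit decimal.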
- Pauli
Posts: 3,229
Threads: 42
Joined: Jul 2006
I went ahead and made the accuracy change. Double precision mode now waits until the relative error is 1e-32 before terminating; this should almost certainly guarantee a result correct to 1 ULP. I'm guessing the cost for this is one or two extra iterations.
Single precision mode remains unchanged.
I've also changed the built-in solver to use the same thresholds, so it should be better in double precision mode as well.
- Pauli
Posts: 1,368
Threads: 212
Joined: Dec 2006
Quote:
I'll note that we make absolutely no guarantees about the accuracy of anything in double precision mode, beyond "at least as good as single precision". In single precision, on the other hand, I want everything to be within 1 ULP, and correctly rounded where possible, over the sixteen digits available.
You know, despite the lack of any such guarantee, I have been duly impressed by the full or near-full double precision accuracy of many of the functions I have obsessively examined. That's why, when I saw that F and incomplete beta seemed to be doing only 15-16 digits, I thought something was wrong, until I caught my own error.
I should clarify that I was commenting above on F and not the inverse F. Given your "we don't guarantee full double precision accuracy" claim, I have to say that within 1 to 5 ULP in the 34th digit is pretty darn good.
Thanks for mulling over this issue for me. You are quite right about real-life requirements being much more modest than the pure quest for near-full accuracy; I just think it's cool when it is approached or achieved. I have been fascinated by these functions and distributions since porting the Numerical Recipes code to FOCAL some years back. It has nothing to do with real-life need. In inferential statistics, we are usually happy with the quantile or p-value computed to at best two or three digits, enough to draw a conclusion about the null hypothesis. In days of yore, tables to at most 5 digits were fine!
cheers,
Les
Posts: 3,229
Threads: 42
Joined: Jul 2006
Quote: I should clarify that I was commenting above on F and not the inverse F. Given your "we don't guarantee full double precision accuracy" claim, I have to say that within 1 to 5 ULP in the 34th digit is pretty darn good.
Oh, I thought you were talking about the quantile function. There isn't a lot more I can easily do with the CDF, at least without carrying more digits and then having to worry about running out of stack space. We're close on these functions; we don't know exactly how close, however.
Quote: I have been fascinated by these functions and distributions since porting the Numerical Recipes code to FOCAL some years back.
Not a terribly well respected source for this kind of code, I believe. At least amongst people who know about such things.
- Pauli
Posts: 1,368
Threads: 212
Joined: Dec 2006
Quote:
Not a terribly well respected source for this kind of code, I believe. At least amongst people who know about such things.
So I have since learned. That said, your code for the incomplete gamma and incomplete beta bears much similarity to the NR routines. I see this particularly in the betacf() routine, where you compute the continued fraction to convergence as the NR books describe; indeed, you seem to go with modified Lentz for gcf() yet the older Wallis method for betacf() that was used in older editions of NR. I have learned over the years that the NR team has attempted to put forth as proprietary much that is already widely known through open sources.
Posts: 3,229
Threads: 42
Joined: Jul 2006
I'm pretty sure I'm using the same algorithms as the NR books, and I've most definitely looked at them: I've got an ancient Fortran edition somewhere, and my Google searches for algorithms often ended up with them.
The series and continued fraction expansions for these two are pretty standard, or at least they are what I found when I went looking. I didn't keep all the references I'd used, unfortunately. As for computing to complete convergence, I do this for several other functions, e.g. the trig functions.
Anyway, if you know of a better method or algorithm to calculate these two functions, I'd definitely be interested in hearing about it (or better yet receiving code :) They are both fairly large and neither is very fast.
- Pauli
Posts: 1,368
Threads: 212
Joined: Dec 2006
I just downloaded and flashed the most recent integration build, and the inverse F CDF behaves just as I would expect it to on a calculator. It is much faster than it was before, but it still chews on the examples with which I started this thread for about 10-15 s on a calc I know is in slow mode because of older batteries. The results match the Mathematica result to 34 digits within 1 ULP. Like you say, better than anyone in real life could reasonably want.
I am a hobbyist and a duffer at all this stuff and really don't know of a better way to do this. The F CDF and inverse CDF are more complicated at every stage than the normal, t, and chi-square. With this fix, I think we should be happy it does as well as it does.
Thanks for humouring me by working on this. I know that most users are probably more interested in the stopwatch fixes.
Les
Posts: 3,229
Threads: 42
Joined: Jul 2006
Quote: Thanks for humouring me by working on this. I know that most users are probably more interested in the stopwatch fixes.
Not at all: you identified a problem with the fundamental purpose of the device (which the stopwatch isn't) and it had to be investigated. Strictly it wasn't an accuracy problem, since the device wasn't returning an incorrect value, but it still needed to be addressed, and in doing so we discovered a true bug and I made the statistical distribution search more stable as well.
I think this is a decent win.
- Pauli
PS: I'm really only a hobbyist at this too :)
Posts: 653
Threads: 26
Joined: Aug 2010
Les, I assume the error is a wrong initial F estimate that is based on 1-p instead of p, and/or on n1 and n2 being swapped. The correct guess should be much closer to the true quantile. Please see my reply to Pauli.
Dieter
Edited: 15 Apr 2012, 12:22 p.m.
