Following on from the above descriptions: in simple linear regression we can calculate

r = slope * sd_x / sd_y,

and r is then the standardized regression coefficient, i.e., how many sds of y are predicted to change for a 1 sd change in x. This is useful when the scales of measurement have no natural interpretation. r is also the Pearson correlation coefficient of the two variables, a measure of their linear association.
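As a quick numerical check of that identity (a minimal sketch using numpy; the simulated data and seed are my own, not from the calculators):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(size=50)

# OLS slope for y on x, and the Pearson correlation
slope = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
r = np.corrcoef(x, y)[0, 1]

# r is the slope rescaled by the ratio of standard deviations
print(np.isclose(r, slope * np.std(x, ddof=1) / np.std(y, ddof=1)))  # True
```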
r^2 is often used as a summary measure of explained variance, but note that squaring discards the sign of r, which carries the direction of the association.
For a single coefficient, F = r^2 / (1 - r^2) * (n - 2) = t^2,
where t = coef/se is the t-statistic for adding the coefficient to the model, i.e., for testing that the slope differs from zero.
So at the level of the simple linear regressions on the HP calculators (i.e., one predictor), these quantities are related to one another and some can be considered redundant. For example, why compute an ANOVA for a regression with a single predictor if a package already provides t? Significance tests using F or t will give the same p-value.
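That equivalence is easy to verify in software (a sketch with scipy on simulated data; the sample size and seed are arbitrary choices of mine):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 30
x = rng.normal(size=n)
y = 1.5 * x + rng.normal(size=n)

res = stats.linregress(x, y)      # simple linear regression, 1 predictor
r2 = res.rvalue ** 2
F = r2 / (1 - r2) * (n - 2)       # ANOVA F for the single slope
t = res.slope / res.stderr        # t-statistic for slope != 0

print(np.isclose(F, t ** 2))      # True: F = t^2 with 1 predictor
p_F = stats.f.sf(F, 1, n - 2)     # p-value from F(1, n-2)
print(np.isclose(p_F, res.pvalue))  # True: same p as the two-sided t-test
```

With one predictor, an F with (1, n-2) degrees of freedom is exactly the square of a t with n-2 degrees of freedom, so the two tests are interchangeable here.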
Nick
Edited: 5 Apr 2012, 5:38 a.m.