
Statistical reasons behind the comparison of F-stat with P-value in the null hypothesis test? | ResearchGate



All Answers (11)

David L Morgan
Portland State University

We can never prove that a hypothesis is true because there are so many other factors that might be
involved. Instead, we can only minimize the likelihood that the results are due to chance alone. Selecting a
p-value threshold such as .05 determines how strongly you will rule out chance. Based on this logic, you can
reject the null hypothesis whenever the results are unlikely to have arisen by chance alone. However, a
rejection (i.e., significant results) does not allow you to "affirm" the hypothesis -- it only allows you to say
that the results are unlikely to be due to chance.
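
To make this concrete, here is a minimal simulation sketch in Python (scipy), with hypothetical group sizes and number of runs: when all groups are drawn from the same distribution, so the null hypothesis is true by construction, roughly 5% of one-way ANOVA tests still come out "significant" at alpha = .05, which is exactly the chance rate that the threshold is chosen to control.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_sims, n_groups, n_per_group = 10_000, 3, 20   # hypothetical simulation settings

false_positives = 0
for _ in range(n_sims):
    # All groups come from the same normal distribution, so H0 is true by construction.
    groups = [rng.normal(loc=0.0, scale=1.0, size=n_per_group) for _ in range(n_groups)]
    _, p = stats.f_oneway(*groups)
    false_positives += p < alpha

# The rate should be close to alpha (about 0.05): these "significant" results are pure chance.
print(f"False-positive rate at alpha={alpha}: {false_positives / n_sims:.3f}")
```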

Jochen Wilhelm
Justus-Liebig-Universität Gießen

You don't compare an F-value to a p-value. You compare the (observed) F-value to the critical F-value. The
critical F-value is the value Fcrit for which Pr(F > Fcrit | H0) = alpha, with alpha being the level of significance
(the "size" of the test). If the observed F > Fcrit, you know that Pr(F > Fobs | H0) is smaller than alpha, in
which case you reject H0.

Today, we don't need to use tables with Fcrit and can calculate Pr(F>Fobs | H0) directly. This is the p-value,
and we can compare this p-value to alpha and reject H0 if p < alpha.

We NEVER accept H0. Failure to reject H0 means that we don't have enough data to interpret the statistic
w.r.t. H0.

The reason behind this procedure is to dare an interpretation of a statistic calculated from the data (e.g. a
mean difference, or the slope of a regression line, or an odds ratio, etc.) only if the data provides enough
information about this statistic. Statistical significance is a proxy for this amount of information.
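
As a minimal sketch of the two equivalent decision rules described above (in Python with scipy.stats, using hypothetical degrees of freedom and an illustrative observed F-value), the classical lookup of Fcrit and the direct p-value computation lead to the same decision:

```python
from scipy import stats

alpha = 0.05            # level of significance (the "size" of the test)
df1, df2 = 3, 36        # hypothetical numerator / denominator degrees of freedom
f_obs = 4.2             # hypothetical observed F-value

# Classical rule: reject H0 if the observed F exceeds Fcrit, where Pr(F > Fcrit | H0) = alpha.
f_crit = stats.f.ppf(1 - alpha, df1, df2)

# Modern rule: compute the p-value Pr(F > F_obs | H0) directly and compare it to alpha.
p_value = stats.f.sf(f_obs, df1, df2)

print(f"F_obs = {f_obs}, F_crit = {f_crit:.3f}, p = {p_value:.4f}")
print("Reject H0" if p_value < alpha else "Fail to reject H0")
# Note: F_obs > F_crit holds exactly when p < alpha, so the two rules always agree.
```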

Busari Yusuf

You are correct, @Jochen Wilhelm, but he is asking at what level the F-stat corresponds to a particular
p-value, so as to decide when to reject or fail to reject the hypothesis.

So the larger the F-stat, the smaller the p-value relative to alpha, and you reject the null hypothesis. Vice
versa, you may have to not reject the null hypothesis; in that case the critical value is larger than the
calculated value and therefore p > .05.
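
A small illustration of this monotone relationship (again only a sketch, using scipy.stats with hypothetical degrees of freedom): as the observed F grows, the tail probability Pr(F > Fobs | H0), i.e. the p-value, shrinks.

```python
from scipy import stats

df1, df2 = 3, 36                        # hypothetical degrees of freedom
for f_obs in (1.0, 2.5, 4.0, 6.0):
    p = stats.f.sf(f_obs, df1, df2)     # Pr(F > f_obs | H0)
    print(f"F = {f_obs:>3}: p = {p:.4f}")   # p decreases as F increases
```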


Busari Yusuf

You may also want to read https://www.statisticshowto.com/probability-and-statistics/f-statistic-value-test/

Belgacem Mohamed El Ghazali
University of Science and Technology Houari Boumediene

Dear all,

Thank you all so much for sharing your knowledge.

However, I am not asking about when we reject or accept; I am asking about the reasons (the statistical
reasons) behind this comparison, from which we decide.

I appreciate all your answers

Jochen Wilhelm
Justus-Liebig-Universität Gießen

Are you not satisfied with the statistical reasons behind this procedure that I gave? And if so: why?

Belgacem Mohamed El Ghazali
University of Science and Technology Houari Boumediene

Dear Prof. Jochen Wilhelm,

My answer was not about being satisfied or not; I am trying to understand your answer. However, I re-asked
my question just to be clearer for the others.

I will ask you again if I do not understand, or if I need more understanding.

Thank you so much sir



David Eugene Booth


Kent State University

Prof. Wilhelm is trying to teach some statistics. The questioner is comparing a statistic (F) to a probability
(the p-value), which does not make sense, e.g., "does your fish taste red?" makes no sense. Wilhelm wants the
questioner to figure out why it makes no sense and then see what should be done. This is called discovery
learning, I believe. D. Booth

Everlyne Akello
Ohio University

You don't compare the p-value and the F statistic, but you can use both as part of your conclusion. The F
statistic can be compared to the F critical value: if the F statistic is greater than the F critical value, then you
reject H0. However, even if the F-stat is greater than the Fcrit, if the p-value is non-significant based on the
alpha level used, we would still fail to reject the null hypothesis for group differences.

Belgacem Mohamed El Ghazali
University of Science and Technology Houari Boumediene

You are welcome



