P-Value: Comprehensive Guide to Understand, Apply, and Interpret
Last Updated :
31 Jan, 2024
A p-value is a statistical metric used to assess a hypothesis by comparing it with observed data.
This article delves into the concept of p-value, its calculation, interpretation, and significance. It also explores the factors that influence p-value and highlights its limitations.
What is the P-value?
The p-value, or probability value, is a statistical measure used in hypothesis testing to assess the strength of evidence against a null hypothesis. It represents the probability of obtaining results as extreme as, or more extreme than, the observed results under the assumption that the null hypothesis is true.
In simpler words, the p-value is used to support or reject the null hypothesis during hypothesis testing. In data science, it gives valuable insight into the statistical significance of an independent variable in predicting the dependent variable.
How is the P-value Calculated?
Calculating the p-value typically involves the following steps:
- Formulate the Null Hypothesis (H0): Clearly state the null hypothesis, which typically states that there is no significant relationship or effect between the variables.
- Choose an Alternative Hypothesis (H1): Define the alternative hypothesis, which proposes the existence of a significant relationship or effect between the variables.
- Determine the Test Statistic: Calculate the test statistic, which is a measure of the discrepancy between the observed data and the expected values under the null hypothesis. The choice of test statistic depends on the type of data and the specific research question.
- Identify the Distribution of the Test Statistic: Determine the appropriate sampling distribution for the test statistic under the null hypothesis. This distribution represents the expected values of the test statistic if the null hypothesis is true.
- Calculate the P-value: Based on the observed test statistic and its sampling distribution, find the probability of obtaining the observed test statistic, or a more extreme one, assuming the null hypothesis is true. (The equivalent critical-value approach instead finds the threshold value of the test statistic corresponding to the chosen significance level.)
- Interpret the results: Compare the p-value with the significance level, or equivalently the test statistic with the critical value. If the p-value is smaller than the significance level (that is, the test statistic is larger in magnitude than the critical value), there is evidence to reject the null hypothesis; otherwise, we fail to reject it.
The interpretation of a p-value depends on the specific test and the context of the analysis. The table below lists several popular test statistics used in p-value calculations.
| Test | When to Use | Interpretation |
| --- | --- | --- |
| Z-test | Used when dealing with large sample sizes or when the population standard deviation is known. | A small p-value (smaller than 0.05) indicates strong evidence against the null hypothesis, leading to its rejection. |
| T-test | Appropriate for small sample sizes or when the population standard deviation is unknown. | Similar to the Z-test: a small p-value leads to rejection of the null hypothesis. |
| Chi-square test | Used for tests of independence or goodness-of-fit with categorical data. | A small p-value indicates that there is a significant association between the categorical variables, leading to the rejection of the null hypothesis. |
| F-test (ANOVA) | Commonly used in Analysis of Variance (ANOVA) to compare variances between groups. | A small p-value suggests that at least one group mean is different from the others, leading to the rejection of the null hypothesis. |
| Pearson correlation test | Measures the strength and direction of a linear relationship between two continuous variables. | A small p-value indicates that there is a significant linear relationship between the variables, leading to rejection of the null hypothesis of no correlation. |
In general, a small p-value indicates that the observed data is unlikely to have occurred by random chance alone, which leads to the rejection of the null hypothesis. However, it's crucial to choose the appropriate test based on the nature of the data and the research question, as well as to interpret the p-value in the context of the specific test being used.
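For instance, given a z-statistic, the corresponding two-tailed p-value can be computed directly from the standard normal distribution. The z-value below is a made-up number chosen purely for illustration:

```python
from scipy import stats

# Hypothetical z-statistic from a two-tailed Z-test (illustrative value)
z_statistic = 2.1

# Two-tailed p-value: probability of a result at least this extreme under H0
p_value = 2 * (1 - stats.norm.cdf(abs(z_statistic)))
print(f"z = {z_statistic}, p-value = {p_value:.4f}")
```

Here the p-value comes out below 0.05, so at the conventional significance level the null hypothesis would be rejected.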
P-value in Hypothesis testing
The table below summarizes the possible outcomes of a hypothesis test and the kinds of errors that can occur.

| Truth / Decision | Accept H0 | Reject H0 |
| --- | --- | --- |
| H0 is true | Correct decision (1 − α) | Type I error (α) |
| H0 is false | Type II error (β) | Correct decision (power, 1 − β) |
Type I error: Incorrectly rejecting a true null hypothesis. Its probability is the significance level, denoted by α.
Type II error: Incorrectly failing to reject a false null hypothesis. Its probability is denoted by β; the power of the test is 1 − β.
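The Type I error rate can be seen empirically. Below is a small simulation sketch (the seed, group sizes, and number of experiments are arbitrary choices) that runs many t-tests on data where H0 is actually true; the fraction of false rejections should hover around α = 0.05:

```python
import numpy as np
from scipy import stats

# Simulate many experiments where H0 is actually true (both groups are drawn
# from the same distribution) and count false rejections at alpha = 0.05.
rng = np.random.default_rng(42)
alpha = 0.05
n_experiments = 5000

false_rejections = 0
for _ in range(n_experiments):
    group_a = rng.normal(0, 1, 30)
    group_b = rng.normal(0, 1, 30)  # same mean, so H0 holds
    _, p = stats.ttest_ind(group_a, group_b)
    if p < alpha:
        false_rejections += 1

# The observed Type I error rate should be close to alpha
print(f"Observed Type I error rate: {false_rejections / n_experiments:.3f}")
```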
Let's consider an example to illustrate the process of calculating a p-value for a Two-Sample T-Test:
A researcher wants to investigate whether there is a significant difference in mean height between males and females in a population of university students.
Suppose we have the following data:
- Group 1 (Males): n_1 = 30, \overline{x_1} = 175, and s_1 = 5
- Group 2 (Females): n_2 = 35, \overline{x_2} = 168, and s_2 = 6
Let's walk through the steps of calculating the p-value:
Step 1: Formulate the Null Hypothesis (H0):
H0: There is no significant difference in mean height between males and females.
Step 2: Choose an Alternative Hypothesis (H1):
H1: There is a significant difference in mean height between males and females.
Step 3: Determine the Test Statistic:
The appropriate test statistic for this scenario is the two-sample t-test, which compares the means of two independent groups.
The t-statistic is a measure of the difference between the means of two groups relative to the variability within each group. It is calculated as the difference between the sample means divided by the standard error of the difference. It is also known as the t-value or t-score.
t = \frac{\overline{x_1} - \overline{x_2}}{ \sqrt{\frac{(s_1)^2}{n_1} + \frac{(s_2)^2}{n_2}}}
Where,
- \overline{x_1} = mean of the first sample
- \overline{x_2} = mean of the second sample
- s_1 = first sample's standard deviation
- s_2 = second sample's standard deviation
- n_1 = first sample's size
- n_2 = second sample's size
Therefore,
\begin{aligned}t &= \frac{175 - 168}{\sqrt{\frac{5^2}{30} + \frac{6^2}{35}}}\\&= \frac{7}{\sqrt{0.8333 + 1.0286}}\\&= \frac{7}{\sqrt{1.8619}}\\& \approx \frac{7}{1.364}\\& \approx 5.13\end{aligned}
So, the calculated two-sample t-test statistic (t) is approximately 5.13.
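The same computation can be done in Python from the summary statistics of the example, using only the standard library:

```python
import math

# Summary statistics from the example
n1, x1_bar, s1 = 30, 175, 5   # males
n2, x2_bar, s2 = 35, 168, 6   # females

# Standard error of the difference in means
se = math.sqrt(s1**2 / n1 + s2**2 / n2)

# Two-sample t-statistic
t = (x1_bar - x2_bar) / se
print(f"t-statistic: {t:.2f}")  # ~5.13
```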
Step 4: Identify the Distribution of the Test Statistic:
The t-distribution is used for the two-sample t-test. The degrees of freedom for the t-distribution are determined by the sample sizes of the two groups.
The t-distribution is a probability distribution with tails that are thicker than those of the normal distribution.
df = (n_1+n_2)-2
- where n_1 is the sample size of the first group
- n_2 is the sample size of the second group
So, df = (30 + 35) − 2 = 63
The degrees of freedom (63) represent the variability available in the data to estimate the population parameters. In the context of the two-sample t-test, higher degrees of freedom provide a more precise estimate of the population variance, influencing the shape and characteristics of the t-distribution.
The t-distribution is symmetric and bell-shaped, similar to the normal distribution. As the degrees of freedom increase, the t-distribution approaches the shape of the standard normal distribution. Practically, it affects the critical values used to determine statistical significance and confidence intervals.
Step 5: Calculate the Critical Value.
To find the critical t-value at α = 0.05 with 63 degrees of freedom, we can either consult a t-table or use statistical software, such as the scipy.stats module in Python:
Python3
import scipy.stats as stats

t_statistic = 5.13
degrees_of_freedom = 63
alpha = 0.05

# Two-tailed critical value: t such that P(|T| > t) = alpha under H0
critical_t_value = stats.t.ppf(1 - alpha / 2, degrees_of_freedom)
print(f"Critical t-value at alpha={alpha}, df={degrees_of_freedom}: {critical_t_value}")
Output:
Critical t-value at alpha=0.05, df=63: 1.9983405417721956
Comparing with the T-Statistic:
Since 5.13 > 1.9983, the t-statistic exceeds the critical value. The large t-statistic suggests that the observed difference between the sample means is unlikely to have occurred by random chance alone. Therefore, we reject the null hypothesis.
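Equivalently, the p-value itself can be computed from the same t-statistic and degrees of freedom; it comes out far below 0.05, leading to the same rejection of the null hypothesis:

```python
from scipy import stats

t_statistic = 5.13
degrees_of_freedom = 63

# Two-tailed p-value: probability under H0 of a |t| at least this extreme
p_value = 2 * (1 - stats.t.cdf(abs(t_statistic), degrees_of_freedom))
print(f"p-value: {p_value:.2e}")
```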
How to interpret p-value?
To interpret the p-value, compare it to a chosen significance level (\alpha). During hypothesis testing, we typically assume a significance level of 5% (α = 0.05), which is the probability of rejecting the null hypothesis when it is actually true. The lower the p-value, the stronger the evidence against the null hypothesis. When:
- p ≤ (α = 0.05) : Reject the null hypothesis. There is sufficient evidence to conclude that the observed effect or relationship is statistically significant, meaning it is unlikely to have occurred by chance alone.
- p > (α = 0.05): Fail to reject the null hypothesis. The observed effect or relationship does not provide enough evidence to reject the null hypothesis. This does not necessarily mean there is no effect; it simply means the sample data does not provide strong enough evidence to rule out the possibility that the effect is due to chance.
In case the significance level is not specified, consider the below general inferences while interpreting your results.
- If p > .10: not significant
- If p ≤ .10: slightly significant
- If p ≤ .05: significant
- If p ≤ .001: highly significant
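These informal thresholds are easy to encode as a small helper function (a sketch; the function name and label strings are ours, mirroring the list above):

```python
def significance_label(p):
    """Map a p-value to the informal labels above (when no alpha is specified)."""
    if p <= 0.001:
        return "highly significant"
    elif p <= 0.05:
        return "significant"
    elif p <= 0.10:
        return "slightly significant"
    else:
        return "not significant"

print(significance_label(0.03))  # significant
print(significance_label(0.25))  # not significant
```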
Graphically, the p-value is located at the tails of the distribution of the test statistic, as shown in Fig 1.
Fig 1: Graphical representation of the p-value
What influences p-value?
The p-value in hypothesis testing is influenced by several factors:
- Sample Size: Larger sample sizes tend to yield smaller p-values, increasing the likelihood of detecting significant effects.
- Effect Size: A larger effect size results in smaller p-values, making it easier to detect a significant relationship.
- Variability in the Data: Greater variability often leads to larger p-values, making it harder to identify significant effects.
- Significance Level: A lower chosen significance level increases the threshold for considering p-values as significant.
- Choice of Test: Different statistical tests may yield different p-values for the same data.
- Assumptions of the Test: Violations of test assumptions can impact p-values.
Understanding these factors is crucial for interpreting p-values accurately and making informed decisions in hypothesis testing.
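A small simulation can illustrate the first two factors. Here the same true effect (a mean shift of 0.5 standard deviations, an arbitrary choice) is tested with two different sample sizes; the exact p-values depend on the random seed, but the larger sample will typically produce a far smaller one:

```python
import numpy as np
from scipy import stats

# Same true effect, two sample sizes; larger n tends to give smaller p-values.
rng = np.random.default_rng(0)
p_values = {}
for n in (20, 500):
    group_a = rng.normal(0.0, 1.0, n)
    group_b = rng.normal(0.5, 1.0, n)  # true mean difference of 0.5
    _, p = stats.ttest_ind(group_a, group_b)
    p_values[n] = p
    print(f"n = {n:3d} per group, p-value = {p:.4f}")
```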
Significance of P-value
- The p-value provides a quantitative measure of the strength of the evidence against the null hypothesis.
- Decision-making in hypothesis testing: comparing the p-value to the significance level gives a clear rule for rejecting or failing to reject the null hypothesis.
- The p-value serves as a guide for interpreting the results of a statistical test. A small p-value suggests that the observed effect or relationship is statistically significant, but it does not necessarily mean that it is practically or clinically meaningful.
Limitations of P-value
- The p-value is not a direct measure of the effect size, which represents the magnitude of the observed relationship or difference between variables. A small p-value does not necessarily mean that the effect size is large or practically meaningful.
- Influenced by various factors: as discussed above, the p-value depends on sample size, variability, and the choice of test, so it should not be interpreted in isolation.
The p-value is a crucial concept in statistical hypothesis testing, serving as a guide for making decisions about the significance of the observed relationship or effect between variables.
Implementing P-value in Python
Let's consider a scenario where a tutor believes that the average exam score of their students is equal to the national average (85). The tutor collects a sample of exam scores from their students and performs a one-sample t-test to compare it to the population mean (85).
- The code performs a one-sample t-test to compare the mean of a sample data set to a hypothesized population mean.
- It utilizes the scipy.stats library to calculate the t-statistic and p-value. SciPy is a Python library that provides efficient numerical routines for scientific computing.
- The p-value is compared to a significance level (alpha) to determine whether to reject the null hypothesis.
Python3
import scipy.stats as stats

# Sample exam scores
sample_data = [78, 82, 88, 95, 79, 92, 85, 88, 75, 80]

# Hypothesized population mean (the national average)
population_mean = 85

# One-sample t-test
t_stat, p_value = stats.ttest_1samp(sample_data, population_mean)
print("t-statistic:", t_stat)
print("p-value:", p_value)

# Decision at the 5% significance level
alpha = 0.05
if p_value < alpha:
    print("Reject the null hypothesis. There is enough evidence to suggest a significant difference.")
else:
    print("Fail to reject the null hypothesis. The difference is not statistically significant.")
Output:
t-statistic: -0.3895364838967159
p-value: 0.7059365203154573
Fail to reject the null hypothesis. The difference is not statistically significant.
Since 0.7059 > 0.05, we fail to reject the null hypothesis. Based on the sample data, there isn't enough evidence to claim a significant difference between the exam scores of the tutor's students and the national average. This suggests that the average exam score of the students is statistically consistent with the national average.
Applications of p-value
- Feature selection in regression models: when fitting a model (say, a Multiple Linear Regression model) using forward or backward stepwise selection, the p-value is used to identify the variables that contribute most significantly to predicting the output.
- Effects of drug treatments: the p-value is widely used in medical research to determine whether the constituents of a drug have the desired effect on humans. As a strong statistical tool in hypothesis testing, it informs important decisions such as drawing a business intelligence inference or deciding whether a drug should be used on humans.
Conclusion
The p-value is a crucial concept in statistical hypothesis testing, providing a quantitative measure of the strength of evidence against the null hypothesis. It guides decision-making by comparing the p-value to a chosen significance level, typically 0.05. A small p-value indicates strong evidence against the null hypothesis, suggesting a statistically significant relationship or effect. However, the p-value is influenced by various factors and should be interpreted alongside other considerations, such as effect size and context.