Partition Value | Quartiles, Deciles and Percentiles
Partition values are statistical measures that divide a dataset into equal parts to help in understanding the distribution and spread of data by indicating where certain percentages of the data fall. The most commonly used partition values are quartiles, deciles, and percentiles.

Quartiles, deciles, and percentiles are different perspectives on the same idea: each is a set of values that partitions the same collection of observations into equal parts, differing only in how many parts are formed.
- The quartiles split the distribution into 4 equal parts (0.25 of the total each).
- The deciles split the distribution into 10 equal parts (0.10 of the total each).
- The percentiles split the distribution into 100 equal parts (0.01 of the total each).
Quartiles
Quartiles divide a dataset into four equal parts, each containing 25% of the data.
The three quartiles are:
- Q1 (First Quartile/ Lower Quartile): 25% of the data fall below this value.
- Q2 (Second Quartile / Median): 50% of the data fall below this value.
- Q3 (Third Quartile / Upper Quartile): 75% of the data fall below this value.
The general equation to find the position of the Quartile is:
Q_{k}=[\frac{k(N+1)}{4}]^{th}~item,~\text{for } k = 1, 2, 3
When we put the values we get,
Q_{1}=[\frac{N+1}{4}]^{th}~item
Q_{2}=[\frac{N+1}{2}]^{th}~item
Q_{3}=[\frac{3(N+1)}{4}]^{th}~item
where N is the total number of observations, Q1 is the first quartile, Q2 is the second quartile, and Q3 is the third quartile.
Example 1:
Calculate the lower and upper quartiles of the following weights of family members: 25, 17, 32, 11, 40, 35, 13, 5, and 46.
Solution:
First of all, organise the numbers in ascending order: 5, 11, 13, 17, 25, 32, 35, 40, 46
Lower quartile, Q_{1}=[\frac{N+1}{4}]^{th}~item
Q_{1}=[\frac{9+1}{4}]^{th}~item
Q1 = 2.5th term
Interpolating between the 2nd and 3rd items:
Q1 = 2nd term + 0.5(3rd term - 2nd term)
Q1 = 11 + 0.5(13 - 11) = 12
Q1 = 12
Upper Quartile, Q_{3}=[\frac{3(N+1)}{4}]^{th}~item
Q_{3}=[\frac{3(9+1)}{4}]^{th}~item
Q3 = 7.5th item
Q3 = 7th term + 0.5(8th term - 7th term)
Q3 = 35 + 0.5(40 - 35) = 37.5
Q3 = 37.5
Example 2:
Calculate Q1 and Q3 for the data related to the age in years of 99 members in a housing society.
| Age (in years) | Number of Members |
|---|---|
| 10 | 20 |
| 18 | 5 |
| 25 | 10 |
| 35 | 30 |
| 40 | 20 |
| 45 | 14 |
Solution:
| Age (in years) | Number of Members | Cumulative Frequency |
|---|---|---|
| 10 | 20 | 20 |
| 18 | 5 | 25 |
| 25 | 10 | 35 |
| 35 | 30 | 65 |
| 40 | 20 | 85 |
| 45 | 14 | 99 |
Q_{1}=[\frac{N+1}{4}]^{th}~item
Q_{1}=[\frac{99+1}{4}]^{th}~item
Q1 = 25th item
Now, the 25th item falls under the cumulative frequency of 25 and the age against this cf value is 18.
Q1 = 18 years
Q_{3}=[\frac{3(N+1)}{4}]^{th}~item
Q_{3}=[\frac{3(99+1)}{4}]^{th}~item
Q3 = 75th item
Now, the 75th item falls under the cumulative frequency of 85 and the age against this cf value is 40.
Q3 = 40 years
Example 3:
Determine the quartiles Q1 and Q3 for the company's salaries listed below.
| Salaries (per day in ₹) | Number of Employees |
|---|---|
| 500 - 600 | 10 |
| 600 - 700 | 12 |
| 700 - 800 | 16 |
| 800 - 900 | 14 |
| 900 - 1000 | 8 |
Solution:
| Salaries (per day in ₹) | Number of Employees | Cumulative Frequency |
|---|---|---|
| 500 - 600 | 10 | 10 (m1) |
| 600 - 700 | 12 (f1) | 22 |
| 700 - 800 | 16 | 38 (m2) |
| 800 - 900 | 14 (f2) | 52 |
| 900 - 1000 | 8 | 60 |
Q_{1}~class=[\frac{N}{4}]^{th}~item
Q_{1}~class=[\frac{60}{4}]^{th}~item
Q1 class = 15th item
Now, the 15th item falls under the cumulative frequency 22 and the salary against this cf value lies in the group 600-700.
Q_{1}=l_{1}+\frac{\frac{N}{4}-m_{1}}{f_{1}}\times{c_{1}}
Q_{1}=600+\frac{\frac{60}{4}-10}{12}\times{100}
Q1 = ₹641.67
Q_{3}~class=[\frac{3N}{4}]^{th}~item
Q_{3}~class=[\frac{3(60)}{4}]^{th}~item
Q3 class = 45th item
Now, the 45th item falls under the cumulative frequency 52 and the salary against this cf value lies in the group 800-900.
Q_{3}=l_{2}+\frac{\frac{3N}{4}-m_{2}}{f_{2}}\times{c_{2}}
Q_{3}=800+\frac{\frac{3(60)}{4}-38}{14}\times{100}
Q_{3}=800+\frac{7}{14}\times{100}=800+50
Q3 = ₹850
Deciles
Deciles divide a dataset into ten equal parts based on numerical values, each containing 10% of the data.

Some commonly used deciles are:
- D1 (First Decile): 10% of the data fall below this value.
- D5 (Fifth Decile / Median): 50% of the data fall below this value.
- D9 (Ninth Decile): 90% of the data fall below this value.
The general equation to find the position of the k-th decile is:
D_{k}=[\frac{k(N+1)}{10}]^{th}~item,~\text{for } k=1,2,\dots,9
When substituting for each, we get,
D_{1}=[\frac{N+1}{10}]^{th}~item
D_{2}=[\frac{2(N+1)}{10}]^{th}~item
\dots
D_{9}=[\frac{9(N+1)}{10}]^{th}~item
where N is the total number of observations, D1 is the first decile, D2 is the second decile, ..., and D9 is the ninth decile.
Example 1: Calculate D1 and D5 from the following weights of family members: 25, 17, 32, 11, 40, 35, 13, 5, and 46.
Solution:
First of all, organise the numbers in ascending order.
5, 11, 13, 17, 25, 32, 35, 40, 46
D_{1}=[\frac{N+1}{10}]^{th}~item
D_{1}=[\frac{9+1}{10}]^{th}~item
D1 = 1st item = 5
D_{5}=[\frac{5(N+1)}{10}]^{th}~item
D_{5}=[\frac{5(9+1)}{10}]^{th}~item
D5 = 5th item = 25
Example 2: Calculate D2 and D6 for the data related to the age (in years) of 99 members in a housing society.
| Age (in years) | Number of Members |
|---|---|
| 10 | 20 |
| 18 | 5 |
| 25 | 10 |
| 35 | 30 |
| 40 | 20 |
| 45 | 14 |
Solution:
| Age (in years) | Number of Members | Cumulative Frequency |
|---|---|---|
| 10 | 20 | 20 |
| 18 | 5 | 25 |
| 25 | 10 | 35 |
| 35 | 30 | 65 |
| 40 | 20 | 85 |
| 45 | 14 | 99 |
D_{2}=[\frac{2(N+1)}{10}]^{th}~item
D_{2}=[\frac{2(99+1)}{10}]^{th}~item
D2 = 20th item
Now, the 20th item falls under the cumulative frequency of 25 and the age against this cf value is 18.
D2 = 18 years
Similarly D_{6}=[\frac{6(N+1)}{10}]^{th}~item
D_{6}=[\frac{6(99+1)}{10}]^{th}~item
D6 = 60th item
Now, the 60th item falls under the cumulative frequency of 65 and the age against this cf value is 35.
D6 = 35 years
Example 3: Determine D4 for the company's salaries listed below.
| Salaries (per day in ₹) | Number of Employees |
|---|---|
| 500 - 600 | 10 |
| 600 - 700 | 12 |
| 700 - 800 | 16 |
| 800 - 900 | 14 |
| 900 - 1000 | 8 |
Solution:
| Salaries (per day in ₹) | Number of Employees | Cumulative Frequency |
|---|---|---|
| 500 - 600 | 10 | 10 |
| 600 - 700 | 12 | 22 (m) |
| 700 - 800 | 16 (f) | 38 |
| 800 - 900 | 14 | 52 |
| 900 - 1000 | 8 | 60 |
For a continuous (grouped) frequency distribution, the position of the k-th decile is found using kN/10:
D_{4}=[\frac{4N}{10}]^{th}~item
D_{4}=[\frac{4(60)}{10}]^{th}~item
D4 = 24th item
Now, the 24th item falls under the cumulative frequency 22 and the salary against this cf value lies in the group 700-800.
D_{4}=l+\frac{\frac{4(N)}{10}-m}{f}\times{c}
D_{4}=700+\frac{\frac{4(60)}{10}-22}{16}\times{100}
D4 = ₹712.5
Percentiles
Centiles are another term for percentiles. Percentiles divide a dataset into 100 equal parts, with each percentile representing the value below which a certain percentage of the data falls. They are commonly denoted as P1, P2, P3, ..., P99.
For example:
- P1: 1% of the data is less than or equal to this value.
- P50: 50% of the data is less than or equal to this value (also called the median).
- P90: 90% of the data is less than or equal to this value (commonly used in performance benchmarking).
The Three Quartiles (special percentiles)
- P25: 25th percentile (also known as Q1) – 25% of the data is below this value.
- P50: 50th percentile (also known as Q2 or the median) – 50% of the data is below this value.
- P75: 75th percentile (also known as Q3) – 75% of the data is below this value.
The general equation to find the position of the k-th percentile is:
P_{k}=[\frac{k(N+1)}{100}]^{th}~item,~\text{for } k=1,2,\dots,99
When substituting for each, we get,
P_{1}=[\frac{N+1}{100}]^{th}~item
P_{2}=[\frac{2(N+1)}{100}]^{th}~item
\dots
P_{99}=[\frac{99(N+1)}{100}]^{th}~item
where N is the total number of observations, P1 is the first percentile, P2 is the second percentile, ..., and P99 is the ninety-ninth percentile.
Example 1: Calculate P20 and P90 from the following weights of family members: 25, 17, 32, 11, 40, 35, 13, 5, and 46.
Solution:
First of all, organise the numbers in ascending order.
5, 11, 13, 17, 25, 32, 35, 40, 46
P_{20}=[\frac{20(N+1)}{100}]^{th}~item
P_{20}=[\frac{20(9+1)}{100}]^{th}~item
P20 = 2nd item
P20 = 11
P_{90}=[\frac{90(N+1)}{100}]^{th}~item
P_{90}=[\frac{90(9+1)}{100}]^{th}~item
P90 = 9th item
P90 = 46
Example 2: Calculate P10 and P75 for the data related to the age (in years) of 99 members in a housing society.
| Age (in years) | Number of Members |
|---|---|
| 10 | 20 |
| 18 | 5 |
| 25 | 10 |
| 35 | 30 |
| 40 | 20 |
| 45 | 14 |
Solution:
P_{10}=[\frac{10(N+1)}{100}]^{th}~item
P_{10}=[\frac{10(99+1)}{100}]^{th}~item
P10 = 10th item
Now, the 10th item falls under the cumulative frequency of 20 and the age against this cf value is 10.
P10 = 10 years
P_{75}=[\frac{75(N+1)}{100}]^{th}~item
P_{75}=[\frac{75(99+1)}{100}]^{th}~item
P75 = 75th item
Now, the 75th item falls under the cumulative frequency of 85 and the age against this cf value is 40.
P75 = 40 years
Example 3: Determine the value of P50 for the company's salary listed below.
| Salaries (per day in ₹) | Number of Employees |
|---|---|
| 500 - 600 | 10 |
| 600 - 700 | 12 |
| 700 - 800 | 16 |
| 800 - 900 | 14 |
| 900 - 1000 | 8 |
Solution:
| Salaries (per day in ₹) | Number of Employees | Cumulative Frequency |
|---|---|---|
| 500 - 600 | 10 | 10 |
| 600 - 700 | 12 | 22 (m) |
| 700 - 800 | 16 (f) | 38 |
| 800 - 900 | 14 | 52 |
| 900 - 1000 | 8 | 60 |
For a continuous (grouped) frequency distribution, the position of the k-th percentile is found using kN/100:
P_{50}=[\frac{50(N)}{100}]^{th}~item
P_{50}=[\frac{50(60)}{100}]^{th}~item
P50 = 30th item
Now, the 30th item falls under the cumulative frequency 38 and the salary against this cf value lies between 700-800.
P_{50}=l+\frac{\frac{50(N)}{100}-m}{f}\times{c}
P_{50}=700+\frac{\frac{50(60)}{100}-22}{16}\times{100}
P_{50}=700+\frac{30-22}{16}\times{100}
P50 = ₹750
Practice Questions on Quartiles, Deciles, and Percentiles
Question 1: Given the dataset: 5, 7, 8, 12, 15, 16, 18, 20, 22, 25, find the quartiles Q1, Q2, and Q3.
Question 2: Consider the dataset: 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90. Calculate the deciles D3, D5, and D7.
Question 3: Given the dataset: 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, determine the 25th percentile P25 and the 75th percentile P75.
Question 4: Using the dataset: 1, 4, 7, 8, 10, 12, 14, 15, 18, 20, 22, find the 40th percentile P40 and the 90th percentile P90.