Questionnaires and Schedules Method of Data Collection
Last Updated: 23 Jul, 2025
Questionnaire and Schedule Methods of Data Collection:
A questionnaire is the fundamental instrument for gathering information in survey research. Essentially, it is a set of standardized questions, often called items, that follow a fixed scheme to collect individual data on one or more specific topics.
A schedule is a formalized set of questions, statements, and spaces for answers given to enumerators, who ask the questions to the respondents and note down their responses.
Following are the two ways of collecting data through questionnaires:
- Mailing Method
- Enumerator Method
1. Mailing Method:
In this method of data collection, the investigator prepares a questionnaire related to the investigation and mails it to different individuals with a request to fill it out and send it back. In the questionnaire, the investigator provides the respondents with blank space to write their answers. The investigator usually encloses a self-addressed stamped envelope with the questionnaire so that it can be returned by post. Besides, the respondents are given assurance regarding the secrecy of the information provided by them for the study. The Mailing Method is normally adopted by research workers and other official and non-official agencies.
Suitability of Mailing Method
Mailing Method is suitable in the following cases:
- When the field of investigation is very large.
- When the respondents are educated and are likely to cooperate with the investigators and their investigation.

Merits of Mailing Method
Various advantages of the Mailing Method are as follows:
1. Wide Coverage: This method is useful when the field of investigation is large and the respondents are spread over a wide area. It can also reach respondents in remote areas that other methods cannot easily cover.
2. No Role of Interviewer: It is free from the personal influence of an interviewer, as no interviewer is involved.
3. Time: In this method, respondents have sufficient time to give the required information.
4. Free from Bias: The respondents interpret the questions in their own way; therefore, this method is free from any personal bias of the investigator.
5. Direct Involvement: There is uniformity in the data collected because the informants themselves fill in the questionnaire.
6. Economical: The Mailing Method is economical as it requires less time, effort, and money.
7. Maintains Secrecy: It is suitable when the investigation involves sensitive questions, as it maintains the secrecy of the respondents.
Demerits of Mailing Method
Some of the disadvantages of the Mailing Method are as follows:
1. Educated Informants Only: It cannot be used for illiterate or uneducated respondents.
2. Non-response: The rate of non-response is high as compared with other techniques.
3. Uncertainty: Control over the questionnaire is lost once it has been mailed.
4. Lack of Verification: It is difficult to verify the accuracy of the responses given.
5. Feasibility: A pilot study is essential in this technique to check the feasibility of the questionnaire, which adds to the effort involved.
6. Chances of Misinterpretation: Every respondent interprets the question in their own way, which may not be the same as the sense in which the investigator is asking the question, resulting in vague and ambiguous answers.
Precautions for Mailing Method
The investigator should keep the following points in mind while using the mailing method:
1. The questionnaire should be simple, attractive, and short.
2. The questions under this method should not hurt the sentiments and feelings of the informants, and should not be very personal.
3. The questions should be formed with a proper system, sequence, and planning.
4. The investigator should clearly define the object of enquiry.
5. The investigator should make efforts to get the information back as early as possible.
6. There should be a self-addressed and stamped envelope along with the questionnaire.
2. Enumerator Method:
Under the Enumerator Method, the enumerator takes the questionnaire, personally visits the informants, asks them questions, and notes down their replies. An enumerator is a trained person who collects information and performs all the field work related to the collection of data. The enumerator also helps the respondents understand the true meaning of the questions and fills up the schedule himself to avoid ambiguous and vague replies. Besides, to get reliable information from the respondents, an enumerator should be well-trained, tactful, hard-working, and unbiased. This method is generally used by governments, semi-government organisations, research institutions, etc. The questionnaire filled in by the enumerator is known as a Schedule.
Suitability of Enumerator Method
The Enumerator Method is suitable in the following cases:
- When adequate finance and trained enumerators are available to cover a wide field.
- When the respondents are not literate.

Merits of Enumerator Method
Various advantages of the Enumerator Method are as follows:
1. Uneducated Persons: As the enumerator himself fills in the questionnaire, this method can be used even when the target population is illiterate.
2. Assistance: Respondents can answer complex and difficult questions with the assistance of the enumerator.
3. Less Chance of Bias: It leaves little scope for the responses to be misinterpreted or carelessly recorded.
4. Reliability: The data collected are more reliable and accurate.
5. Responsiveness: There is less chance of non-response, as the enumerator personally visits the respondents.
Demerits of Enumerator Method
Some of the disadvantages of the Enumerator Method are as follows:
1. Costly: It is a costly method and requires a lot of money.
2. Time-Consuming: It is a very time-consuming process, as the enumerator has to personally visit each respondent.
3. Skilled Personnel: The outcome of this technique depends on the availability of trained and skilled enumerators.
4. Personal Bias: The personal bias of the enumerators may influence the results of the data collection.
5. Affordability: It can be afforded only by big organisations.
Precautions of Enumerator Method
The investigator should keep the following points in mind while using the Enumerator Method:
1. The enumerator should be a person of high integrity and should be properly trained in the use of statistical tools.
2. The enumerator should be tactful, polite, hard-working, and honest towards the work assigned to him.
3. It is also important that the informants are properly informed about the objective of the investigation.
4. The enumerator's work should be evaluated from time to time.
5. The questions of a schedule should be simple, clear, and short.