Grade 11 Scientific Measurement Notes

This document is a lecture note for Grade 11 students at Raya University, focusing on scientific measurement and the scientific method. It explains the importance of measurement in science, the metric system, and outlines the steps of the scientific method, emphasizing the need for systematic observation and experimentation. Additionally, it discusses uncertainty in measurements and how to express it, highlighting the significance of accuracy and precision in scientific work.


RAYA UNIVERSITY

College of Natural and Computational Science


Department of Physics

Chapter 1: Scientific Measurement Lecture Note for

Grade 11
STEM Program

Instructor: Siyoum Eyasu Gebray (MSc)

E-mail: Syumeyasu@[Link]
Chapter 1

Scientific Measurement

Measurement and Practical Work


1.1 Science of Measurement
What is Science?
Science is the process of acquiring knowledge: a system that uses observation and experimentation to describe and explain natural phenomena. “Science is the systematic enterprise of gathering knowledge about the world and organizing and condensing that knowledge into testable laws and theories.” The word science comes from the Latin word “scire,” meaning “to know.” Scientists are like crime scene investigators: they use a process to solve a mystery. The process they use is called the scientific method.
• Science is a systematic and logical approach to discovering how things in the universe work.
• Science is both a body of knowledge and a process.
• Science is the knowledge obtained by observing natural events and conditions in order to discover facts and formulate laws or principles that can be verified or tested.

1.2 Measurement
Scientific measurements must always be represented as a number and a unit. Scientists use many skills
as they investigate the world around them. They make observations by gathering information with their
senses. Some observations are simple. For example, a simple observation would be figuring out the color
or texture of an object. However, if scientists want to know more about a substance, they may need to take
measurements. Measurement is perhaps one of the most fundamental concepts in science. Without the
ability to measure, it would be difficult for scientists to conduct experiments or form theories. Not only is
measurement important in science and the chemical industry, it is also essential in farming, engineering,
construction, manufacturing, commerce, and numerous other occupations and activities. The word “mea-
surement” comes from the Greek word “metron,” which means “limited proportion.” Measurement
is a technique in which properties of an object are determined by comparing them to a standard. Measure-
ments require tools and provide scientists with a quantity. A quantity describes how much of something
there is or how many there are. A good example of measurement is using a ruler to find the length of an
object. The object is whatever you are measuring, the property you are trying to determine is the object’s
length, and the standard you are comparing the object’s length to is the ruler. In general, scientists use a system of measurement still commonly referred to as the “metric system.” The metric system was developed in France in the 1790s and was the first standardized system of measurement. Before that time, people
used a variety of measurement systems. In 1960, the metric system was revised, simplified, and renamed
the Système International d’Unites (International System of Units) or SI system (meters, kilograms, etc.).
This system is the standard form of measurement in almost every country around the world, except for the
United States, which uses the U.S. customary units system (inches, quarts, etc.). The SI system is, however,
the standard system used by scientists worldwide, including those in the United States. There are several
properties of matter that scientists need to measure, but the most common properties are length and mass.
Length is a measure of how long an object is, and mass is a measure of how much matter is in an object.
Mass and length are classified as base units, meaning that they are independent of all other units. In the SI
system, each unit of measure has a base unit.
• A measurement is a collection of quantitative or numerical data that describes a property of an object or event.
• Measurement is the process of obtaining the magnitude of a quantity relative to an agreed standard.
• Measurement is the comparison of an unknown quantity with a known, fixed quantity; equivalently, measurement is the assignment of a number to a characteristic of an object or event, so that it can be compared with other objects or events. Thus, the comparison of any physical quantity with its standard unit is called measurement.

Figure 1.1: The seven base units of the SI system are listed in the table below.

Figure 1.2: Other SI units have been derived from the seven base units. The table below lists some common
derived units.

Some things scientists want to measure may be very large or very small. The SI, or metric, system is
based on the principle that all quantities of a measured property have the same units, allowing scientists to
easily convert large and small numbers. To work with such large or small numbers, scientists use metric
prefixes. Prefixes can be added to base units and make the value of the unit larger or smaller. For example,
all masses are measured in grams, but adding prefixes, such as milli- or kilo-, alters the amount. Measuring
a human’s mass in grams would not make much sense because the measurement would be such a large
number. Instead, scientists use kilograms because it is easier to write and say that a human has a mass of
90 kilograms than a mass of 90,000 grams. Likewise, one kilometer is 1,000 meters, while one millimeter
is 0.001 meters. The table below lists some common prefixes and the quantities they represent.

New scientific instruments have allowed scientists to measure even smaller and larger amounts. Therefore, additional prefixes have been added over the years, such as femto- (10^-15) and exa- (10^18).
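The prefix arithmetic above can be sketched in a few lines of Python by storing each prefix’s power of ten. `PREFIX_EXP` and `convert()` are illustrative names for this sketch, not a standard library API.

```python
# Minimal sketch: converting between SI prefixes by storing each prefix's
# power of ten. PREFIX_EXP and convert() are illustrative names, not a
# standard library API.
PREFIX_EXP = {
    "exa": 18, "peta": 15, "tera": 12, "giga": 9, "mega": 6, "kilo": 3,
    "": 0, "milli": -3, "micro": -6, "nano": -9, "pico": -12, "femto": -15,
}

def convert(value, from_prefix, to_prefix):
    """Convert a value between two prefixed forms of the same base unit."""
    return value * 10 ** (PREFIX_EXP[from_prefix] - PREFIX_EXP[to_prefix])

print(convert(90, "kilo", ""))      # 90 kg expressed in g: 90000
print(convert(1, "kilo", "milli"))  # 1 km expressed in mm: 1000000
```

Storing integer exponents (rather than floats like 1e-3) keeps conversions between prefixes exact.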

1.3 The Scientific Method
What is the Scientific Method?
The scientific method is a series of steps that is used to answer a question or solve a problem. It is a problem
solving process. The set of procedures by which scientists learn about the world is known as the scientific
method.
• The scientific method is a way to ask and answer scientific questions by making observations and doing experiments.
• The scientific method is a systematic way of learning about the world around us and answering questions.
• Scientific method: a logical problem-solving approach used by scientists to answer a scientific question.
• The scientific method is a process that is used to find answers to questions about the world around us.
• The scientific method is a mathematical and experimental technique employed in the sciences.
• The scientific method is a process with the help of which scientists try to investigate, verify, or construct an accurate and reliable description of any natural phenomenon.

1.3.1 Steps of the Scientific Method


The number of steps can vary from one description to another (which mainly happens when data collection and analysis are separated into distinct steps); however, this is a fairly standard list of the scientific method steps that you are expected to know for any science class:
1. Ask a Question / Identify a Problem
The scientific method starts with identifying a problem and forming a question that can be tested. A scientific question can be answered by making observations with your five senses and gathering evidence. The question you ask needs to involve something you can measure, so that you can compare results. For example, “How does fertilizer affect plant growth?” would be a testable scientific question. It is important to do background research to find out what has already been written about your question before starting your experiment.

2. Do Background Research
Conduct background research. Write down your sources so you can cite your references. In the modern
era, a lot of your research may be conducted online. Scroll to the bottom of articles to check the references.
Even if you can’t access the full text of a published article, you can usually view the abstract to see the
summary of other experiments. Interview experts on a topic. The more you know about a subject, the easier
it will be to conduct your investigation.
3. Form a Hypothesis
The third step in the scientific method is to form a hypothesis. A hypothesis is a possible explanation for
a set of observations or an answer to a scientific question. A hypothesis must be testable and measurable.
This means that researchers must be able to carry out investigations and gather evidence that will either
support or disprove the hypothesis. Many trials will be needed before a hypothesis can be accepted as true.
A hypothesis is written as an “If... then...” statement. For example, “If I give my plants fertilizer in the
spring, then they will produce more flowers,” is a simple hypothesis about how plants grow. In this example, you can measure the number of flowers. Using your background research and current knowledge, make an educated guess that answers your question. Your hypothesis should be a simple statement expressing what you think will happen: an explanation, based on prior scientific research or observations, that can be tested; in short, an “educated guess” (a prediction based on research that is testable).
4. Design an Experiment
The next step in the scientific method is to test the hypothesis by designing an experiment. This includes
creating a list of materials and a procedure or a step-by-step explanation of how to conduct the experiment.

Scientists must be careful in how they design an experiment to make sure that it tests exactly what the hypothesis states. A proper experiment compares two or more things but changes only one variable (a factor that changes in an experiment). This type of experiment is called a controlled experiment. For example, when testing the effects of fertilizer on plants, you would test an experimental group (with fertilizer) and a control group (without fertilizer). Then you would compare the results of the groups.
Perform the Experiment
Keeping detailed, accurate records is an important part of the scientific method. Before you begin your
experiment, create a table in which to record your data. Data are the facts, figures, and other evidence
gathered through observations. A data table provides you with an organized way to collect and record your
observations. For example, your data table should list the independent variable (amount of fertilizer) in the
first column and the dependent variable (number of flowers) in the second column. Then you can use your
table to create a graph. Graphs help you understand and use that data. Graphs make it easy to identify
trends and make predictions. The x-axis of your graph represents the independent variable, while the y-axis
of your graph represents the dependent variable. An experiment tests the hypothesis through a series of
controlled trials and sample collection.
5. Analyze the Data
The next step in the scientific method is to analyze the data. Data analysis is the process of interpreting
the meaning of the data we have collected, organized, and displayed in the form of a table or graph. The
process involves looking for patterns—similarities, differences, trends, and other relationships—and think-
ing about what these patterns might mean. The scientist then summarizes their findings and relates them to
their hypothesis. For example, in your analysis of your plant experiment, you would refer to your table or
graph to describe any relationships you observed between the plants with and without fertilizer.
6. Conclusion
The conclusion is a summary of the research and the results of the experiment. This is where you answer
your research question. You make a statement of whether your data supported your hypothesis or not. You
may have data that supported part of your hypothesis and not another part. You may also have data that
did not support your hypothesis at all. In this case, you may explain why the results were different. A conclusion is a statement based on experimental measurements and observations; it includes a summary of the results, whether or not the hypothesis was supported, the significance of the study, and directions for future research. In drawing a conclusion, scientists compare the results from the experiment to the original hypothesis and ask whether the data support it.
For example, if you found that your experimental group produced 40 flowers and your control group pro-
duced 20 flowers, you could draw the conclusion that the fertilizer increased the number of flowers produced
and your hypothesis is correct. If your results do not support your prediction, then perhaps your hypothesis was wrong. There is nothing wrong with that; you simply go back and form a different hypothesis. This process continues, and it may take years to arrive at a correct hypothesis!
7. Communicate the Results (Findings)
The last step of the scientific method is to communicate the results. Communicating the results of your
study to others in the scientific community and/or public is an important final step in the scientific process.
You can communicate your findings in reports, scientific journals, conference presentations, social media
posts, popular press articles, podcasts, etc. How you disseminate this new knowledge will depend on your
target audience(s).
• The scientific method is a way to ask and answer scientific questions by making observations and doing experiments.
The basic Steps in the Scientific Method are:
1. Ask a Question
2. Do Background Research
3. Construct a Hypothesis

4. Test Your Hypothesis by Doing an Experiment
5. Analyze Your Data
6. Draw a Conclusion
7. Communicate Your Results

1.3.2 Overview of the Scientific Method


The scientific method is a process for experimentation that is used to explore observations and answer
questions. Scientists use the scientific method to search for cause and effect relationships in nature. In other
words, they design an experiment so that changes to one item cause something else to vary in a predictable
way. Just as it does for a professional scientist, the scientific method will help you to focus your science
fair project question, construct a hypothesis, design, execute, and evaluate your experiment.

1.4 Uncertainty in Measurement and Significant Digits
1.4.1 Uncertainty and Measurement
All measurements have some uncertainty. We refer to the uncertainty as the error in the measurement. Measurements are always uncertain, and it was long hoped that by designing better and better experiments we could reduce the uncertainty without limit. This turned out not to be the case. No measurement of a physical quantity can be entirely accurate. It is important to know, therefore, just how much the measured value is likely to deviate from the unknown, true value of the quantity. The art of estimating these deviations should probably be called uncertainty analysis, but for historical reasons it is referred to as error analysis. This section contains brief discussions of how errors are reported, the kinds of errors that can occur, how to estimate random errors, and how to carry error estimates into calculated results.
Uncertainty gives the range of possible values of the measurand, a range which covers the true value of the measurand. Thus uncertainty characterizes the spread of measurement results. The interval of possible values of the measurand is commonly accompanied by a confidence level. Therefore, the uncertainty also indicates a doubt about how well the result of the measurement represents the value of the quantity being measured. Uncertainty is a quantification of the doubt about the measurement result.
Uncertainty of measurement: “a parameter, associated with the result of a measurement, that characterises the dispersion of the values that could reasonably be attributed to the measurand.”
Example: The true value of the thickness of a book is 5 cm. Student A uses a meter ruler and measures the thickness to be 4.9 cm with an uncertainty of 0.1 cm. Student B, with a Vernier caliper, finds it to be 4.85 cm with an uncertainty of 0.01 cm. We may say that Student A has the more accurate value, but the less precise one, while Student B has a more precise value, but a less accurate one (due to the faulty, uncalibrated caliper). After sending the caliper to be calibrated, Student B performs the measurement again and finds the thickness to be 4.98 cm; now his value is both more accurate and more precise. Note: we always report a measurement in a way that includes the uncertainty carried by the instrument, for instance: (4.98 ± 0.01) cm.
There are two ways to express uncertainty:
1. Estimated uncertainty
Estimated uncertainty is written with a ± sign. If a ruler has a precision of 0.1 cm, an object with a length of 8.8 cm is recorded as 8.8 ± 0.1 cm.
2. Percent uncertainty
Uncertainty can also be expressed as a percent; this is known as the percent uncertainty. The percent uncertainty is the ratio of the uncertainty to the measured value, multiplied by 100. It indicates the precision of the measured value in terms of a percent: (0.1/8.8) × 100 ≈ 1%.
For example, the best estimate of a length L is 2.59 cm, but due to uncertainty, the length might be as small as 2.57 cm or as large as 2.61 cm. L can be expressed with its uncertainty in two different ways:
Expressed in the units of the measured quantity: L = 2.59 ± 0.02 cm
Expressed as a percentage, which is independent of the units: since 0.02/2.59 ≈ 1%, we would write L = 2.59 cm ± 1%
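The percent-uncertainty formula can be sketched in Python; `percent_uncertainty()` is an illustrative helper for this sketch, not a library function.

```python
# Sketch of the percent-uncertainty formula: the ratio of the absolute
# uncertainty to the measured value, multiplied by 100.
def percent_uncertainty(value, uncertainty):
    """Percent uncertainty = (absolute uncertainty / measured value) x 100."""
    return uncertainty / value * 100

print(round(percent_uncertainty(8.8, 0.1), 1))    # 1.1 (the text quotes ~1%)
print(round(percent_uncertainty(2.59, 0.02), 1))  # 0.8 (likewise quoted as ~1%)
```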

1.4.2 Uncertainty for a Single Measurement


An uncertainty can be established in a single measurement by estimation alone, such as measuring using a
meter stick. Measurements such as these are easily repeatable. This method is not always the best choice
if there is a high degree of random error, such as using a stopwatch for recording the time of an event
that is difficult to repeat. Many instruments will have an uncertainty value given by the manufacturer. For other tools, a useful rule of thumb is that the uncertainty of a measurement tool is half of the smallest increment that tool can measure. The following general rules of thumb are often used to determine the uncertainty in a single measurement when using a scale or digital measuring device.
1. The uncertainty in a scale (analog) measuring device is equal to the smallest increment divided by 2:

σx = (smallest increment) / 2

Example: meter stick (scale device). Uncertainty = σx = 1 mm / 2 = 0.5 mm = 0.05 cm

2. The uncertainty in a digital measuring device is equal to the smallest increment:

σx = smallest increment

Example: digital balance (digital device), m = 5.7513 kg. Uncertainty = σx = smallest increment = 0.0001 kg


For example, if we measure a length of 5.7 cm with a meter stick, this implies that the length can be anywhere in the range 5.65 cm ≤ L ≤ 5.75 cm. Thus, L = 5.7 cm measured with a meter stick implies an uncertainty of 0.05 cm. A common rule of thumb is to take one-half the unit of the last decimal place in a measurement to obtain the uncertainty. Equivalently, the increment is 5.75 − 5.65 = 0.10 cm, so the uncertainty in a scale measurement is σx = (smallest increment) / 2 = 0.10 cm / 2 = 0.05 cm.
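The two rules of thumb can be sketched in Python; `reading_uncertainty()` is an illustrative helper for this sketch.

```python
# Sketch of the two rules of thumb above: half the smallest increment for an
# analog scale, the full smallest increment for a digital readout.
def reading_uncertainty(smallest_increment, digital=False):
    """Return the estimated reading uncertainty of an instrument."""
    return smallest_increment if digital else smallest_increment / 2

print(reading_uncertainty(0.1))                   # meter stick (0.1 cm): 0.05
print(reading_uncertainty(0.0001, digital=True))  # digital balance: 0.0001
```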

1.4.3 Report of Measurements


How do we report measurements? While we never know the true value exactly, we attempt to find its best estimate. When you take a single measurement (an individual trial), the number you record (the reading) is your best estimate for that trial. When we do multiple trials, the average value of the trials is our best estimate of the measurement. How do we report our findings? The most common way to show the range of values that we believe includes the true value is:

1. Uncertainty for a Single Measurement (1 trial)


The absolute uncertainty is the instrument uncertainty or readability error:
Analog instrument: 1/2 of the smallest increment (precision)
Digital instrument: the smallest scale division
measurement = (reading ± absolute uncertainty) unit, e.g., L = (10.66 ± 0.05) cm (the length is anywhere between 10.61 cm and 10.71 cm)
2. Calculating uncertainty range from several repeated measurements

Don’t forget that each individual measurement has readability uncertainty.

In general, any measurement can be stated in the following preferred form:

Measured value of a physical quantity X:
X = Xbest ± σx

where Xbest = best estimate of the measurement and σx = uncertainty (error) in the measurement.
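The preferred form above can be sketched for repeated trials. The readings here are hypothetical, and the half-range uncertainty estimate is one common introductory convention, not the only choice.

```python
import statistics

# Sketch: reporting X = Xbest ± sigma_x from repeated trials. The readings
# are hypothetical; half the spread is a simple introductory uncertainty
# estimate.
trials = [4.98, 4.97, 5.00, 4.99]           # repeated thickness readings, in cm

x_best = statistics.mean(trials)            # best estimate: the average
sigma_x = (max(trials) - min(trials)) / 2   # uncertainty: half the spread

print(f"L = ({x_best:.3f} ± {sigma_x:.3f}) cm")  # L = (4.985 ± 0.015) cm
```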

1.5 Significant Figures


A number is an expression of a quantity. A figure or digit denotes any one of the ten numerals (0, 1, 2, 3, 4, 5, 6, 7, 8, 9). A digit alone, or in combination, serves to express a number. A simple way of indicating the probable uncertainty associated with an experimental measurement is to round the result.
Calculating uncertainties in calculations involving measurements (error propagation) can sometimes be time consuming. A quicker, approximate method for determining the number of significant figures in a calculation is to use a couple of rules.

DEF: A significant figure is a reliably known digit.
• Because zeros serve both as counters and to set the decimal point, they present a problem when determining significant figures in a number.
A significant figure is a digit which denotes the amount of the quantity in the place in which it stands. The digit zero is a significant figure except when it is the first figure in a number. Significant figures can be defined as the number of digits used to represent a measured value; the more significant figures, the more precise the quantity. The number of significant figures in a value is all the digits between and including the first non-zero digit from the left, through the last digit. In the measured value of a physical quantity, the digits about whose correctness we are sure, plus the next doubtful digit, are called the significant figures. All digits in a measured quantity (mass, length, volume, temperature, etc.), including the uncertain one, are significant figures.
Significant digits are all the certain digits plus one uncertain digit. The significant figures (sometimes called significant digits) of a measured value include all the numbers that can be read directly from the instrument scale, plus one doubtful or estimated number: the estimated fraction of the smallest division (the least count). The number of significant figures in a result is simply the number of figures that are known with some degree of reliability.

1.5.1 Rules for Significant Figures


A. Rules for Determining Significant Figures in a Number
1. All non-zero digits (1, 2, 3, 4, 5, 6, 7, 8, 9) are always significant.
i. 457 cm (3 S.F)
ii. 0.25 g (2 S.F)
2. Zeros between nonzero digits are significant:
i. 1005 kg (4 S.F)
ii. 1.03 cm (3 S.F)
iii. 40500 (3 S.F)
3. Zeros to the left of the first nonzero digit in a number are not significant:
i. 0.02 g (1 S.F)
ii. 0.0026 cm (2 S.F)
4. When a number ends in zeros that are to the right of the decimal point, they are significant:
i. 0.0200 (3 S.F)
ii. 3.0 cm (2 S.F)

Zeros at the end of a number that does not include a decimal point are not significant.
22000 (2 sig figs); 400 (1 sig fig)

All zeros to the right of a non-zero digit in the decimal part are significant, e.g., 1.4750 has 5 significant figures. When a number ends in zeros that are not to the right of a decimal point, the zeros are not necessarily significant: 190 miles may be 2 or 3 significant figures, and 50,600 calories may be 3, 4, or 5 significant figures. The potential ambiguity in the last rule can be avoided by the use of standard exponential, or “scientific,” notation. For example, depending on whether 3, 4, or 5 significant figures is correct, we could write 50,600 calories as:

5.06 × 10^4 calories (3 significant figures)
5.060 × 10^4 calories (4 significant figures), or
5.0600 × 10^4 calories (5 significant figures).
5. Exact numbers have an infinite number of sig figs.

Example: 1 dozen is 12; the number 12 is exact.

6. All digits in the coefficient of a number written in proper scientific notation are significant.

1.030 × 10^4 g (4 sig figs); 1.03 × 10^4 g (3 sig figs)
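The counting rules above can be sketched as a small parser. `sig_figs()` is an illustrative helper, not a standard routine; it follows this document’s convention that trailing zeros without a decimal point are not significant, and it assumes proper scientific notation with a single nonzero leading digit.

```python
# Sketch: counting significant figures from a number written as a string,
# following rules 1-4 and 6 above.
def sig_figs(s):
    s = s.strip().lstrip("+-")
    if "e" in s.lower():                        # rule 6: every coefficient
        coefficient = s.lower().split("e")[0]   # digit is significant
        return len(coefficient.replace(".", "").lstrip("0"))
    if "." in s:
        # rules 3 and 4: leading zeros never count; trailing zeros after
        # the decimal point do count
        return len(s.replace(".", "").lstrip("0"))
    digits = s.lstrip("0")                      # no decimal point: trailing
    return len(digits.rstrip("0"))              # zeros are not counted

for example in ["457", "0.0026", "1005", "40500", "0.0200", "22000", "1.030e4"]:
    print(example, sig_figs(example))  # 3, 2, 4, 3, 3, 2, 4
```

The last branch encodes the ambiguity noted above: without a decimal point, trailing zeros are treated as placeholders rather than measured digits.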

B. Significant Figures in Algebraic Operations


When measured values are combined arithmetically, the precision of the result is limited by the least precise measurement. Two rules make this concrete.
Rule for Adding and Subtracting Significant Figures
When measurements are added or subtracted, the number of decimal places in the final answer should equal the smallest number of decimal places of any term. That is, the number of decimal places in the result equals the number of decimal places in the least precise measurement.
Example 1:

If L1 = 4.326 m and L2 = 1.50 m,

then L1 + L2 = (4.326 + 1.50) m = 5.826 m.
Since L2 was measured to only two decimal places, L1 + L2 is reported as 5.83 m (3 sig figs).

Example 2:

9.65 cm + 8.4 cm − 2.89 cm = 15.16 cm

Note that the least precise measurement is 8.4 cm. Thus, the answer must be given to the nearest tenth of a centimeter, even though that requires 3 significant digits. The appropriate way to write the answer is 15.2 cm.
Example 3: In addition and subtraction, the number of significant digits in the result is governed by where the rightmost digit appears in all of the numbers: the result is rounded at the most significant uncertain digit.

Rule for Multiplying and Dividing Significant Figures


When measurements are multiplied or divided, the number of significant figures in the final answer should
be the same as the term with the lowest number of significant figures.
Example 1:

X = 45 N / ((3.22 m) × (2.005 m)) = 6.97015 N/m²

The least significant factor (45) has only two (2) digits, so only two are justified in the answer. The appropriate way to write the answer is X = 7.0 N/m².
Example 2: In multiplication or division it is usually acceptable to keep the same number of significant
figures in the product or quotient as are in the least precise factor.
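Both rules can be sketched in Python. `round_to_sig_figs()` is an illustrative helper (not a library function) that locates the leading digit with a base-10 logarithm.

```python
import math

# Sketch of the two rounding rules for calculated results.
def round_to_sig_figs(x, n):
    """Round x to n significant figures (multiplication/division rule)."""
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))   # position of the leading digit
    return round(x, -exponent + (n - 1))

# Addition/subtraction: keep the fewest decimal places (one, from 8.4 cm).
print(round(9.65 + 8.4 - 2.89, 1))                # 15.2
# Multiplication/division: keep the fewest significant figures (two, from 45 N).
print(round_to_sig_figs(45 / (3.22 * 2.005), 2))  # 7.0
```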

C. Rules of Rounding Off Significant Figures


1. If the digit to be dropped is less than 5, then the preceding digit is left unchanged.
e.g., 1.54 is rounded off to 1.5.
2. If the digit to be dropped is greater than 5, then the preceding digit is raised by one.
e.g., 2.49 is rounded off to 2.5.
3. If the digit to be dropped is 5 followed by digits other than zero, then the preceding digit is raised by one.
e.g., 3.552 is rounded off to 3.6.
4. If the digit to be dropped is 5, or 5 followed only by zeros, then the preceding digit is raised by one if it is odd and left unchanged if it is even. e.g., 3.750 is rounded off to 3.8 and 4.650 is rounded off to 4.6. This rule means that if the digit to be dropped is 5 followed only by zeros, the result is always rounded to the even digit. The rationale is to avoid bias in rounding: half of the time we round up, half the time we round down.
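Rule 4 is the “round half to even” convention, which Python’s `decimal` module implements as `ROUND_HALF_EVEN`. A minimal sketch (values are given as strings because binary floats cannot represent numbers like 4.65 exactly):

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Sketch: applying the round-half-to-even rule exactly with the decimal
# module, reproducing the rounding examples above.
for value in ["1.54", "2.49", "3.750", "4.650"]:
    rounded = Decimal(value).quantize(Decimal("0.1"), rounding=ROUND_HALF_EVEN)
    print(value, "->", rounded)   # 1.5, 2.5, 3.8, 4.6
```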

1.6 Errors & Types of Errors in Measurement


The word “error” suggests a kind of mistake or blunder. But in science that is not the case. All scientific
measurements have an uncertainty in them. No physical quantity, such as a time or length or mass, etc. can
be measured exactly. “Error” means this uncertainty. These errors are not blunders or mistakes and they
cannot be completely eliminated by simply being very careful. But we try to make these errors as small as
reasonably possible and we try to determine a good estimate of how big they are. We will use “error” and
“uncertainty” interchangeably. Any measurement is approximate because neither the measuring instrument
nor the measuring procedure can be absolutely perfect. If you measure the same thing two different times,
the two measurements probably won’t be exactly the same. So, in any measurement there is always a certain
difference, called error, between its result and the true value of what you are measuring. Thus, an error is
the difference between an individual measurement and the true value (or accepted reference value) of the
quantity being measured. In scientific settings, error is defined as the difference between the measured value and the actual value, where the actual value is a known value, sometimes referred to as a standard. An error of measurement can also be defined as the difference between the actual value of a quantity and the value obtained by a measurement. The lack of accuracy in a measurement, due to the limited accuracy of the instrument or to any other cause, is called an error. The term error has two slightly different meanings:
1. Error refers to the difference between a measured value and the “true” or “known” value.
2. Error often denotes the estimated uncertainty in a measurement or experiment.
Error: deviation from the true value of the measured variable.

1.6.1 Sources of Error


Generally, the sources of error are:
1. The experimental design/experimenter
2. Faulty or uncalibrated instruments

3. Nature’s random behavior
Measurement errors can arise from three possible origins: the measuring device, the measurement procedure, and the measured quantity itself. Usually the largest of these errors will determine the uncertainty in the data.

1.6.2 Types of Errors (Uncertainties)


No measurement can be made with perfect accuracy, but it is important to find out what the accuracy
actually is and how different errors have entered into the measurement. A study of errors is a first step in
finding ways to reduce them. Such a study also allows us to determine the accuracy of the final test result.
Errors may come from different sources and are usually classified under three main headings as:
1. Gross Errors
2. Systematic Errors
3. Random Errors

1.6.3 Gross Error (Human Error)


Gross errors are largely human errors: among them misreading of instruments, incorrect adjustment and improper application of instruments, and computational mistakes. They are errors due to human mistakes in using instruments, recording observations, and calculating measurement results. Gross errors are caused by experimenter carelessness or equipment failure; they mainly cover human mistakes in reading instruments and in recording and calculating measurement results.
It is not possible to estimate their value mathematically. Gross errors may be of any amount, so their mathematical analysis is impossible. They are therefore avoided by adopting two means:
1. Great care must be taken in reading and recording the data.
2. Two, three, or even more readings should be taken of the quantity under measurement.
Methods of elimination or reduction:
1. Careful attention to detail when making measurements and calculations.
2. Awareness of instrument limitations.
3. Use two or more observers to take critical data.
4. Taking at least three readings or reduce possible occurrences of gross errors.
5. Be properly motivated to the importance of correct results.
Example:1- Due to oversight, The read of Temperature as 31.5 while the actual reading may be 21.5 .
Example:2- Misunderstanding the unit in case of digital devices (21V instead of 21mV).
• A wrong scale may be chosen in analog instruments.
• Transpose of the readings while recording. (24.9 mV instead of 29.4 mV).

1.6.4 Systematic Errors


• Systematic errors arise from procedures, instruments, bias or ignorance. They bias every measurement
in the same direction; that is, they cause your measurement to be consistently higher or lower than the
accepted value.
• Systematic (or determinate) errors are instrumental, methodological, or personal mistakes causing “lopsided”
data, which is consistently deviated in one direction from the true value. Examples of systematic
errors: an instrumental error results when a spectrometer drifts away from its calibrated settings; a method-
ological error is created by using the wrong indicator for an acid-base titration; and a personal error occurs
when an experimenter records only even numbers for the last digit of buret volumes.
• Systematic errors (also called bias errors) are consistent, repeatable errors. For example, suppose the
first two millimeters of a ruler are broken off and the user is not aware of it. Everything he or she measures
will be too short by two millimeters – a systematic error.
• A systematic error is a consistent error that can be detected and then avoided or corrected.
• Systematic error (determinate error) is built-in, inherent error. It always occurs in the same direction
each time: it is always high or always low. A systematic error can be corrected by proper calibration or by
running controls or blanks, e.g. a thermometer that consistently gives readings 2 °C too low. Large
systematic errors lower the accuracy of a measurement.
• Systematic errors can be identified and eliminated after careful inspection of the experimental methods,
cross-calibration of instruments, and examination of techniques.
Systematic errors are classified into three categories:
1. Instrumental Errors
2. Environmental Errors
3. Observational Errors
Instrumental Errors
These errors arise due to three main reasons:
1. Inherent shortcomings of the instrument.
Example: if the spring used in a permanent-magnet instrument has become weak, the instrument will always
read high. Errors may also be caused by friction, hysteresis, or even gear backlash.
2. Misuse of the instruments.
3. Loading effects of the instruments.
How to estimate:
1. Compare with more accurate standards
2. Determine if error is constant or a proportional error
Methods of reduction or elimination:
1. Careful calibration of instruments.
2. Inspection of equipment to ensure proper operation.
3. Applying correction factors after finding instrument errors.
4. Use more than one method of measuring a parameter.
Environmental Errors
These errors are due to conditions external to the measuring device, including conditions in the area sur-
rounding the instrument. These may be effects of temperature, pressure, humidity, dust, vibrations, or of
external magnetic or electrostatic fields.
How to estimate:
Careful monitoring of changes in the variables; calculating expected changes.
Methods of reduction or elimination:
1. Hermetically seal equipment and components under test.
2. Maintain constant temperature and humidity by air conditioning.
3. Shield components and equipment against stray magnetic fields.
4. Use equipment that is not greatly affected by environmental changes.
Observational Errors
There are many sources of observational errors:
1. Parallax, i.e. the apparent displacement of the reading when the line of vision is not normal to the scale
(viewing the measurement from different angles).
2. Inaccurate estimate of the average reading.
3. Wrong scale reading and wrong recording of the data.
4. Incorrect conversion of units between consecutive readings.

1.6.5 Random (Indeterminate) Errors
• Random (or indeterminate) errors are caused by uncontrollable fluctuations in variables that affect
experimental results. For example, air fluctuations occurring as students open and close lab doors cause
changes in pressure readings. A sufficient number of measurements results in evenly distributed data
scattered around an average value or mean. This positive and negative scattering of data is characteristic of
random errors. Random errors are uncontrollable differences between measurements arising from the
environment, the equipment, or other sources, no matter how well designed and calibrated the tools are.
• Random errors are unbiased small variations that have both positive and negative values.
• Random errors in experimental measurements are caused by unknown and unpredictable changes
occurring in the experiment. They can be reduced by increasing the number of readings and using the
arithmetic mean: in general, making multiple measurements and averaging reduces the effect of random
errors. A random error has an equal chance of being positive or negative. It is always present and cannot
be corrected.
• Random error (indeterminate error): a measurement has an equal probability of being too high or too
low. It is due to limitations in an experimenter’s skill or ability to read scientific measurements, and it
cannot be corrected.
• Random errors (also called precision errors) are caused by a lack of repeatability in the output of the
measuring system. The most common sign of random errors is scatter in the measured data. For example,
background electrical noise often results in small random errors in the measured output.
• Random errors are unrepeatable, inconsistent errors in the measurements, resulting in scatter in the data.
The random error of one data point is defined as the reading minus the average of the readings.
• Random error is always present and cannot be corrected. It relates to the precision of measurements in
the laboratory and is the statistical uncertainty in the last digits of the precision.
Examples: unknown events that cause small variations in measurements, quite random and unexplainable.
How to estimate: take many readings and apply statistical analysis to the unexplained variations.
Methods of reduction:
1. Careful design of measurement apparatus to reduce unwanted interference.
2. Use of statistical evaluation to determine best true estimate of measurement readings.
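The averaging procedure described above can be sketched in Python: repeated readings are combined into a mean (the best estimate) and a standard deviation that quantifies the random scatter. The readings below are invented for illustration.

```python
import statistics

# Hypothetical repeated readings of the same length, in cm.
# The scatter about the mean is the signature of random error.
readings = [12.4, 12.6, 12.5, 12.7, 12.3, 12.5, 12.6, 12.4]

mean = statistics.mean(readings)        # best estimate of the true value
spread = statistics.stdev(readings)     # sample standard deviation
sem = spread / len(readings) ** 0.5     # standard error of the mean

print(f"best estimate = {mean:.2f} cm")
print(f"scatter (stdev) = {spread:.2f} cm")
print(f"uncertainty of mean = {sem:.2f} cm")
```

Taking more readings shrinks the standard error of the mean by a factor of 1/√N, which is why averaging suppresses random error.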

1.6.6 Absolute and Relative Errors


The error in measuring instruments can be represented in two ways: Absolute and Relative. Absolute error
and relative error are two types of experimental error. You’ll need to calculate both types of error in science,
so it’s good to understand the difference between them and how to calculate them.

Absolute Error (∆E)
• The absolute error of a measurement is the difference between the measured value and the true value. If
the measurement result is low, the sign is negative; if the measurement result is high, the sign is positive.
• Absolute error is an estimate of the difference between the measured value and the real (true) value; it is
also known simply as the error. It is defined as the difference between the true value At and the measured
value Am:
Absolute Error = ∆E = measured value − true value = Am − At
Example 1: An ammeter reads 6.7 A and the true value of the current is 6.54 A. The absolute error is ∆E =
Am − At = 6.7 − 6.54 = 0.16 A.
Example 2: If the exact mass of an object is 5.0 kg and you estimated the mass to be between 4.8 kg and
5.2 kg: mass m = 5.0 kg, absolute error ∆m = 0.2 kg. Thus the mass is m = 5.0 ± 0.2 kg.
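The worked example can be checked with a few lines of Python; the helper function below is illustrative, not part of any standard library.

```python
def absolute_error(measured, true):
    """Absolute error: Delta E = measured value - true value."""
    return measured - true

# Example 1: ammeter reads 6.7 A, true current 6.54 A.
print(round(absolute_error(6.7, 6.54), 2))   # 0.16 (A)
```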

Relative Error (er )
The relative error of a measurement is the absolute error divided by the true value: it is defined as the ratio
of the absolute error ∆e to the true value At of the quantity being measured. You first need to determine the
absolute error in order to calculate the relative error. Relative error expresses how large the absolute error
is compared with the total size of the object you are measuring. It is expressed as a fraction, or is multiplied
by 100 and expressed as a percent:

Relative Error = er = ∆e / At

The relative error is usually expressed as a percentage of error (%). For example, a driver’s speedometer
says his car is going 60 miles per hour (mph) when it is actually going 62 mph. The absolute error of his
speedometer is 62 mph − 60 mph = 2 mph, and the relative error of the measurement is 2 mph / 60 mph =
0.033, or 3.3%. Relative error may be expressed in percent, parts per thousand, or parts per million,
depending on the magnitude of the result. To express the error as a percentage:

Percentage Error = % er = er × 100 = (∆e / At) × 100

Example 1: The current through a resistor is 2.5 A, but the measurement yields a value of 2.45 A. The
absolute error is ∆e = Am − At = 2.45 − 2.5 = −0.05 A. The relative error is er = ∆e / At = −0.05 / 2.5 =
−0.02, and the percentage relative error is % er = er × 100 = −2%.
Example 2:

If the exact mass of an object is 5.0 kg and you estimated the mass to be between 4.8 kg and 5.2 kg,
find the relative error and percentage error. Mass m = 5.0 kg, absolute error ∆m = 0.2 kg.

Relative error = ∆m / m = 0.2 / 5.0 = 0.04
Percentage Error = (∆m / m) × 100% = (0.2 / 5.0) × 100% = 4%
Example 3:

Find the percentage error of 3.16 ± 0.28 m.

Percentage Error = (absolute error / true value) × 100% = (0.28 / 3.16) × 100% ≈ 9%
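A short sketch tying the error measures together (the function names are ours, chosen for clarity):

```python
def relative_error(abs_err, true_value):
    """Relative error: the absolute error divided by the true value."""
    return abs_err / true_value

def percentage_error(abs_err, true_value):
    """Relative error expressed as a percent."""
    return relative_error(abs_err, true_value) * 100

# Example 2: m = 5.0 kg with absolute error 0.2 kg.
print(relative_error(0.2, 5.0))      # 0.04
print(percentage_error(0.2, 5.0))    # 4.0 (%)

# Example 3: 3.16 +/- 0.28 m.
print(round(percentage_error(0.28, 3.16)))   # 9 (%)
```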

1.6.7 Zero error


Zero error: the instrument does not read zero when the input is zero. Zero error is a type of bias error
that offsets all measurements taken by the instrument, but it can usually be corrected by some kind of
zero-offset adjustment. (Zero balance is a term used by manufacturers to indicate the maximum expected
zero error of their instrument.) Zero errors are caused by equipment that has not been correctly zeroed,
either because the device has no zero adjustment or because it is not correctly set to zero. They are special
examples of systematic error: the instrument gives a non-zero reading for a true zero value. For example,
an ammeter with a zero error might read −0.4 A when the current is actually 0 A. Note: a zero error is the
non-zero reading shown by an instrument when it is not measuring anything; it occurs when the measuring
instrument is not set on zero accurately.
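Correcting a zero error is a simple subtraction, since it offsets every reading by the same amount. A minimal sketch, using the ammeter figure from the text:

```python
def corrected(raw, zero_error):
    """Remove a constant zero error from a raw instrument reading."""
    return raw - zero_error

zero_error = -0.4          # ammeter shows -0.4 A at zero true current
print(corrected(-0.4, zero_error))   # 0.0 (the zero reading itself)
print(corrected(2.1, zero_error))    # 2.5 (a corrected measurement)
```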

1.7 Accuracy and Precision
A measurement is not very meaningful without an error estimate, and no measurement ever made is exact.
The accuracy (correctness) and precision (number of significant figures) of a measurement are always
limited by the apparatus used, the skill of the observer, the basic physics of the experiment, and the
experimental technique used. Often the concepts of accuracy and precision are used interchangeably, as if
they were synonymous; these two terms, however, have entirely different meanings. In physics, there are
two distinct and independent aspects of measurement related to uncertainties:

1.7.1 Accuracy:
• Accuracy is defined as the degree of closeness of a measured value to the true value of the quantity being
measured.
• Accuracy specifies the proximity of the mean value of a measurement to the true value.
• Accuracy is expressed in terms of either absolute or relative error.
• Accuracy refers to the closeness of a measured value to the true (standard or known) value. It describes
how well we eliminate systematic error. Example: if you measure the weight of a given substance as 3.2
kg, but the actual or known weight is 10 kg, then your measurement is not accurate, because it is not close
to the known value.

1.7.2 Precision:
• Precision is defined as the degree of similarity of repeated measurements; it refers to the random spread
of the measured values.
• Precision describes the agreement among several results obtained in the same way, i.e. the reproducibility
of measurements. It is a measure of the consistency or repeatability of measurements.
• Precision is a measure of how well a result can be determined (without reference to a theoretical or true
value). It is the degree of consistency and agreement among independent measurements of the same quan-
tity, and also the reliability or reproducibility of the result.
• Precision refers to the closeness of two or more measurements to each other, without reference to the
‘true’ value. It describes how well we suppress random errors. Example: if you weigh a given substance
five times and get 3.2 kg each time, then your measurement is very precise.

The concepts of precision and accuracy are demonstrated by the series of targets shown in Figure 1.3. The
target-shooting example illustrates the difference. The high precision, poor accuracy situation occurs when
the shooter places all the bullets in a tight group on the outer circle of the target plate, missing the bull’s
eye. In the second case, all bullets hit the bull’s eye and are spaced closely together, giving high accuracy
and high precision. In the third case the bullet holes are placed symmetrically with respect to the bull’s eye
but are spaced apart, yielding average accuracy but poor precision. In the last example, the bullets hit in a
random manner, hence poor accuracy and poor precision.

Figure 1.3: An illustration of accuracy and precision.

All experiments, no matter how meticulously planned and executed, have some degree of error or un-
certainty. In the laboratory, you should learn how to identify, correct, and evaluate sources of error in an
experiment, and how to express the accuracy and precision of measurements when collecting data or
reporting results.
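The distinction can be made concrete numerically: the mean of repeated measurements reflects accuracy (closeness to the known value), while the standard deviation reflects precision (mutual closeness). The data sets below are invented to mirror the weighing example.

```python
import statistics

true_value = 10.0   # known weight in kg

trials = {
    "precise but not accurate": [3.2, 3.2, 3.1, 3.2, 3.3],
    "accurate but not precise": [8.0, 12.1, 9.5, 10.4, 10.0],
    "accurate and precise":     [10.1, 9.9, 10.0, 10.1, 9.9],
}

for label, data in trials.items():
    mean = statistics.mean(data)     # compare with true_value for accuracy
    spread = statistics.stdev(data)  # small spread means high precision
    print(f"{label}: mean = {mean:.2f} kg, stdev = {spread:.2f} kg")
```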

1.8 Report Writing


Scientific laboratory reports are used for communicating information to peers, teachers as well as other
scientists within society. It is important that students be competent at writing a scientific report with the
proper format.

1.8.1 Structure of Physics Reports


Laboratory reports may be classified according to whether they are complete reports on a project, short
reports on one or more tests, or short reports on one or more techniques. The structure of laboratory reports
has evolved to serve the needs of the varied readership described in the previous section. The sections
required to complete your Physics lab reports should be written in the order listed below:
1. Title Page
2. Statement of Objective
3. Theory
4. Description of Experimental Setup/List of Equipment Used
5. Procedure
6. Data
7. Analysis of Data
8. Discussion of Results
9. Conclusions

10. References
The content of each of the sections in a laboratory report is described in the following pages. Most of the
descriptions are general enough to be valid for all reports; a few reflect the fact that these reports are being
prepared for a laboratory course.
1. Title page
The following information should appear on the title page:
• A brief but informative title that describes the report
• Your name
• Date(s) the experiment was performed
• Date the report was due
• Names of other group members who were present for the experiments
• Laboratory section number
• Name of the Teaching Assistant
2. Statement of Objective
State the objective(s) of the experiment concisely, in paragraph form. The laboratory manual or instruction
sheet will help here. The fact that experiments in laboratory courses are being used to educate students is
a secondary objective, and should not be stated in the report. In other words, the objective written in your
report should never be to “familiarize students with the use of equipment.” Rather, the objective should
state the problem that your procedure and data attempts to answer. Some key verbs that you will use in
the objective might include “to investigate,” “to plot,” “to measure,” or “to compare.” The section should
inform the reader precisely why the project was undertaken.
3. Theory
A concise description of the relevant theory should be provided when the theory is needed to understand
other parts of the report, such as the data analysis or discussion sections. This section is sometimes com-
bined with the introduction and background section, if this results in a more readable report. The relevant
equations should be introduced and all the terms to be used in the report should be defined. Equations must
be presented as parts of complete sentences. You will find examples of this later in this guide.
4. Description of Experimental Setup / List of Equipment Used
Provide a neat, correct and clear schematic drawing of the experimental set-up, showing all the intercon-
nections and interrelationships. Include a short textual description that refers to all parts of the schematic
drawing. This section should have all the information needed for a reader to duplicate the setup indepen-
dently. List all the equipment and materials used in the experiment. Include identifying marks (usually
serial numbers) of all equipment. This is a safeguard that allows you to trace faulty equipment at a later
date, if necessary. The reader must be able to connect each item in this section to the item in the Description
of Experimental Setup section.
5. Procedure
Detail the procedure used to carry out the experiment step-by-step. Sufficient information should be pro-
vided to allow the reader to repeat the experiment in an identical manner. Special procedures used to ensure
specific experimental conditions, or to maintain a desired accuracy in the information obtained should be
described. As with all sections of the report, the procedure describes what was done in the lab and should,
therefore, be written in the past tense. Copying the procedure from a lab manual would be an inaccurate
reflection of the work completed in the lab and is not acceptable.
6. Data
All the pertinent raw data obtained during the experiment are presented in this section. This section should
contain only raw information, not results from manipulation of data. If the latter need to be included in the
same table as the raw data in the interests of space or presentation style, the raw data should be identified
clearly as such. The type of data will vary according to the individual experiment and can include numbers,
sketches, images, photographs, etc. All numerical data should be tabulated carefully. Each table, figure and
graph in the report must have a caption or label and a number that is referenced in the written text. Variables
tabulated or plotted should be clearly identified by a symbol or name. Units, if any, should always be clearly
noted.
7. Analysis of Data
This section describes in textual form how the formulaic manipulation of the data was carried out and gives
the equations and procedures used. If more than one equation is used, all equations must carry sequential
identifying numbers that can be referenced elsewhere in the text. The final results of the data analysis are
reported in this section, using figures, graphs, tables or other convenient forms. The end result of the data
analysis should be information, usually in the form of tables, charts, graphs or other figures that can be
used to discuss the outcome of the experiment or project. This section must include statements about the
accuracy of the data, supported where necessary by an error analysis. Sample calculations, details of calcu-
lations, and error analyses should also be included.
8. Discussion of Results
This section is devoted to your interpretation of the outcome of the experiment or project. The informa-
tion from the data analysis is examined and explained. You should describe, analyze and explain (not just
restate) all your results. This section should answer the question “What do the data tell me?” Describe
any logical projections from the outcome, for instance, the need to repeat the experiments or to measure
certain variables differently. Assess the quality and accuracy of your procedure. Compare your results with
expected behavior, if such a comparison is useful or necessary, and explain any unexpected behavior.
9. Conclusions
Base all conclusions on your actual results. Explain the meaning of the experiment and the implications
of your results. Examine the outcome in the light of the stated objectives. This section should answer the
question “So what?” Seek to make conclusions in a broader context in the light of the results.
10. References
Using standard bibliographic format, cite all the published sources you consulted during the conduct of the
experiment and the preparation of your laboratory report. List the author(s), title of paper or book, name of
journal, or publisher as appropriate, page number(s) if appropriate and the date. If a source is included in
the list of references, it must also be referred to at the appropriate place(s) in the report.

Chapter 2

VECTORS

2.1 Definition of a Vector


A scalar is a quantity that is completely specified by a number and a unit. It has magnitude but no direction,
and scalars obey the rules of ordinary algebra. Examples: mass, length, temperature, energy, power, density,
time, volume, speed, etc. A vector is a quantity that is specified by both a magnitude (size) and a direction in
space. Vectors obey the laws of vector algebra. Examples: displacement, velocity, force, impulse, weight,
acceleration, momentum, etc.

2.2 Vector Representation


A. Algebraic Method
Vectors are represented algebraically by a letter (or symbol) with an arrow over its head (for example,
velocity by ~V and momentum by ~P). The magnitude of a vector is a positive scalar and is written as either
|A| or A.
B. Geometric Method
When dealing with vectors it is often useful to draw a picture (a line with an arrow). Here is how it is done:
1. Vectors are nothing but straight arrows drawn from one point to another.
2. The zero vector is just a vector of zero length – a point.
3. The length of a vector is its magnitude: the longer the arrow, the bigger the magnitude.
4. It is assumed that vectors can be parallel-transported around. If you attach the beginning of vector ~B to
the end of vector ~A, then the vector ~A + ~B is a straight arrow from the beginning of ~A to the end of ~B.
A vector changes if its magnitude or direction, or both, change. We add, subtract or equate physical
quantities only if they have the same units and the same character (all the terms on both sides of an equation
must be either scalars or vectors). A vector may be multiplied by a pure number or by a scalar. Multiplication
by a pure number merely changes the magnitude of the vector; if the number is negative, the direction is
reversed. When a vector is multiplied by a scalar, the new vector becomes a different physical quantity.
For example, when velocity, a vector, is multiplied by time, a scalar, we obtain a displacement.
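The closing remark (velocity × time = displacement) can be sketched with vectors as plain coordinate tuples; the helper functions are ours, written for illustration.

```python
def add(a, b):
    """Component-wise vector addition."""
    return tuple(x + y for x, y in zip(a, b))

def scale(k, a):
    """Multiply vector a by the scalar k."""
    return tuple(k * x for x in a)

v = (3.0, 4.0)       # velocity in m/s
t = 2.0              # time in s (a scalar)

print(scale(t, v))   # (6.0, 8.0): a displacement in metres
print(scale(-1, v))  # (-3.0, -4.0): same magnitude, reversed direction
```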

2.3 Types of vectors


2.3.1 Zero Vector
A vector whose initial and terminal points coincide is called a zero vector (or null vector), and is denoted
~0. A zero vector cannot be assigned a definite direction, as it has zero magnitude; alternatively, it may be
regarded as having any direction. The vectors ~AA and ~BB represent the zero vector.

2.3.2 The Position Vector

2.3.3 Unit Vector
A vector whose magnitude is unity (i.e., 1 unit) is called a unit vector. The unit vector in the direction of a
given vector ~A is denoted by Â.

2.3.4 Collinear vector


Collinear Vectors are those vectors that act either along the same line or along parallel lines. These vectors
may act either in the same direction or in opposite directions. If two collinear vectors A and B act in the
same direction, then the angle between them is 0°, as shown in the figure given below. When vectors act
along the same direction, they are called Parallel Vectors.

2.3.5 Co-initial and unit vectors

2.3.6 Coplanar vector


The vectors which lie in the same plane are called coplanar vectors, as shown in Fig. 2.

2.4 The Algebra of Vectors


Vector algebra: the operations of addition, subtraction and multiplication, familiar in the algebra of num-
bers (scalars), can be extended to vectors with suitable definitions. The following definitions are fundamental.
1. Two vectors A and B are equal if they have the same magnitude and direction regardless of the position
of their initial points. (A=B)
2. A vector having direction opposite to that of vector A but having the same magnitude is denoted by -A .

3. The sum or resultant of vectors A and B is a vector C formed by placing the initial point of B on the
terminal point of A and then joining the initial point of A to the terminal point of B. This sum is written C
= A+B.
4. The difference of vectors A and B is represented by A−B; equivalently, A−B can be defined as the sum
A + (−B). If A = B, then A−B is the null or zero vector and is represented by the symbol 0.
5. The product of a vector A by a scalar m is a vector mA with magnitude |m|times the magnitude of A and
with direction the same as or opposite to that of A, according as m is positive or negative. If m=0, mA is
the null vector.
Some rules, results and properties are collectively referred to as the “algebra of vectors”. These include
the commutative, associative, and distributive properties, among others. Some of the properties are as shown.
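The commutative, associative and distributive properties mentioned above can be verified numerically for sample integer vectors (any vectors would do):

```python
def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(k, a):
    return tuple(k * x for x in a)

A, B, C = (1, 2), (3, -1), (-2, 5)
m = 3

assert add(A, B) == add(B, A)                                # A + B = B + A
assert add(add(A, B), C) == add(A, add(B, C))                # (A+B)+C = A+(B+C)
assert scale(m, add(A, B)) == add(scale(m, A), scale(m, B))  # m(A+B) = mA + mB
print("commutative, associative and distributive laws hold")
```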

2.5 Vectors: Composition and Resolution


2.5.1 Adding Vectors: Graphical Method
Graphically, vectors can be added by joining them head to tail, in any order; their resultant is the vector
drawn from the tail of the first vector to the head of the last vector.
Composition is the process of replacing a vector system by its resultant.
The vector sum ~C of two vectors ~A and ~B is denoted ~A + ~B and is called the sum or resultant of the
two vectors, so ~C = ~A + ~B.
A single vector obtained by adding two or more vectors is called the resultant vector, and it is obtained
using the following three methods.

A. Triangle Method of Addition

B. Polygon Method of vector addition

In Figure 1, the graphical technique of vector addition is applied to add three vectors. The resultant vector
R = A + B + C is the vector that completes the polygon; in other words, R is the vector drawn from the tail
of the first vector to the tip of the last vector.

C. Parallelogram Method of vector addition


The parallelogram law states that the resultant R of two vectors A and B is the diagonal of the parallelogram
for which the two vectors A and B are adjacent sides. All three vectors A, B and R are concurrent, as shown
in Figure 2; A and B are also called the components of R. The magnitude of the diagonal (the resultant
vector) is obtained using the cosine law, and its direction (i.e. the angle that the diagonal vector makes with
the sides) is obtained using the sine law. Applying the cosine and sine laws to the triangle formed by the two
vectors:

According to the parallelogram law of vector addition: if two vectors are considered to be the adjacent
sides of a parallelogram, then the resultant of the two vectors is given by the diagonal passing through the
common point of the two vectors. Consider two vectors ~A and ~B inclined at an angle θ, as shown; let their
resultant be ~R.
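Assuming the usual cosine-law and sine-law forms (R = √(A² + B² + 2AB cos θ) for the magnitude, and sin α = B sin θ / R for the angle α between ~R and ~A), the construction can be sketched as:

```python
import math

def parallelogram_resultant(A, B, theta_deg):
    """Magnitude of R = A + B and its angle from A, for vectors of
    magnitudes A and B inclined at angle theta (parallelogram law)."""
    theta = math.radians(theta_deg)
    R = math.sqrt(A**2 + B**2 + 2 * A * B * math.cos(theta))
    alpha = math.degrees(math.asin(B * math.sin(theta) / R))
    return R, alpha

R, alpha = parallelogram_resultant(3.0, 4.0, 90.0)
print(round(R, 2), round(alpha, 2))    # 5.0 53.13
```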

2.5.2 Subtracting Vectors : Graphical Method
Vector Subtraction: To subtract vector B from vector A, add the first vector to the negative of the second:
A − B = A + (−B). The negative of the second vector is obtained by reversing its direction.

2.6 Adding Vectors : Component Method


2.6.1 Components of Vectors and Unit Vectors
Resolution is the process of replacing a single vector by its components.
If a vector A lies in the x–y plane, it may be resolved into two rectangular components. The component of
the vector parallel to the x-axis is called the horizontal component (Ax ), and the component parallel to the
y-axis is called the vertical component (Ay ). Considering Figure 2.1 below, the components of a given
vector A are obtained by applying the trigonometric sine and cosine functions.

Figure 2.1: Components of vector A

Ex: A car travels 20.0 km due north and then 35.0 km in a direction 60° west of north. Find the
magnitude and direction of the car’s resultant displacement. Soln :
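The solution follows the component method: resolve each leg along east (x) and north (y), add the components, then recover the magnitude and direction. A sketch of that calculation (the axis convention is our choice):

```python
import math

# Leg 1: 20.0 km due north.  Leg 2: 35.0 km at 60 deg west of north.
x1, y1 = 0.0, 20.0                         # east = +x, north = +y
x2 = -35.0 * math.sin(math.radians(60.0))  # westward, hence negative x
y2 = 35.0 * math.cos(math.radians(60.0))

Rx, Ry = x1 + x2, y1 + y2
R = math.hypot(Rx, Ry)                     # magnitude of the resultant
phi = math.degrees(math.atan2(-Rx, Ry))    # angle west of north

print(f"R = {R:.1f} km at {phi:.1f} deg west of north")
```

This gives roughly 48.2 km at about 39° west of north.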

2.7 Unit Vector
A unit vector is a vector having unit magnitude. If A is a vector with magnitude A ≠ 0, then A/A is a unit
vector having the same direction as A. Any vector A can be represented by the unit vector Â in the direction
of A multiplied by the magnitude of A; in symbols, A = A Â. Since the unit vector has unit magnitude,
|Â| = 1.
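Normalising a vector by its magnitude, as in A = A Â, can be sketched as:

```python
import math

def unit_vector(a):
    """Return A/|A|: the unit vector in the direction of A (A nonzero)."""
    mag = math.sqrt(sum(x * x for x in a))
    return tuple(x / mag for x in a)

u = unit_vector((3.0, 4.0))
print(u)    # (0.6, 0.8), a vector of magnitude 1 along (3, 4)
```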

2.7.1 Vector addition in Unit Vector Notation
Adding vectors that are expressed in unit vector notation is easy in that individual unit vectors appearing in
each of two or more terms can be factored out. The concept is best illustrated by means of an example.

2.7.2 Orthogonal unit vectors

2.7.3 Finding a Unit Vector

Problems

2.8 Multiplication of Vectors


2.8.1 Scalar product or dot product of two vectors
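The section's worked figures are not reproduced here, but the component form of the scalar product (A·B = AxBx + AyBy + AzBz, a scalar equal to |A||B| cos θ) can be sketched as follows, with the helper function ours:

```python
import math

def dot(a, b):
    """Scalar (dot) product of two vectors given by components."""
    return sum(x * y for x, y in zip(a, b))

A = (1.0, 2.0, 3.0)
B = (4.0, -5.0, 6.0)
print(dot(A, B))    # 4 - 10 + 18 = 12.0

# The dot product also yields the angle between the vectors:
cos_theta = dot(A, B) / (math.sqrt(dot(A, A)) * math.sqrt(dot(B, B)))
print(round(math.degrees(math.acos(cos_theta)), 1))
```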

2.8.2 Vector product or cross product of two vectors
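The figures for this section are not reproduced here; the component (determinant) form of the vector product can be sketched as follows, with the helper functions ours. The result is a vector perpendicular to both factors.

```python
def cross(a, b):
    """Vector (cross) product A x B for 3-component vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

A = (2.0, 3.0, 4.0)
B = (5.0, 6.0, 7.0)
C = cross(A, B)
print(C)                      # (-3.0, 6.0, -3.0)
print(dot(C, A), dot(C, B))   # 0.0 0.0: perpendicular to both A and B
```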

2.8.3 Cross product between the pair of unlike unit vectors
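The standard results for the orthogonal unit vectors (î × ĵ = k̂ and cyclic permutations, with the sign reversed when the order is swapped) can be checked directly; the `cross` helper below is ours.

```python
def cross(a, b):
    """Cross product for 3-component vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

i, j, k = (1, 0, 0), (0, 1, 0), (0, 0, 1)

assert cross(i, j) == k and cross(j, i) == (0, 0, -1)   # i x j =  k
assert cross(j, k) == i and cross(k, j) == (-1, 0, 0)   # j x k =  i
assert cross(k, i) == j and cross(i, k) == (0, -1, 0)   # k x i =  j
assert cross(i, i) == (0, 0, 0)                         # like vectors give 0
print("unit-vector cross products verified")
```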
