2
THE STANDARD DEVIATION
“I may venture to say that his observations have stretched much farther…”
-- Sense and Sensibility
2.1 INTRODUCTION
It is sometimes said that “physics is an exact science.” But physics is based on experiment,
and experiment on measurement, and measurement is inexact by its very nature. Imperfect (or
simply finite) instruments, necessary approximations, and inevitable background noise make
all measured values impossible to pin down exactly. Even so, clever experiments that coax even a
very uncertain but crucial measurement from a sea of noise can lead to great advances in physics. A
recent article in a science magazine trumpeted the fact that, with the help of data from the Hubble
Space Telescope, the various scientific groups measuring the Hubble Constant (a quantity whose
value determines the age and fate of the universe) were now getting results that agreed to within
roughly 10% of each other. This was considered great news, partly because it was a dramatic im-
provement over the situation just a few years ago (when measurements disagreed by more than a
factor of two), but partly because the age of the universe computed from this quantity was finally
settling down toward values that were consistent with other physical estimates of the age of the
universe, soothing physicists’ worries that whole areas of physics might be poorly understood.
The point is that, while the greatest precision is always desirable, even inexact measurements
can lead to important scientific progress. But a crucial part of making good scientific use of inexact
results is being able to quantify how inexact (the technical word is uncertain) a measurement is.
We can draw scientific conclusions from an inexact value only if we know something about how
sharp or fuzzy the value is.
One of the major goals of this laboratory program is to teach you how to extract the maxi-
mum possible scientific meaning from uncertain measurement results. The next chapter of this ref-
erence manual will begin discussing the meaning of the concept of uncertainty in the context of
measured quantities. The purpose of this chapter is to lay some mathematical foundations for that
discussion by exploring ways that we might quantify the spread in a set of values that represent
imperfect measurements of the same quantity.
One simple way to quantify the spread in a set of values is its range, the difference between the largest and smallest values in the set. But the range depends only on the two most extreme measurements, which are precisely the values most likely to reflect unusually large random errors, and are not representative of the whole by their very nature. One might imagine one data
set where the values are mostly very closely clustered around some central value with only a hand-
ful of wildly different values at the extremes, and another data set having the same range whose
values are fairly evenly spread out between the extremes. Would we really want to quantify the
“spread” in these two data sets by the same number?
A more meaningful way to quantify the spread is to compute the average deviation of the
data points from the data set’s mean (= average) value. If we use the symbols $x_1, x_2, x_3, \ldots$ to represent the first, the second, the third, etc. measurement values in our set of N values, and the symbol $\bar{x}$ to stand for the mean of those values, then the average deviation is defined to be:
$$\text{average deviation} \equiv \frac{1}{N}\Big[\,|x_1-\bar{x}| + |x_2-\bar{x}| + |x_3-\bar{x}| + \cdots + |x_N-\bar{x}|\,\Big] \qquad (2.1)$$
You can see that this definition gives you exactly what the name implies, the average (summed
over the entire data set) of the deviations of the measurements (which for a given measurement is
the absolute value of how far that measurement is from the mean). Note that the absolute value is
essential in this expression: since generally a given measurement is as likely to be above the mean
as below it, if we did not take the absolute value of each difference, they would sum to zero (in fact
the definition of the mean implies that if we remove the absolute value symbols in equation 2.1, the
sum would be exactly zero, as you can check).
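Here is that check worked out: by the definition of the mean, $N\bar{x} = x_1 + x_2 + \cdots + x_N$, so the signed deviations always cancel exactly:

$$\sum_{i=1}^{N}(x_i - \bar{x}) = \left(\sum_{i=1}^{N} x_i\right) - N\bar{x} = N\bar{x} - N\bar{x} = 0$$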
The average deviation, in contrast to the range, nicely takes each measurement value equally
into account in its value, and thus is probably a better representation of the spread in a data set. It
also has an easily understood meaning. However, it turns out that we often want to do some calcu-
lus with the quantity we use to characterize the spread in data. The absolute value presents a minor
complication in doing calculus with the average deviation, because the derivative of $|x|$ is unde-
fined when x = 0.
The standard deviation does not suffer from this problem. If we use the same symbols that
we used in equation 2.1, we can write the standard deviation as follows:
$$\text{standard deviation} \equiv s \equiv \sqrt{\frac{1}{N-1}\Big[(x_1-\bar{x})^2 + (x_2-\bar{x})^2 + (x_3-\bar{x})^2 + \cdots + (x_N-\bar{x})^2\Big]} \qquad (2.2)$$
If we ignore for a moment the fact that we are dividing by N–1 instead of N, the standard deviation
is thus the square root of the average squared deviation. Dividing by N–1 instead of N is im-
portant for deep mathematical reasons beyond our level here, but note that for a data set consisting
of a single measurement, the average deviation states that the “spread” in this data set is zero (sug-
gesting that we know the measurement value perfectly), whereas the standard deviation states that
the spread is undefined ($0/0$), which more meaningfully suggests that we know nothing about
the spread in a data set when that set only has one value.
Equation 2.2, though it superficially looks more complicated than that for the average devia-
tion, is actually simpler in a number of respects. Doing calculus with this expression is easier be-
cause squares and square roots are easier to deal with mathematically than the absolute value. Since
many calculators are specially set up to calculate the standard deviation with a few button presses,
it is generally much easier to calculate than the average deviation, and is really no more difficult to
calculate even if one is reduced to using equations 2.1 and 2.2 directly. However the main reason
that the scientific community typically uses the standard deviation (rather than the range or the
average deviation) to characterize the spread in the values of a data set is that it has an important
property that the range and average deviation do not have.
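If you would rather use a computer than a calculator, the following short Python sketch (ours, not part of the manual's calculator instructions; the function names are our own) evaluates equations 2.1 and 2.2 directly:

```python
import math

def mean(data):
    return sum(data) / len(data)

def average_deviation(data):
    # Equation 2.1: the mean of the absolute deviations from the mean.
    m = mean(data)
    return sum(abs(x - m) for x in data) / len(data)

def standard_deviation(data):
    # Equation 2.2: note the division by N - 1, not N.
    m = mean(data)
    return math.sqrt(sum((x - m) ** 2 for x in data) / (len(data) - 1))

data = [2.0, 4.0, 6.0]            # the test data set used in section 2.3
print(average_deviation(data))    # 1.333...
print(standard_deviation(data))   # 2.0, as section 2.3 says it should be
```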
Imagine that we model each of the N measurements in our data set as being what happens
when we add tiny random errors from a fairly large number of sources to the measured quantity’s
“true value” (whatever that might be). This “random error” measurement model is only a model of
the measurement process (and a pretty simplified one at that), but it does seem to be useful and rea-
sonably accurate in many cases. When this model does adequately describe a measurement pro-
cess, then we find that if we plot the probability of getting a certain measurement value versus that
value, we get the famous “bell-shaped curve” (which indicates that measurement values close to the
quantity’s “true value” are common and extreme values are rare). A wide variety of measurement
processes tend to yield data sets that are distributed this way.
So here is the payoff. If the “random error” measurement model is a good model for the
measurement process giving rise to a given data set, then it can be shown mathematically that the
value one gets by computing the standard deviation for this data set is fairly independent of N. This
means that in the context of this “random error” measurement model, the standard deviation better
isolates the effect of the measurement process itself on the spread in the data from effects that re-
sult from including more or fewer data points in the data set. In contrast, the range of a data set
typically increases as N increases (because the more measurements we take, the more likely we
are, by chance, to encounter some really extreme measurement results), while the average devia-
tion tends (more subtly and for more subtle reasons) to decrease with increasing N. It is this spe-
cial relationship between the standard deviation and the most common and useful mathematical
model for the measurement process that makes the standard deviation the most widely accepted
measure of the spread of the values in a data set.
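A quick numerical experiment makes this concrete. The following Python sketch (ours; the parameters are invented for illustration) models each measurement as a true value plus the sum of many small random errors, and shows that the range grows with N while the standard deviation stays roughly put:

```python
import random
import statistics

def simulated_measurement(true_value=8.50, n_sources=20, error_scale=0.01):
    # "Random error" model: the true value plus many small random perturbations.
    return true_value + sum(random.uniform(-error_scale, error_scale)
                            for _ in range(n_sources))

random.seed(1)  # make the run reproducible
for n in (5, 50, 500):
    data = [simulated_measurement() for _ in range(n)]
    spread = max(data) - min(data)    # the range: tends to grow with N
    s = statistics.stdev(data)        # the standard deviation: roughly stable
    print(f"N = {n:3d}:  range = {spread:.3f},  s = {s:.3f}")
```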
On a TI-82 or TI-83 calculator, select the STAT key, use the arrow keys to get to EDIT if
you aren’t there already, and enter your data in one of the data lists (L1, L2, L3, …). Then press
the STAT key again, use the right-pointing arrow key to choose CALC, the down-arrow key (if
necessary) to get to 1-Var Stats, and press ENTER to get the mean and standard deviation, as well
as some sums used to calculate their values.
You should look in the instruction booklet that came with your calculator to determine how to
calculate the standard deviation on your calculator. (If you don’t have your booklet, do some ex-
periments, ask a friend, or check your calculator manufacturer’s web site. If you come a bit early
to lab, your lab instructor or a lab assistant may be able to help you.) To test that you are actually
calculating the standard deviation as defined by equation 2.2, note that the standard deviation of the
data set 2, 4, 6 should be 2.
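For the record, here is that check worked out by hand: the mean of 2, 4, 6 is 4, so equation 2.2 gives

$$s = \sqrt{\frac{(2-4)^2 + (4-4)^2 + (6-4)^2}{3-1}} = \sqrt{\frac{8}{2}} = 2$$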
EXERCISES
Exercise 2.1
Compute the standard deviation of the following data set using the direct method (the first
method outlined in section 2.3). Show your work in the space below (be sure to keep track of
units!). Then figure out the particular short-cut approach that works on your calculator and use
it to check your work.
0.56 s
0.52 s
0.59 s
0.48 s
0.51 s
Exercise 2.2
Show your interviewer that you can calculate the standard deviation of the data set 2, 4, 6 using
whatever short-cut approach works on your calculator.
3
EXPERIMENTAL UNCERTAINTY
“ ‘I am no matchmaker, as you well know,’ said Lady Russell, ‘being much too
aware of the uncertainty of all human events and calculations.’”
--- Persuasion
cal value. On the other hand, if the uncertainty in your result is ±0.1 s, then it is not very likely
(less than one chance in 20) that the true value behind your measurement is the same as the predict-
ed value, meaning that the theory is probably wrong. What a measurement means, therefore, can
depend crucially on its uncertainty!
In both of the cases described above, these rules are meant to represent the minimum possible
uncertainties for a measured value. Other effects might conspire to make measurements more un-
certain than the limits given (as we shall see), but there is nothing that one can do to make the un-
certainties smaller, short of buying a new device with a finer scale or more digits.
Suppose we make 25 measurements of the length of a can and plot how often each measurement value occurs. We would get a graph looking something like the graph shown in Figure 3.1. Note that the range of measurement values has been divided into “bins”, each 0.02 cm wide, so, for example, measurement values of 8.50 cm and 8.51 cm would both be counted as being in the central bin. (The purpose of grouping values into bins like this is to show more clearly the characteristic shape of the distribution: a graph where each bin was only 0.01 cm wide would be flatter and less revealing.) A graph of this type is generally called a histogram.

[Figure 3.1: Distribution of 25 measurements of the length of a can. Frequency of occurrence is plotted against length of can (cm), in bins 0.02 cm wide running from 8.42 cm to 8.60 cm.]

This graph roughly sketches what is often called a “bell-shaped curve.” If
we were to plot 100 or 1000 measurements on this graph instead of just 25, the curve would be
more smooth, symmetrical, and bell-like. Measurement values subject to random effects are almost
always distributed in such a pattern. In fact, it is possible to show that a bell-shaped distribution of
values having specific and well-defined characteristics is the mathematical consequence of perturb-
ing effects that are truly random in nature and continuously variable in size. We call the specific
bell-shaped distribution of values caused by such random influences a normal distribution.
Simply by looking at this graph, we can make a rough estimate of the uncertainty of any indi-
vidual measurement value. The definition of “uncertainty” that we have adopted implies that the
uncertainty range should enclose the true value (in this case 8.5129... cm) about 19 out of 20
times. In the case shown above, a range of ±0.06 cm attached to any of the measurements would
include the true value, except for the one case in the rightmost bin. One out of 25 is roughly equal
to one out of 20, so ±0.06 cm would be a reasonably good estimate of the uncertainty of a given
measurement in this (hypothetical) case.
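If you would like to draw a histogram like Figure 3.1 by computer, here is a minimal matplotlib sketch (ours, with invented length values standing in for real data):

```python
import matplotlib.pyplot as plt

# Hypothetical set of 25 length measurements (cm); substitute your own data.
lengths = [8.47, 8.49, 8.50, 8.50, 8.51, 8.51, 8.52, 8.48, 8.53, 8.50,
           8.46, 8.52, 8.49, 8.51, 8.54, 8.50, 8.48, 8.55, 8.51, 8.49,
           8.47, 8.53, 8.50, 8.52, 8.59]

# Bin edges every 0.02 cm from 8.42 cm to 8.60 cm, as in Figure 3.1.
edges = [8.42 + 0.02 * i for i in range(10)]

plt.hist(lengths, bins=edges, edgecolor="black")
plt.xlabel("Length of can (cm)")
plt.ylabel("Frequency of occurrence")
plt.title("Distribution of 25 measurements of the length of a can")
plt.show()
```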
$$s \equiv \sqrt{\frac{1}{N-1}\Big[(x_1-\bar{x})^2 + (x_2-\bar{x})^2 + \cdots + (x_N-\bar{x})^2\Big]} \qquad (3.1)$$
We also discussed the mathematical fact that s for a population of measurements whose distribution can be modeled by a normal distribution (in the sense that the phrase is used in the last section) is fairly independent of the number of measurements N in the set.
Let the symbol $x_i$ stand for an arbitrary “i-th” measurement in our set. Recall that we have defined the uncertainty U of any measurement $x_i$ to be the value such that we are 95% confident that the “true value” of the measured quantity lies within the range $x_i \pm U$. If we have happened to take a large number of measurements of this quantity, our otherwise somewhat subjective “95% confidence” can be given a directly quantitative meaning: the measurement’s true value (which should correspond to the value at the peak of the bell curve) should lie within the range $x_i \pm U$ for
95% of the measurements $x_i$. Given a set of measurement values, then, we can use this criterion to
determine the value of U. The only problem is that we need hundreds (if not thousands) of meas-
urements to accurately estimate U this way (to accurately determine U, N must be large enough that
the number of measurements in the 5% that exclude the true value is more than just a handful).
Fortunately, mathematicians have shown that it is possible to accurately estimate the value of U that
would have this property for a very large set of measurements from a much smaller set. The uncer-
tainty U of any given single measurement can be estimated using the standard deviation of a small
set of similar measurements as follows:
$$U \approx t\,s \qquad \text{(uncertainty of a single measurement)} \qquad (3.2)$$
where t is the so-called Student t-factor, a number that depends somewhat on N, the number of
measurements in the set used to calculate s. A table of t-values as a function of N is shown below.
TABLE 3.1: STUDENT t-VALUES

 N    t-value       N     t-value
 2     12.7        10      2.26
 3      4.3        12      2.2
 4      3.2        15      2.15
 5      2.8        20      2.09
 6      2.6        30      2.05
 7      2.5        50      2.01
 8      2.4       100      1.98
 9      2.3         ∞      1.97

While uncertainties are generally accurate only to one significant digit, this table states values to two or three significant digits to show clearly the difference between adjacent values. Note that for N > 30, the t-value is within a few percent of being 2.0: for this reason, some books will tell you that the 95% confidence range for a given measurement $x_i$ is simply $x_i \pm 2s$. However, this is not a good estimate of that range for the small values of N that we will commonly encounter. In using the table, you should also keep in mind that it is really only valid for measure-
ments that are randomly distributed. Specifically,
if the uncertainty of your measurement is limited
by the precision of the apparatus rather than random effects, you should not use equation 3.2: you
should instead use one of the strategies outlined in section 3.3.
Please note that equation 3.2 estimates the uncertainty of any given single measurement in the
set. As we’ll see later, though, if we have already bothered to take a set of measurements required
to determine s, we might as well compute the mean of the set, which is a better estimate of the
measurement’s true value (that is, it has a smaller uncertainty) than any arbitrarily chosen single
measurement is.
Note in addition that the table is telling you indirectly something about the number of meas-
urements you need to get a good estimate of the uncertainty. In particular, two measurements are
not enough! Three measurements are a bare minimum, and five is a good compromise between
getting a good estimate of the uncertainty and spending too much time on a single measurement.
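To see how the pieces fit together, here is a short Python sketch (ours, not the program whose output is discussed in the following paragraphs) that estimates the uncertainty of a single measurement using equation 3.2 and Table 3.1; the sample times are invented:

```python
import math

# Student t-values from Table 3.1, indexed by N (number of measurements).
T_VALUES = {2: 12.7, 3: 4.3, 4: 3.2, 5: 2.8, 6: 2.6, 7: 2.5, 8: 2.4,
            9: 2.3, 10: 2.26, 12: 2.2, 15: 2.15, 20: 2.09, 30: 2.05,
            50: 2.01, 100: 1.98}

def single_measurement_uncertainty(data):
    """Estimate U = t*s (equation 3.2) for any one measurement in data."""
    n = len(data)
    m = sum(data) / n
    s = math.sqrt(sum((x - m) ** 2 for x in data) / (n - 1))  # equation 3.1
    return T_VALUES[n] * s        # assumes n appears as an entry in the table

times = [9.81, 9.79, 9.84, 9.78, 9.83]        # hypothetical repeated timings (s)
print(single_measurement_uncertainty(times))  # about 0.07 s
```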
Ignore the “Uncertainty of the Mean” entry for now: we will talk about this in Chapter 11.
Note that this program simply does what you already know how to do: it computes the stan-
dard deviation using equation 2.2 (as you can easily check), and then multiplies by the appro-
priate Student t-factor from Table 3.1 to get the “uncertainty of the data” (as you can also check.)
Note also that this simple program, like your calculator, is too dumb to round the quantities
to the appropriate number of significant digits. Remembering that uncertainties are generally good
to two significant digits at best, you should read the standard deviation as being 1.6 and the uncer-
tainty of the data as being ±4.4.
EXERCISES
Exercise 3.1
A standard household thermometer has one mark for every two °F. What is the minimum un-
certainty that you should assign to the temperature that you read from such a thermometer?
What do you think is the best uncertainty to assign to this reading? Do you think it should be larger than this minimum? Explain your reasoning (there is no absolute right answer to this last question).
Exercise 3.2
According to your digital bedside clock, it took you exactly 12 minutes to dress for class some
morning. What uncertainty should you assign to this result? Explain your reasoning.
3. Experimental Uncertainty 27
Exercise 3.3
Imagine that you are one of ten different people who measure the time of flight of a thrown baseball. Assume that these ten measured times are as listed below and to the left. On the grid
below and to the right, plot a histogram of this data. (Choose a “bin” size that displays this data
as a pseudo bell-curve rather than scattered data or only one or two columns.)
2.53 s 2.58 s
2.67 s 2.63 s
2.59 s 2.60 s
2.62 s 2.56 s
2.66 s 2.61 s
Exercise 3.4
Compute the standard deviation of the data from the previous problem, and estimate the uncer-
tainty of your particular measurement.
Exercise 3.5
If you have access to a copy of RepDat, check that the results it yields for the data in Exercise
3.3 are consistent with the results you calculated by hand in exercise 3.4.
4
PRESENTING DATA GRAPHICALLY
4.1 INTRODUCTION
“Draw a picture!” is an important general principle in explaining things. Frank Churchill’s re-
mark to Emma Woodhouse notwithstanding, “the art of giving pictures in a few words” is not
nearly as useful as a good diagram or graph, because most people process visual information much
more quickly than information in other forms. Graphing your data shows relationships much more
clearly and quickly, both to you and your reader, than presenting the same information in a table.
Typically you use two levels of graphing in the lab. A graph that appears in your final report
is a higher-level graph. Such a graph should be done very neatly, following all the presentation
guidelines listed at the end of this chapter. It's made primarily for the benefit of the person reading
your report.
Lower-level graphs are rough graphs you make for your own benefit; they're the ones the
lab assistants will hound you to construct. These lower-level graphs tell you when you need to take
more data or check a data point, since any strange measurements really stand out in a graph.
They're most useful when you make them in time to act in response to what you see. This means
that you should graph your data roughly before you leave the lab so you still have the chance to
make more measurements. (That's one reason we recommend that you leave every other sheet in
your lab notebook free, so you can use that blank sheet to graph your data.) In graphing your data
in the lab, you don't need to be too fussy about taking up the whole page or making the divisions
nice, but you should label the axes and title the graph to remind yourself later what it shows.
Graphing your data right after you have completed a set of measurements also flags regions
in your data range where you should take more data. Typically people take approximately evenly
spaced data points over the entire range of the controllable variable (the “independent” variable),
which is certainly a good way to start. A graph of that “survey” data will tell you if there are re-
gions where you should look more closely: regions where your graph is changing rapidly, going
through a minimum or maximum, or changing curvature, for example. The graph helps you identi-
fy interesting sections where you should get more data, and saves you from taking lots of data in
regions where little is happening.
Graphing each point as you take it, though, is not a good idea. Doing so is inefficient and,
worse, can prejudice you about the value of the next data point. So take five or six data points and
then graph them all.
For example, Figure 4.1 on the next page shows the original data taken on a phenomenon
called mechanical resonance. All you need to know about resonance for our purposes right now is
that the “amplitude” (a measure of the response of an oscillating system) depends on the frequency
at which that system is perturbed (or “driven”) by an external oscillating force.
Notice that the experimenter initially chose driving frequencies in the first run that were ap-
proximately evenly spaced across the range shown in the graph. For this particular apparatus, the
highest and lowest frequencies attainable with the equipment are easy to find, and the experimenter
chose to space the frequencies evenly to get roughly 10 different frequencies over the range in fre-
quencies. You can see from the graph of the original data that the response doesn't change very
much at either very high or very low frequencies, but near some intermediate frequency, between 4
and 6 cycles/s, something strange and interesting happens.
[Figure 4.1: “Mechanical Resonance” — first set of data (black dots) for amplitude response versus driving frequency in an oscillating system; amplitude (cm, 0 to 50) is plotted against frequency (cycles per second, 0 to 10). Notice that the data points are evenly spaced.]
[Figure 4.2: “Mechanical Resonance” — amplitude vs. driving frequency in a resonant system, after adding more data points (white dots) between 4.0 and 6.0 cycles per second.]
The experimenter noticed this, too, and went back to take more data in the interesting range
of frequencies. The frequency spacing used in the second round is smaller than used in the first set
by about a factor of ten, yielding 15 more measurements in the critical region. The result of adding
the second set of measurements is shown in Figure 4.2. As you can see, the shape of the graph is
now much better defined. Furthermore, the new data show that the anomalously high amplitude at
4.5 cycles/s is not a mistake (as one might think considering the other values). The experimenter
could, of course, have taken data with the closer spacing over the entire frequency range, but that
would waste time on measurements at both low and high frequencies where nothing much is hap-
pening. The strategy of taking coarsely spaced data and then backing up to take more data in inter-
esting regions is a good compromise between completeness and efficiency. But remember that you
usually can’t identify the “interesting” regions if you don't graph your data to begin with!
Again, you can call either point $(x_1, y_1)$ as long as they both lie on the line. Since the y-intercept is
defined as the value of y where a line intersects the y-axis (defined to be the x = 0 line), you can
also read the intercept directly off the graph as long as the graph shows the x = 0 line.
The graph in the second sample lab notebook in section 1.5 illustrates the analysis of a lower-
level graph. Note the use of ×’s to mark the points used to compute the slope.
2. Scale your axes to create as large a graph as possible consistent with the constraint that the
divisions on the axes correspond to some nice interval like 1, 2, or 5 (times some power of
10). If you must make the graph smaller than full size to get nice intervals, OK, but check
that you've picked the interval that gives you the largest possible graph (which will display
your data in as much detail as possible). When using log-log or semi-log paper, choose paper
with the number of cycles that gives the largest possible graph (see Chapters 5 and 13).
3. The lower left-hand corner need not be the point (0,0). Choose the range of values for each
axis to be just wide enough to display all the data. If (0,0) does not appear on a hand-drawn
graph, it is customary to mark the break in the axis or axes with two wavy lines (≈).
4. Mark the scale of each axis along each axis for the entire length of the axis.
5. Label both axes, identifying the quantity being plotted on each axis and the units being used.
6. Give each graph a title that summarizes the information contained in the axes and provides
any additional information needed to distinguish this graph from other graphs in the report.
7. Give each graph a number (e.g., "Figure 2"), which you use in the body of the report or
summary to refer quickly to the graph. (You can write such a number on a computer graph.)
8. Draw points and uncertainty bars as discussed in section 4.3.
9. If you calculate the slope and intercept of the graph from two points (rather than using the
method of linear regression described in chapter 10), indicate the two points you used on the
graph. Draw the line through the two points, label it "Best-fit line" (or something similar),
and give its slope and intercept on the graph in some large clear space.
Draw what you think is the best possible line through the data points in the graph you just
created in the last problem, and find the slope and intercept of this line.
5
POWER-LAW FITTING AND LOG-LOG GRAPHS
“She had taken up the idea, she supposed, and made everything bend to it.”
--- Emma
The value of the intercept (which is the value of $v = \log y$ when $u = \log x = 0$) is $\log k$, so if we
can find the intercept and its uncertainty, we can find k and its uncertainty.
In summary, we can take any relationship of the form given in equation 5.1, take the loga-
rithm of both sides, and convert it to a linear relationship whose slope and intercept are related to
the unknown values of n and k. This is, therefore, a very powerful way of learning about un-
known power-law relationships (and displaying them). A graph that plots log y versus log x in
order to linearize a power-law relationship is called a log-log graph.
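As a sketch of this technique in code (ours; the data below are synthetic, generated with k = 6.6 and n = 1/2 to echo the mass-and-period example that follows), one can recover n and k from a straight-line fit to the logs:

```python
import numpy as np

# Synthetic data assumed to follow the power law y = k * x**n.
x = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
y = 6.6 * x ** 0.5                       # k = 6.6, n = 1/2 by construction

u, v = np.log10(x), np.log10(y)          # v = n*u + log10(k): a straight line
n, log_k = np.polyfit(u, v, 1)           # slope = n, intercept = log10(k)

print(f"n = {n:.2f}, k = {10 ** log_k:.2f}")   # recovers n = 0.50, k = 6.60
```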
[Figure 5.1: Graph of the oscillation period (seconds) as a function of mass (kg). Figure 5.2: Graph of the log of the oscillation period as a function of the log of the mass.]
We can see that the graph of the period vs. the mass does not yield a very good straight line (if the uncertainties are smaller than the dots representing the data points, we would have to say that a straight line is inconsistent with the data). On the other hand the plot of $\log T$ vs. $\log M$ is a very nice straight line, suggesting that the period T and the mass M have a power-law relationship.
What are n and k according to this experiment? We can easily get a quick estimate from the
graph. For the sake of round numbers, consider the points marked with ×’s on the graph above.
The slope of this graph (rise over run) is thus
When we take the antilog of (that is, 10 to the power of) both sides of this, all the items in logs get
multiplied together, so we get (assuming that n is really 1/2):
$$k = \frac{10^{\log t_0}\cdot\text{s}}{\text{kg}^{n}} = \frac{10^{0.82}\cdot\text{s}}{\text{kg}^{1/2}} = 6.6\,\frac{\text{s}}{\text{kg}^{1/2}} \qquad (5.9)$$
Keeping track of these unit terms when working with logarithms involves a lot of work,
however, and less often pays off the way that keeping track of units in normal equations does.
Therefore, people generally ignore the units associated with logarithmic quantities, and fill in the
units of quantities after taking the antilog (as I did with k in the last section) to make them consist-
ent across the master equation 5.1. But if you ever get confused about units and want to make sure
that things work out correctly, this is how to do it.
[Figure 5.3: $y = x^a$, $a > 1$. Figure 5.4: $y = x^a$, $a < 1$. Figure 5.5: $y = x^a$, $a < 0$. Each panel plots y (0 to 30) against x (2 to 10).]
EXERCISES
Exercise 5.1
The table below gives the orbital periods T (in years) of the planets known to Newton as a
function of their mean distance R from the sun in AUs (where 1 AU = the earth’s mean orbital
radius). Plot a log-log graph of the period versus the distance on the graph paper provided
as Figure 5.6 on the next page.
Exercise 5.2
Assuming that the period and distance are related by a power-law of the form T = kRn , where
n is an integer or simple fraction, what does your graph suggest is the likely value of n?
Exercise 5.3
Find the value of k (with appropriate units) for the data of Table 5.3 from the intercept of your
log-log graph. Combine this with the result of exercise 5.2 to find the power-law equation (of
the type given in equation 5.1) that seems to fit this data.
Figure 5.6
This format has evolved to answer the general questions a potential reader will ask:
What did you do? (Procedure)
Why did you do it? (Introduction, Theoretical background)
How did you do it? (Procedure, Analysis)
What happened? (Analysis, Conclusions)
The format also provides some shortcuts for busy (or lazy) people. Most scientific prose
tends to be fairly dense, and readers like to find out in a hurry if a paper is actually of interest or
importance to them. The abstract section provides a concise summary of the article and its most
important results, so the reader only has to read a few sentences to determine if the entire article is
relevant. The introduction and conclusions contain a little more information; usually the reader
goes to the introduction for more information about the motivation and the method of the experi-
ment, and the conclusion for more detail on the results summarized in the abstract.
Each of these report sections is discussed in a separate section of this chapter. You will prob-
ably find it helpful to read over the entire chapter the first time you are asked to write a lab-report
section (to get some sense of how the pieces of a lab report fit together). At the end of the semes-
ter, when you will write a full report, you should go back and read the entire chapter again.
is answering this question interesting (and/or important)? In a published journal article, this sec-
tion often begins with a brief summary of previous related research, a statement of a problem that
this research has raised, and a brief description of the experiment in question and how it addresses
the problem. Detailed descriptions are not appropriate in this section; the point is to provide a con-
cise picture of your purposes and a broad survey of your approach. This section should capture the
interest of your readers, provide them with some general orientation, and convince them that what
you are doing is interesting and worth reading about.
After you motivate the experiment, you should give a brief summary of the experimental
method you will use. This need not be extensive; the detailed description goes in the procedure
section, which is separated from the introduction only by the theory section. You need to give
enough information so that a reader who is interested primarily in your method, perhaps to du-
plicate your experiment or apply it to a related problem, can see if that method is appropriate.
tions to print out correctly isn't worth it. Most readers don't instantly recognize “**” or “^” as
meaning exponentiation, either, and they look terrible. You can get “±” by underlining the “+”
sign; don't use “+/-,” because it looks terrible, too.
If you do write in equations by hand, don't forget to enter them after you print out your re-
port! Missing equations are a sure tip-off that you forgot to proofread your report. People seem es-
pecially prone to forgetting the Greek letters and special symbols in partially typeset equations,
whereas they usually notice those big blank spaces set aside for entire equations.
(Note: Recent versions of Microsoft Word, WordPerfect, and many other word-processing
programs for both the Mac and Windows operating systems have integrated solid equation editors,
and one can buy good stand-alone equation editors relatively cheaply. Dr. Moore likes MathType,
which is easy to use and can be used with any word processor: see www.mathtype.com. There is
therefore no excuse any more for attempting to typeset equations without an equation editor. Writ-
ing equations in by hand, however, is perfectly acceptable and will not lower your report grade.)
Look at the class text for examples of good style regarding equations: the book was typeset
according to McGraw-Hill’s professional standards for science texts (as described in a long docu-
ment that Dr. Moore has). Note in particular that variables should always be set in italics:
this helps set them apart from the text and identifies them as variables as opposed to just letters.
equipment. The equipment description should state the precision to which measuring devices read.
Anything that isn't a standard device should be described somewhat quantitatively. (For example,
in the pendulum experiment you would give the approximate length of the string, and say someth-
ing that would tell the reader whether to look in the stockroom for lightweight fishing line or big,
hefty twine for wrapping packages.) Identify your lab station by its number if it has one. Large
pieces of equipment should be identified by manufacturer's name, model, and serial number,
which you should have written down in your lab notebook. Giving this information in your report
tells the reader what performance is possible from the equipment you used.
It is very important that you also give the reader a sketch of the apparatus. A good and com-
plete sketch may be able to replace a text list of equipment, and if so, it should be used instead.
Sometimes this sketch will be schematic in nature, like a block diagram or a circuit diagram; in that
case, a computer-drawn sketch is fine. In cases where you need to show fine detail, or where it's
important to show the geometry accurately, a carefully hand-drawn sketch is usually better (and
takes much less time to do well). Unless you are very skilled or have very good drawing software,
computer drawings don't normally look enough like the objects they represent to be useful.
The list and/or sketch of the apparatus tell the reader what equipment was available to you,
and to some extent whether you set it up in an appropriate fashion. Next, you tell what you did
with the equipment. You should do this in a logical order, but not be too "step-by-step" about it.
Specifically, avoid a numbered list of steps, which is difficult to read and hence inappropriate ex-
cept for the rare reader who intends to repeat your experiment exactly. At the other extreme, you
should avoid narratives like this: "First we did (whatever), but that didn't work, so then we tried
(something else) to fix the problem with the first measurements." Refine your procedure to remove
these false steps, and present it in enough detail so that the reader can clearly understand what you
did without being overwhelmed by irrelevant tiny details.
If you've made some revision in some seemingly obvious procedure that significantly im-
proves the accuracy of your results, though, make sure you take credit for it. For example: “At the
longest pendulum lengths (L > 1 m), the pendulum frequently hit the wall before completing ten
swings. For those lengths we only timed five swings. This gave satisfactorily consistent results."
You can also refer to the lab manual if its description of the procedure is sufficiently detailed
(many articles in professional journals refer to other papers for details regarding equipment or pro-
cedure), but be especially sure to include a complete description of any procedural details that do
not appear in the lab manual! In referring to a lengthy source like the lab manual, state the author,
title, year of publication, and page number. (For example, a reference to this booklet should look
like: Moore and Zook, Laboratory Reference Manual for Physics 51a, 2001, p. 16.) A reference
to a journal article would state the author, journal name (but not the article title), volume number,
number of the first page, and year of publication. Instructions from the lab instructor or lab assis-
tant can be cited as A. C. Zook, 1999, private communication. (This format is used in journal arti-
cles to refer to a conversation, unpublished letter, or e-mail message from the person cited.)
You might also consider the following questions as you write this section:
1) How did you determine the experimental uncertainties that you chose?
2) What (if anything) did you do to reduce them?
3) Did you experience any difficulties with the apparatus? If so, how did you resolve them?
4) Did you encounter any problems or difficulties in following the lab manual's procedure?
If so, how did you resolve them?
5) Did you modify that procedure in any way, and, if so, how and why?
Standard techniques, such as the correct use of a stopwatch or a vernier caliper, need not be
described in your procedure section. Unless you've given us some reason to be wary of your abili-
ty to use a device that you’ve presumably either used before (for example, the stopwatch) or re-
ceived some instruction about (for example, a caliper) we'll assume that you used it correctly.
One detail you should definitely include, at this stage in your career, is the number of times
you repeated any given measurement. Every year, we’re surprised at the number of students who
don't seem to remember the importance of repeated measurements. Remember that repeating re-
peatable measurements is essentially the only way to determine the measurement’s uncertainty!
Although you will formally calculate the experimental uncertainty in the analysis section, it's good
to mention the uncertainty ranges of your basic, unprocessed measurements in the procedure sec-
tion, or at least state whether a given measurement was repeatable or not.
Finding the appropriate level of detail is difficult. You don’t need to tell the reader every-
thing, but you do have to say enough. The ideal procedure section is one that provides just enough
information that the reader could go into the lab stockroom, pick out the right equipment, repeat the experiment,
and get results consistent with yours based only on the information in your report and the lab man-
ual. Providing just the right amount of detail requires practice, and probably the most aggravating
comments you'll get on your lab reports will be in this section.
8.5.2 Graphing
Your analysis section could more accurately be called your “data presentation and analysis”
section, because the first thing you must do in an analysis section is display the data you are ana-
lyzing. You should not, however, display your original or “raw” data (the numbers you wrote
down in your lab notebook) in tables in your report, because it's very difficult to pick out data
trends from a large table. Instead, you should present your data graphically, plotted on Cartesian
paper. Even this graph (or set of graphs) will probably not simply be a graph of your unprocessed
data: you will more likely plot averages (or means) of sets of data with appropriate uncertainty
bars. (See Chapter 4, Presenting Data Graphically, for details about setting up graphs.)
Just drawing the graph isn't enough, though. You must tell the reader that it exists, what it’s
about, and where it is. A typical first sentence in an analysis section reads something like this: “The
dependence of falling time on distance from the initial position is given in Figure 1.” (Obviously
you should give the dependent and independent variables for the experiment you're actually de-
scribing!) Notice that you have identified the graph both by the data being displayed and by stating
a figure number. Identifying the graph by the data tells the reader why this graph is part of your
logical argument about the meaning of your data and results. Identifying the graph by a number
makes it easy to find, especially if you put all your graphs at the end of your report. If your word
processor lets you display a graph on the same (or at worst the next) page as the text discussing
the graph, then do that; the next best thing is to put all your graphs at the end. Either way, the read-
er knows exactly where to look for them, which is better than having a figure located at the nearest
convenient empty space several pages away. (A word of caution about positioning graphs: you can
use up an enormous amount of time trying to put a graphic in just the right location while keeping
section and page breaks where you want them. If you find your word processor driving you mad
while positioning figures, a particular problem with Word for Windows, put all your figures at the
end. It is OK to do this, really!)
You must, of course, show error bars on your graphs, unless they’re too small to be visible.
If this is the case, say so explicitly so that your reader does not assume that you have simply for-
gotten about them (which could have deleterious effects on your grade). If your error bars are large
enough to be visible, you should also state explicitly whether they represent one standard devia-
tion, the 95% confidence interval, or some other range. (The 95% confidence range is standard.)
The details of your analysis from here depend on exactly what question you are trying to an-
swer with your data. Often in your theory section you have worked out an expected relationship
between the variables that you are measuring. If the expected relationship is linear, you can check
that the data you have graphed are consistent with that prediction. If the expected relationship is not
linear, you will generally have to draw another graph of your data using one of the linearization
techniques described elsewhere (Chapters 5, 12 and 13) to make the expected relation linear. If
this second graph is necessary, refer to it by title and number in your report. It's usually a good
idea to put the linearized plot right after the Cartesian plot, and comment briefly on the relation bet-
ween the Cartesian and non-Cartesian plots in the report. For example, in a write-up of a pendulum
lab, you might say something like this: “The curve in Figure 1 and the predicted $L^{1/2}$ dependence
suggest a power-law relation between pendulum length and period. Figure 2 shows a log-log
graph of the data of Figure 1. The data in Figure 2 lie on a straight line, indicating that period and
pendulum length are in fact related by a power law.”
The result you are after in an experiment is often related to the slope and/or intercept of this
final straight-line graph. Early in the semester you may find the slope and intercept by eyeballing
the best-looking straight line. (You may also use this method later when you want a quick estimate
of the slope.) If you do this, indicate on the graph the two points you used for the slope and inter-
cept calculations, and give the numerical results in your Analysis section. Later on, after you be-
come familiar with a technique known as linear regression (see Chapter 10), you will use that
method, usually with the program called LinReg.
If you did some calculations to extract the value you want from the slope or the intercept of
your final graph, please go through these calculations in enough detail that the reader can duplicate
your work if necessary. If you have to do a series of very similar calculations (and they're more
complicated than dividing by 2 or π), show one such calculation in some detail as an example and
then state that the other calculations are similar.
8.5.4 Results
Earlier we said that you should not give tables of your raw data in your analysis section (or
anywhere else). There are occasions, however, when reporting processed results in tabular form
is appropriate, when a graph is difficult or meaningless. Suppose, for example, that you repeated
the collisions experiment in a number of different ways (using magnetic pucks that repel each oth-
er, using velcro to make the pucks stick together, using pucks with varying masses), and generated
from class data the mean ratio of the total momentum magnitude after the collision to that before the
collision in each case (and its uncertainty). In this case, there is no independent variable to plot
these results against, so a tabular display of these processed results would be appropriate.
At some point you will draw some conclusions about whether the data you have obtained are
consistent with the expected relationship between your variables. If you predicted a straight line in
your theory section and your experimental results support your prediction, you should say so. You
should, however, avoid comments like, “Our results prove that the theory is correct.” You can
never prove a theory; to do so, you would have to perform all possible experimental tests of that
theory, and you don’t have time for that in a three-hour lab period. On the other hand, it is possible
to disprove a theory with a single contradictory measurement (provided that the experiment has
been done correctly, which may be a matter of debate!). The accepted phrase in both cases is less
rashly assertive: “Our results are consistent (or inconsistent) with the theory.”
Often your discussion of the implications of your results will be straightforward; if you're
working with a well-known physical system and you follow the treatment in a textbook to develop
a theory, your results will probably be consistent with the theory. We have tried to slip in a few
curve balls just to keep the lab from being “verify what's in the book,” though. Your discussion of
the implications of unexpected results will show your strength as a physicist most clearly. You
should be creative, but also very careful. Don't allow yourself to indulge in empty speculation
about an unexpected result; test your speculations. If you come up with an explanation, try to
show that it could indeed have caused an effect of the same magnitude and in the same direction as
the effect you observed. That is, if your explanation predicts a greater-than-expected measurement,
you'd better observe a greater-than-expected measurement if your explanation is to be valid.
8.6.1 Proofreading
In principle, if you write the various sections of your report using the guidelines above, you
should be done. Before you turn in that masterpiece of scientific prose, though, you need to make
sure that it all hangs together. That is, do the links between sections that you imply in one section
actually appear in another? For example, did you test in your analysis section the equation that you
derived in your theory section? If you made assumptions in your theory section, did you include
tests of those assumptions in your procedure section? Did the measurements you describe in the
procedure appear as graphs in analysis? Do your quantitative results support your discussion and
your conclusions? Is it clear that your theory and your procedure are about the same experiment?
You should really read your report over twice. The first time through is for proofreading, a
step we find people often omit. That word-processed output from the laser printer may look won-
derful at first glance, but it has to stand up to a careful reading. Remember, the computer is not
going to catch your mistakes in punctuation, and the spelling checker will probably not distinguish
between “there” and “their,” or “it’s” and “its.” (Now is a good time for you to make sure that you
know the difference between it’s and its.) It also won’t notice that you’ve left out the equations.
(Indeed, using a spelling checker with technical writing can be pretty annoying, as it chokes on
every technical word, symbol, and equation number.) Our experience with grammar checkers sug-
gests that they are not up to college-level English, so don't slavishly follow every suggestion your
grammar checker makes, either. We're not suggesting that you turn your backs on some benefits
of modern computer technology and not use your spelling and grammar checkers at all, but you
should recognize that they have their limitations.
The second reading is for sense and continuity. Do the steps of your procedure follow each
other logically? Is the same true for your analysis? Do the sections of your report relate to each oth-
er as described above? If you can stand it, and if you can get yourself to write your report well
ahead of time (a good intention with which the road to hell is no doubt liberally strewn), get some-
one else (preferably not your lab partner) to read your report. The lab assistants will be prepared to
read over your reports for just such considerations as we've described above.
In spite of what Strunk and White say, however, you should use “inclusive” pronouns rather
than the generic “he.” That is, you should use constructions like, “When physicists make measure-
ments, they ...” rather than, “When a physicist makes a measurement, he ...” Strunk and White
wrote their book before inclusive language became standard. It's almost the 21st century now,
usage changes, and it's time to get with it. (You might also count the members of the lab teaching
staff who are left out by the generic “he” and think of inclusive language as simple self-defense.)
You should also avoid certain words and expressions. “Readings” (as in, “We took five
readings for each distance”) belongs on Star Trek, where it's used to avoid using the technical ter-
minology that a 23rd-century scientist would use, since the screenwriters don't have any idea what
that terminology might be. You’re using 20th-century equipment and a 20th-century vocabulary,
and you can describe exactly what you’re measuring: “For each distance between the source and
the timer, we measured the time interval for the sound wave to travel that distance five times.”
Other words and phrases that people often use incorrectly are:
• Defined as, in the sense of “found to be” or “may be described empirically by.” You can
define the length of a pendulum as “the distance from the pivot to the center of mass of the
bob,” if that is the correct definition, but you find or measure it to be 1 meter long.
• Calculated value, in the sense of “number we calculated from our measurements.” Usu-
ally the calculated value (or the theoretical value) is one you derive from some theoretical
calculation, and the measured value (or the experimentally-determined value) is the one
you calculate from your measurements.
• Approximate for “estimate” (as a verb). Estimates (as nouns) usually are approximations,
in the sense that you typically know them to one significant figure. But you estimate a
number (that is the process), and end up with an approximation (or better, estimate [noun])
of its value.
• Correlation for “simple relation.” Saying that two quantities are “correlated” only means
that they seem to be related in some way, so that if one changes, the other one changes as
well. The relationship between variables in many disciplines of natural and social science can
be extremely complicated, and although we often assume that some underlying cause is re-
sponsible for the relationship, this is often not the case: correlation does not imply causa-
tion. In physics, however, the variables that we generally will look at will be clearly related
by some simple relation. Saying that two quantities are “correlated” in physics is usually too
weak a statement: describe the relationship.
• Calibration. People really like this term, because it sounds so technical. It refers specifi-
cally to the comparison of one measuring instrument either against another or some reference
standard, to make sure the instrument is working correctly. If this is not what you're doing
(and you rarely will do this in this lab program), you are not “calibrating.”
• Prove meaning “support.” We talked about this already, but it's worth repeating. You can't
prove a theory with one experiment, although you can disprove a theory with one. Results
can only support or be consistent with a theory.
• Correct value in the sense of “a value published in a book.” In some experiments, you
might be measuring a value (like the speed of sound) whose value we can look up in a refer-
ence, and you may be tempted to call the value in the book the “correct” value. It is not the
correct value: it is the (currently) accepted value. The values of physical constants published
in books are summaries of experimental results, and new experiments can (and often do) lead
to modifications in these accepted values.
An episode from the history of optics illustrates the last point. Albert Michelson (1852-1931)
was the first American to win the Nobel prize in physics, for his precision measurements in the
field of optics. He invented the Michelson interferometer, used in the famous Michelson-Morley
experiment to demonstrate (unexpectedly!) that the speed of light is the same in all inertial reference
frames. He also made several measurements of the speed of light using a method very similar to
the one you will use later on this semester, although with considerably longer baselines. (One of
his measurements was made between Mt. Wilson and Mount Baldy [no lie!], and Baseline Road in
northern Claremont was surveyed accurately as part of this measurement.) His last measurement,
made in an evacuated tunnel about a mile long (on what was then the Irvine Ranch) was accepted
as the standard for decades, and probably most physicists thought of his result as the “correct”
one. A 1941 review of fundamental physical constants (R.T. Birge, “The General Physical
Constants,” in Reports on Progress in Physics, 8, 90, 1941) weights this result the most heavily in
coming up with a weighted average of several contemporary measurements of the speed of light.
You can guess what's coming. Later measurements, mostly made in the 1950s, consistently
got results that disagreed with Michelson's. The disagreement wasn't very large, about 17 km/s
(out of 300,000 km/s). Their result and Michelson's differed by more than the sum of the experi-
mental uncertainties, though. Eventually a partial explanation for the discrepancy surfaced. Michel-
son died shortly before the experiment was actually performed, although he did see the apparatus
installed. His collaborators made the measurements (almost 3,000 altogether) at night, to reduce
temperature variations and human activity in the area as sources of experimental uncertainty. The
baseline distance was measured during the day, though, and only two or three times. (It’s difficult
to survey distances of more than a few tens of meters at night.) Apparently the thermal expansion
and contraction of the ground itself with temperature was large enough to have a systematic effect
on the speed of light they deduced from their measurements.
Lest Michelson’s collaborators seem inept, we should mention that they were quite alert to
some even more obscure possible sources of systematic error. In reporting their results, they men-
tioned an apparent weak dependence of the measured speed of light on the tides, but since they
couldn’t identify the cause of this dependence, they couldn’t figure out how to correct for it, or
even whether they should! The cause of this systematic effect is unclear even now. Michelson’s
collaborators and the authors of the review article from which most of this historical summary is
taken, mindful that “correlation doesn’t imply causation,” all hesitated to claim that the tides were
directly responsible for the apparent variation in the measured speed of light. (For more details, see
E.R. Cohen and J.W.M. DuMond, “Fundamental Constants in 1965,” Reviews of Modern
Physics, 37, 537, 1965, and the references therein.)
The moral of this tale, of course, is that there are no “correct” results in science, only accepted ones. Even prominent scientists forget sources of systematic error, run into systematic error where no one would have expected it, or find that someone has come along with better equipment. It is true
that you're not likely to hit the frontiers of physics in an introductory laboratory, but you should
get into the habit now of regarding every scientific result as only one carefully designed experiment
away from revision.
8.6.3 How Long Should a Lab Report Be? (and Stuff to Leave Out)
A typical scientific journal article might be about ten pages long. Your full lab reports will
probably be shorter; try to limit yourself to the equivalent of four or five single-spaced typewritten
pages of text, not counting graphs or diagrams. This means that few of the five major sections (the
ones with Roman numerals on the outline) will exceed a page in length, and some may be shorter.
There are also some items you should leave out of a lab report. Please don't complain about
the equipment; we already know that if we had an infinite budget, we could buy really frictionless
gliders and opto-electronic timers good to a microsecond. You won't have an infinite budget in real
life, either. Even if your equipment budget is large, you will always be making measurements that
require care and ingenuity to make; sometimes the equipment you would like doesn't even exist!
Experimental physics isn't about making really precise measurements so much as it is about mak-
ing the best measurements you can with the equipment you have. By practicing with the admittedly
limited equipment available now, you prepare yourself for those later measurements when you
can't improve the data simply by spending more money.
Don't editorialize about an experiment being a “success” or “failure” in the context of agree-
ment with accepted results or theories. It’s true that we have some expectation that your results will
be in agreement with established laws of physics, because normally you won’t be dealing with
particularly exotic (that is, poorly understood) physics in an introductory course. We also expect that in the full report (for which you will presumably have analyzed your data while writing the draft), if your results are in gross disagreement with established laws of physics, you will make some attempt to figure out the cause of that disagreement and fix it. You do, after all, have
most of that second lab period to collect more data if that should seem appropriate, and that’s ex-
actly why we arranged the lab schedule the way we did. In evaluating your work, though, we look
primarily for evidence that you understood how the equipment worked, how the measurements
you made were related to the theory discussed, and generally that you were thinking about what
you were doing. Some real physical effect could be present that the designers of the lab over-
looked, or have left in to keep you on your toes. (This happens more often than you might think.)
If you have been careful about your work, be confident in presenting what you have observed.
(The confidence should follow from being careful, though, and if the lab staff identify some sys-
tematic effect in data collection that you overlooked, go take more data!)
EXERCISES
Exercise 8.1
Read the lab report entitled “The Speed of Sound.” This report (which describes an older ver-
sion of the experiment you did during the first week of lab) is mostly well-written except for
the abstract and procedure sections. See what you think is lacking in these sections (according
to the checklists and other information in this chapter) and then compare with the comments on
the last page of this chapter. (There is no penalty for not spotting everything: just do the best
that you can.) Write your comments on the report itself.
Exercise 8.2
Read the lab report entitled “Gravitational Potential Energy”. This report is mostly well-written except for the analysis section (where little superscripted numbers indicate problem
areas). See if you can figure out what these numbered problems are, and then check the an-
swers provided on the last page of the chapter. (There is no penalty for not getting everything
just right: just do the best that you can.) Write your guesses in the margin of the report itself.
Torrin Hultgren
Partner: Alix Hui
9/10/98
In this lab we determined the speed of sound by timing the interval that it took for a loud
bang to echo off a surface a known distance away. Our average time interval was 1.28 s, and the
distance was 440 m, so our calculated value for the speed of sound was 343.8 m/s. This is con-
sistent within our experimental uncertainty with the accepted value at 30°C, which is 349.7 m/s.
Introduction:
The speed of sound has many practical applications, such as determining the distance from
lightning, knowing when jets will break the sound barrier, designing acoustical facilities like con-
cert halls and auditoriums, and literally thousands of others. The phenomenon of an echo is famil-
iar to most people, and it is a relatively easy way to measure the speed of sound.
We used two blocks of wood to create a loud and sharp bang. We determined the distance
using a counting wheel whose circumference we measured and we used hand stopwatches to time
the echo. We repeated the time measurement 20 times to reduce experimental uncertainty. We cal-
culated the speed of sound by dividing the distance measurement by the time measurement. In ad-
dition, because the speed of sound varies with the temperature of the air through which it propa-
gates, we measured the temperature with a mercury thermometer in order to calculate the accepted
value for the speed of sound.
Procedure:
We set up on the concrete bench closest to the grass on Marston Quad. We chose this spot
because it lined up with the small wall at the end of Stover Walk (which we could see through the
trees) which gave us an easy reference point for beginning our distance measurement. One of us
held the stopwatch and the other hit the blocks together. Because we could see the blocks coming
together we could anticipate when they would hit. Then we stopped the stopwatch when we heard
the echo, without anticipating it. This gave us a slight delay in timing the echo because of our reac-
tion time, but we were able to correct for this as described below. Both of us made 10 time meas-
urements and hit the blocks together 10 times.
To account for the reaction time delay we devised this procedure. I started both stopwatches
at the same time. I then handed one stopwatch to Alix and kept the other. Behind my back she si-
multaneously stopped her stopwatch and hit one of the blocks against the concrete bench. When I
heard the sound I stopped my stopwatch. The difference between the two times on the stopwatches
was my reaction time. We repeated this measurement for each of us five times.
To calibrate the measuring wheel we put a small piece of tape at the edge of the wheel. We
put the meter stick on the ground and lined this piece of tape up with one of the ends of the meter
stick. We then rolled the measuring wheel along the ground next to the meter stick until the piece
of tape had traveled one full revolution. The point that it lined up with was our value for the cir-
cumference of the wheel.
For the distance measurement we began at the wall at the beginning of Stover Walk that lined
up with the place where we had taken our time measurements. We walked the measuring wheel
down the middle of Stover Walk, using the sidewalk lines to make sure we were traveling in a
straight line and not zigzagging excessively. We continued across the street, and then used the
sidewalk lines to line up perpendicularly so we could move over and roll the measuring wheel
across the wood chips and right up to the face of Carnegie that we believed the sound was echoing
off of. We then doubled this measurement to arrive at the total distance the sound had traveled.
Analysis:
The average of the measurements I took was 1.55 s, with a standard deviation of s = 0.05 s.
The uncertainty of this measurement, using the Student t-value, is ± 0.10 s, giving a fractional uncertainty of

0.10 s / 1.55 s = 0.064 = 6.4%   (2)
The similar values for Alix’s measurements, which were different because she had a different reac-
tion time, were 1.43 s ± 0.13 s for a fractional uncertainty of 9.1%. Both of these fractional uncer-
tainties seem reasonable for the type of measurements we were doing. My average reaction time
was 0.27 s ± 0.02 s, and her average reaction time was 0.20 s ± 0.03 s. Our actual calculated
times of flight were therefore 1.28 s ± 0.082 s and 1.23 s ± 0.11 s.
Our measurement for the circumference of the wheel was 0.587 m. Our measurement for the
number of rotations of the wheel was 374.3. The distance from us to Carnegie was therefore
374.3 turns × 0.587 m/turn = 220 m   (3)
Doubling this we arrived at a total distance of flight measurement of 440 m. We generously esti-
mated our uncertainty to be ± 1.0 m. This gives us a fractional uncertainty for the distance meas-
urement of 0.2%. Compared to the uncertainty of the time measurement, this is tiny.
My calculated value for the speed of sound was therefore

440 m / 1.28 s = 344 m/s   (4)
Propagating uncertainty using the weakest-link rule, my calculated uncertainty was ± 22 m/s.
Alix's value was 355 m/s ± 32 m/s.
The accepted value for the speed of sound in air at temperature T is given by

v_s = 331.3 m/s + (0.6 m/(s·°C)) T   (5)
where T is measured in Celsius degrees. Our measured value for the temperature was 30°C. Plug-
ging this into the above formula gives us an accepted value for the speed of sound of 349.3 m/s.
This value lies well within both of our experimental uncertainties.
Conclusion:
We measured the time it took for an echo to travel a measurable distance. Using our separate time
and mutual distance measurements we calculated two values for the speed of sound: my result was
344 m/s ± 22 m/s and Alix’s was 355 m/s ± 32 m/s. These values for the uncertainty are a reason-
able fractional amount. Our calculated accepted value for the speed of sound based on the observed
temperature was 349.3 m/s. This value lies well within the experimental uncertainty of both our
measurements.
Comments on “The Speed of Sound”
Theory:
The Theory section is missing! This is obviously a simple experiment based on very simple
theory, but at the very least the author should state explicitly that he is assuming that the speed of
sound is constant, and give the appropriate equation for finding the speed from distance and time
measurements.
Procedure:
The equipment list does not include their stopwatch number or the number of the measuring
wheel. Consequently, if they needed to check their calibration of the wheel (or the accuracy of the
stopwatch, which is less likely), they would have no way of identifying it.
The procedure section does provide an equipment list but not a sketch or diagram. However,
this lab is a case where an equipment list is probably more useful than a sketch for helping the
reader understand how the lab works. Even though the guidelines strongly suggest that one should
include a diagram, the guidelines should not be followed slavishly if a diagram does not really add
much to the reader’s understanding. Do whatever makes things most clear to the reader!
It might have been nice to briefly discuss that the author is assuming that the “actual” flight
time of the echo that he will use to calculate the speed of sound is his measured flight time of the
sound minus his reaction time. This is implicit but should be stated more explicitly.
The calibration of the measuring wheel needs more discussion. For example, the piece of
tape mentioned presumably has a finite width, probably about 1 cm. If they weren’t careful to
identify a particular reference point on the tape (such as a penmark on the tape, or one of the two
edges), this would introduce a systematic error into their calibration, which would carry over into a
systematic error in their value for the distance.
The author also doesn’t state the precision of their measurement of the circumference of the
measuring wheel. Without this, the reader has no way of knowing if the later estimate in the uncer-
tainty of the distance is reasonable. It is also unclear if they repeated the circumference measure-
ment or the distance measurement.
Analysis:
The main problem with this section is the uncertainty analysis. To begin with, the author
mentions combining the uncertainties of their average time measurements for the echo time and the
reaction time, but does not identify the method used to combine the uncertainties. Next, no uncer-
tainty estimates are given for either the measurement of the wheel’s circumference or the number of
revolutions of the wheel. Finally, the author invokes the weakest-link rule in finding the uncertain-
ty in the final value for the speed of sound, but does not justify the use of the weakest-link rule by
explicitly locating the weakest link in the calculation and then showing a sample calculation using
that weakest link.
ABSTRACT
In this experiment, we determined the change in the gravitational potential energy V of the
system consisting of the earth and a dropped plastic slab as a function of the distance h through
which the slab falls. We found this change in potential energy to be consistent with the expression
Vi – Vf = mgh , where m is the mass of the object and g is the gravitational field strength. We
found the value of g to be 9.81 ± 0.02 m/s², consistent with results obtained in other laboratories.
INTRODUCTION
Consider the change Vi - Vf of the gravitational potential energy of a system consisting of
the earth and a falling object, where Vi is the system’s initial potential energy and Vf is its final potential energy after the object has fallen a certain distance h. In section C7.4, the text claims that this
change in potential energy is given by Vi - Vf = mgh, where m is the object’s mass and g is the
gravitational field strength near the earth, a constant that is purportedly equal to 9.8 m/s².
This result, which is stated without justification in the text, is a basic and important result that is subsequently used many times in the text. It would be valuable, therefore, to supply the empirical
foundation for this assertion. Our goals in this experiment were to demonstrate for a specific object
interacting with the earth that (1) for a given value of h, the value of Vi - Vf does appear to be pro-
portional to m, (2) for a given value of m, the value Vi - Vf increases linearly with h, and (3) the
value of g is what it is purported to be.
In this particular experiment we dropped a plastic slab (released from rest at a known initial
height) past a photodetector connected to a computer. A series of equally-spaced opaque bands
painted on the slab interrupted the light falling on the photodetector, and the computer measured the
time that it took each band to pass the photodetector. From this information, we could determine the slab’s speed as each band passed the photodetector, and thus determine its kinetic energy after it
had fallen whatever distance h was required to bring that particular band past the photodetector.
Given the object’s kinetic energy as a function of h, we could find Vi - Vf as a function of h. By
attaching various weights to the bottom of the slab, we could vary the mass of the falling object
and thus check how Vi - Vf depends on mass.
THEORY
As the plastic slab drops under the influence of the gravitational interaction between it and the
earth, the total energy of the earth-slab system must be conserved:
Ki + KE,i + Vi = Kf + KE,f + Vf (1)
where Ki and Kf are the initial and final kinetic energies of the slab, KE,i and KE,f are the initial and
final kinetic energies of the earth, and Vi and Vf are the initial and final gravitational potential ener-
gies of the system. According to the argument presented in section C7.3 of the text, we can con-
sider the earth to be essentially at rest throughout the experiment (since it is so much more massive
than the slab) and thus KE,i and KE,f are negligible. If we drop the slab from rest, then Ki = 0 also,
and equation (1) becomes simply
Vi − Vf = Kf = ½ m v_f²   (2)
So, to measure the system’s potential energy change Vi − Vf after the slab has fallen a distance h, all that we have to do is measure the slab’s mass m and its final speed v_f. We can easily
measure its mass using a balance. We can measure its final speed as follows. Imagine that we paint
an opaque band across the width of the slab perpendicular to the direction that the slab falls. As the
slab falls, imagine that this band interrupts a horizontal beam of light between a light source and a
detector. We can use a computer to register the time Δt that the beam is interrupted. If the height of the band is Δd, then the speed of the slab as the band crosses the beam is approximately given by:

v ≈ Δd/Δt   (3)
This most closely approximates the slab’s speed halfway through the time interval and thus rough-
ly as the center of the band passes the light beam. This speed, therefore, can be used to determine
the slab’s kinetic energy after it has fallen a distance h equal to the change in the slab’s position
from its release point to the position where the band is centered on the photocell beam.
Finally, note that the claim is that Vi - Vf = mgh, where m is the slab’s mass and g is the
constant gravitational field strength. If this is true, then plugging this into equation (2) yields
mgh = ½ m v_f²  ⇒  gh = ½ v_f²   (4)
Therefore, if Vi - Vf is proportional to m as claimed, the slab’s final speed after falling through a
given distance h should be completely independent of its mass, which should be easy to check.
Also, if this is true, the slab’s mass is not really relevant and we do not need to measure it.
PROCEDURE
In this experiment, our falling object was a clear plastic slab about 1.1 m tall and 8 cm wide,
with five opaque bands 5.0 cm tall and vertically separated (center to center) by 20 cm. We could
vary the mass of the slab by attaching one to four weights to the bottom of the slab. We dropped
this slab past a photogate consisting of a paired infrared light source and a photodetector mounted
on a lab table so that the line connecting the source and detector was horizontal (perpendicular to
the motion of the slab). The output of the photodetector was connected to a small box which in turn
was connected to a Universal Lab Interface (ULI) circuit board (sold by Vernier Software, Inc.),
which processed the signal for the photogate before passing it on to a Macintosh Centris 610 (serial
number 3255967). A program called ULI Timer (also from Vernier Software) monitored the out-
put from the ULI and displayed time intervals on the computer screen (see Figure 1 for a sketch of
our experimental setup.) The program was configured to display the length of time that each of the
five dark bands on the slab blocked the photogate beam as the slab fell past it.
[Figure 1: front-view sketch of the experimental setup, showing the slab with its 5 cm opaque bands spaced 20 cm apart (center to center), the photogate attached to the desk with a clamp, the connector box, the ULI, and the Macintosh on the table.]
After our lab instructor gave a brief demonstration of the equipment, each of the seven lab
teams in our particular afternoon session did a run. When our turn came, one of us (M-G.M.) held
the center of the upper end of the slab between his thumb and forefinger and adjusted its vertical
position until a certain mark inscribed on the slab edge was aligned precisely in the middle of the
photogate as reported by I.C. We waited until the slab had stopped swinging back and forth and
was completely at rest. I.C. then triggered the ULI Timer program to start taking data and M-G.M.
dropped the slab. The computer then automatically recorded and displayed the time Δt that it took
each of the five opaque bands to pass the photogate. We wrote these five numbers on the black-
board, filling in a table already started by other teams.
Once all the data was taken, each pair of lab partners calculated the means and uncertainties
of the mean (using the techniques in chapters 3 and 5 of the lab reference manual) of the results for
Δt for each of the five bands. We discussed the results as a class and decided that these results ap-
peared to be uncertain by very roughly ± 0.002 s.
While we were calculating the means and uncertainties, our pair of partners took turns doing
a total of seven more runs, three runs with two weights attached to the slab and four runs with four
weights attached to the slab. These runs were also recorded on the blackboard.
Finally, each pair worked individually to analyze the data. As we did this, we passed the slab
from pair to pair so that each could check that the opaque bands were 5.0 cm tall and separated
from center to center by 20 cm. We did this using an ordinary meter stick turned on its edge so that
the scale was right next to and perpendicular to the bands. We estimated that the height of the
bands was equal to 5.0 cm to within ± 0.05 cm and that the distances between the centers of the bands were 20.0 cm to within ± 0.1 cm (we actually measured these from bottom edge to bottom
edge).
ANALYSIS
A table of the mean values of the time intervals appears below:1

Band Number    Δt (no added weight)2    Δt (two added weights)2    Δt (four added weights)2
     1                0.0252                   0.0250                     0.0256
     2                0.0179                   0.0181                     0.0179
     3                0.0146                   0.0149                     0.0144
     4                0.0126                   0.0124                     0.0126
     5                0.0112                   0.0113                     0.0110
It is clear from these results that the speed of the slab is independent of its mass3, so (as we argued
in the theory section) Vi - Vf must be4 directly proportional to the slab’s mass m.
From the values of Δt for the slab with no added weight, we calculated ½v_f² for each of the heights5. Figure 2 shows a graph of these results. According to LinReg,6 the slope of the line is
9.81023 and the intercept is 0.012865.7 This proves8 that Vi - Vf = mgh (though our value of g is
a bit high due to experimental error).9
CONCLUSION
In this experiment, we showed that the final kinetic energy per unit mass ½v_f² of a plastic
slab dropped from rest through a distance h is independent of the mass of the slab and seems to be
proportional to h (within experimental uncertainty), with the constant of proportionality being equal
to 9.81 ± 0.03 m/s². These results are completely consistent with the assertion made in the text
that Vi − Vf = mgh, where g = 9.8 m/s².
[Figure 2.10 — an untitled graph of the results with a fitted line; the axes are unlabeled, with the vertical scale running from 0 to 10.0 and the horizontal scale from 0 to 1.00.]
Comments on “Gravitational Potential Energy”
(1) This table is nicely laid out, but a table of processed data like this should state the units of the
quantities and state the uncertainties of the means as well as the means themselves. The writers
should have also included a description of how many measurements went into calculating the mean
and how the uncertainties were calculated. Also, if the uncertainties are really on the order of
±0.002 s (as stated in the procedure section), then the last digit in the tabulated data is totally mean-
ingless. Are the uncertainties really more like ±0.0002 s (this would be consistent with the varia-
tion appearing in the table data)?
(2) One could include the units of the data in the column heading like this: “Δt in seconds”.
(3) Is this really clear? Without knowing the uncertainties, the small variations in the values are
impossible to interpret.
(4) We really can’t say that Vi − Vf must be proportional to m, only that our data is consistent with this interpretation.
(5) This needs to be explained in much more depth. How did the writers calculate ½v_f² from this data? What are the uncertainties of these speeds, and how were they calculated (one has to use something like the weakest link rule)? How were the heights determined and their uncertainties estimated? It would have also helped greatly if the writers had listed the calculated values and uncertainties for ½v_f² and h for each row of the table (or better yet, on a separate table).
(6) A brief description of LinReg and what it does would be appropriate here.
(7) The quantities quoted here have units and uncertainties: what are they? Also what is the signif-
icance or meaning of the slope and the intercept here?
(8) An experiment can never prove that any theoretical assertion is true. The best that we can say
here is that our results are consistent (or inconsistent) with this assertion. See the conclusion for
better language.
(9) How is g related to something we have calculated in this lab? Also, the value is a bit high
compared to what? Saying that the difference is due to “experimental error” says nothing. What
kind of experimental error? Is the result within our uncertainties or not? If so, what does it mean to
say that this is “a bit high”?
(10) What are the experimental uncertainties of the data points? Are they not shown because they
are too small to appear on the graph or did the writers simply forget about them? What is be-
ing plotted against what here? (The axes should be labeled.) What is this graph about? (It should
have a title!) What are the units of quantities displayed? What does the line mean? This graph is
missing many of the features that a higher-level graph should have.
9
PROPAGATION OF UNCERTAINTY
Now if the input value a has an uncertainty U[a], then we can wiggle the handle corresponding to the variable a back and forth from its most probable value by a positive or negative amount δa in the range |δa| ≤ U[a] and still be consistent with the experimental data. This wiggling will cause the value of f indicated by the dial to wiggle back and forth from its central value by a certain amount as well. Let us define δf_a to be the (presumably small) change in the value of f from its central value when a is moved from its central value by δa_max = +U[a] (corresponding to the upper extreme limit of a’s uncertainty range), while the other handles are held constant. Similarly, let δf_b be the change in f when b is moved from its central value by an amount δb_max = +U[b] while the other variables are held constant, and so on.
Now, what is the uncertainty in f when all of its variables are free to wiggle around within
their uncertainty ranges simultaneously? The maximum distance f could be from its central value is
δf_max = δf_a + δf_b + δf_c + ⋯ if all of the input values happen to be simultaneously at whichever
edge of their uncertainty range causes them to shift f in the same direction. But this is fairly unlike-
ly, because there is only roughly a 5% chance that any single variable will be at or beyond either
limit of its uncertainty range; the likelihood that all of the variables are simultaneously at or beyond
their limits on the correct side to push the value of f in the same direction is vanishingly small.
It turns out that because of this, a statistically more accurate estimate of the uncertainty of f
due to the uncertainties in all of its variables is

U[f] = √( (δf_a)² + (δf_b)² + (δf_c)² + ⋯ )   (9.1)

(The proof is beyond our scope here.) Quantities whose effects are “added” by squaring, adding,
and then taking the square root like this are said to be added in quadrature. Calculating U[f]
thus reduces to the problem of finding the changes d fa , d fb , º due to each variable separately.
This can be done easily in simple cases. Consider the special case where f(a,b) = a – b. If we
increase a to a + δa while keeping b fixed, then f changes to

f + δf = a + δa − b  ⇒  δf = δa  ⇒  δf_a = δa_max = +U[a]   (9.2)

after subtracting f = a − b from both sides. Similarly, you can easily see that δf_b = −U[b] (negative because when b goes up, f goes down). Therefore the total uncertainty in f in this case is

U[f] = √( (U[a])² + (U[b])² )   (9.3)
Therefore, if a and b have the same uncertainty, then the best estimate of the uncertainty in f is not 2U[a] (as one might naively expect) but rather √2 · U[a] ≈ 1.4 U[a].
On the other hand, if U[a] is more than about 3 times larger than U[b], then (U[a])² is more than 9 times larger than (U[b])², and thus dominates the expression for U[f] in equation 9.3. This is a
specific example of a more general feature of the original equation 9.1: that equation implies that
the uncertainty U[f] is typically dominated by one variable, the variable whose variation over its
uncertainty range causes the greatest change in the value of f.
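As an aside (our illustration, not part of the original manual), the “vary one handle at a time” recipe of this section is easy to automate. The short Python sketch below implements equation 9.1 for an arbitrary function; the name propagate and the example values are ours. Applied to f(a, b) = a − b with U[a] = U[b] = 0.1, it reproduces the √2 · U[a] ≈ 0.14 result of equation 9.3.

    import math

    def propagate(f, central, uncerts):
        # General method: shift each input from its central value to the
        # upper edge of its uncertainty range, record the change in f,
        # and combine the changes in quadrature as in equation 9.1.
        f0 = f(*central)
        total = 0.0
        for i, u in enumerate(uncerts):
            shifted = list(central)
            shifted[i] += u                  # move this one variable by +U
            total += (f(*shifted) - f0)**2   # (delta f_i)^2
        return math.sqrt(total)

    # f(a, b) = a - b with a = 5.0 +/- 0.1 and b = 3.0 +/- 0.1:
    print(propagate(lambda a, b: a - b, [5.0, 3.0], [0.1, 0.1]))  # ~0.14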
Very often in physics, a calculated quantity f depends on its variables as a product of powers:

f(a, b, c, …) = k a^m b^n c^j ⋯   (9.4)
where k is a constant and m, n, and j are exponents that may be positive or negative (these expon-
ents are usually integers or simple fractions). A dependence of this form on the variables a, b, c, …
is called a power-law dependence. For example, an object’s calculated speed v depends on the
distance D it had to travel and the time T that it took to travel that distance according to the power-
law relation v = D / T = D1T –1 (k = 1 here).
If equation 9.4 is true, then the weakest-link rule provides a fast and simple way of esti-
mating the uncertainty U[f] in the calculated quantity f:
The fractional uncertainty Q[f] of f = k a^m b^n c^j ⋯ is approximately equal to the largest of the values |m| Q[a], |n| Q[b], |j| Q[c], and so on.
The fractional uncertainties Q[a], Q[b], Q[c], … of the variables are typically quite different in a
real experiment, and so doing a few rough divisions in your head can quickly guide you to the
variable whose fractional uncertainty is largest.
We will look at why this rule is correct in a moment. First note that this rule says two inter-
esting things. The first is that the “weakness” (that is, the fractional uncertainty) of a calculated
quantity f is determined primarily by the “weakest” of the quantities on which it depends (the
“weakest” here being the quantity whose fractional uncertainty times its exponent is largest). This
is fundamentally because of the effect noted in the last paragraph of the last section: one variable’s
effect typically dominates the sum in equation 9.1. The rule’s name emphasizes this by bringing to
mind the old saying “the strength of a chain is determined by its weakest link.” (Note that contrary
to the currently popular game show, the “weakest link” is the one we keep in our calculation!)
The second interesting thing that the rule says is that it is the fractional uncertainty in f that is
related in a simple way to the fractional uncertainties of its variables. This was not the case when we were talking about the simple sum or difference of variables (as equation 9.3 shows), but as we
will shortly see, it is the most natural way to deal with power-law relations.
Let’s see how this rule might work in a given situation. Imagine that we are computing the
magnitude of an object’s average velocity v_avg = D¹T⁻¹, where D is the distance it travels during a time T. Say that we have measured D = (12.12 ± 0.02) m and T = (0.82 ± 0.05) s. The best-guess value of v_avg is 12.12 m / 0.82 s = 14.8 m/s. The fractional uncertainties in D and T are:

Q[D] = U[D]/D = 0.02 m / 12.12 m = 0.0017,   Q[T] = U[T]/T = 0.05 s / 0.82 s = 0.061   (9.6)
The fractional uncertainty in T is more than 35 times larger than that for D so it dominates. Accord-
ing to the weakest link rule, the fractional uncertainty in v_avg = D¹T⁻¹ is thus given by

Q[v_avg] ≈ |−1| Q[T] = 0.061  ⇒  U[v_avg] = 0.061 v_avg = 0.061 (14.8 m/s) = 0.9 m/s   (9.7)
Note that the calculations here are quick and simple: that is the beauty of the weakest-link rule.
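In code, the rule itself is nearly a one-liner. The Python fragment below (ours, for illustration only) applies it to the v_avg example just worked out:

    def weakest_link(powers, frac_uncerts):
        # Q[f] is approximately the largest of |exponent| * Q[variable].
        return max(abs(p) * q for p, q in zip(powers, frac_uncerts))

    D, U_D = 12.12, 0.02   # distance in meters
    T, U_T = 0.82, 0.05    # time in seconds
    v = D / T                                         # about 14.8 m/s
    Q_v = weakest_link([1, -1], [U_D / D, U_T / T])   # about 0.061
    print(v, Q_v * v)                                 # about 14.8 and 0.9 m/s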
The general “proof” of the weakest-link rule is somewhat beyond our mathematical means
here, but let’s see how we might “prove” it in the simple case where f = f(a, b) = k a² b. If b changes to b + δb while a remains the same, then f changes to

f + δf_b = k a² (b + δb) = f + k a² δb  ⇒  δf_b = k a² δb   (9.8a)

If we now divide both sides of this by f = k a² b and set δb = +U[b], we find that

δf_b / f = k a² δb / (k a² b) = δb / b = U[b] / b ≡ Q[b]   (9.8b)

If we change a to a + δa while b remains the same, then f changes to f + δf_a = k(a + δa)² b. Writing out the square and subtracting f = k a² b from both sides, we get:

δf_a = 2 k a b δa + k b (δa)²   (9.9a)

Dividing this by f = k a² b and setting δa = +U[a] yields

δf_a / f = 2 δa/a + (δa/a)² = 2 Q[a] + (Q[a])²   (9.9b)

If Q[a] is fairly small, the second (complicating) term in equation 9.9b is negligible, so δf_a/f ≈ 2 Q[a]. Combining this with equation 9.8b in quadrature, as equation 9.1 directs, then gives

Q[f] = U[f]/f ≈ √( (2 Q[a])² + (Q[b])² )   (9.10)
Whichever of 2Q[a] or Q[b] is substantially larger than the other will dominate inside the square root and thus be essentially equal to Q[f]. Thus we have seen that the weakest link rule does indeed
adequately summarize the more exact calculation in this case as long as (1) the fractional uncertain-
ty in a is fairly small (so that we can ignore the complicating term in equation 9.9b), and (2) one of
2Q[a] or Q[b] is at least somewhat larger than the other.
An approach of last resort is to apply the general method outlined in section 9.2. Cal-
culate (by hand) the variation in f when you vary each variable from its central position to the upper
edge of its uncertainty range while leaving the other variables constant. Then use equation 9.1 to
compute the total uncertainty in f from these individual variations. This will generally be pretty ted-
ious compared to the weakest link method, but does yield reasonably accurate answers in all cases.
This is the method that you must use if f involves a logarithm or exponential (unless you have a
computer program that can do the calculation outlined in the previous paragraph).
Of course, if f involves the simple sum or difference of two variables, one can apply equation
9.3, which we derived especially for the simple difference case. (You should be able to convince
yourself that equation 9.3 also applies to the case of a simple sum.)
The computer repeats this process N times (N = 100 by default). Finally, the computer calculates the standard deviation of
the N values of f it has generated and the uncertainty in f from that.
In other words, the computer simulates having N teams of experimenters like yourselves
who have measured the same variables and have used them to calculate values of f. The uncertainty
in the value of f is clearly related to the spread in the values obtained by the N fictitious teams.
Note that in the case shown in the figure, the fractional uncertainty in f is 10%. Note also that the quantity a has by far the largest fractional uncertainty (5% compared to 1% for the other variables), so the weakest link rule would say that Q[f] ≈ 2Q[a] = 2·5% = 10%. Thus the program agrees with the weakest-link rule in this case. If you press the “Evaluate” button again, however, you may get slightly different results, because of the random nature of the simulation. Choosing larger values of N will make the calculation more accurate, but could be slow on an old computer.
We actually would rather you use the weakest-link rule whenever you can; you will not al-
ways have PropUnc handy in real life, so it is good to practice using the weakest-link rule, which
is simple and usually gives good results. You may use PropUnc (1) to check a weakest-link calcu-
lation, or (2) whenever the weakest-link rule or equation 9.3 does not apply. PropUnc is installed
on all the lab computers and also may be freely downloaded from the Physics 51 web site.
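To make the simulation idea concrete, here is a rough Python imitation of what a PropUnc-style calculation does. This is our own sketch, not the actual PropUnc code; in particular, drawing each input from a normal distribution with σ = U/2 (so that the stated uncertainty is a 95% half-width) is our assumption about how such a program might work.

    import random, statistics

    def simulate(f, central, uncerts, N=100):
        # Simulate N fictitious teams: each team draws every input from a
        # normal distribution whose 95% half-width is the stated
        # uncertainty (sigma = U/2) and computes its own value of f.
        results = []
        for _ in range(N):
            sample = [random.gauss(c, u / 2) for c, u in zip(central, uncerts)]
            results.append(f(*sample))
        # Convert the spread of the N results back into a 95% uncertainty.
        return 2 * statistics.stdev(results)

    # f = a^2 b with Q[a] = 5% and Q[b] = 1%: the weakest-link rule
    # predicts Q[f] = 2(5%) = 10%, i.e. roughly 0.10 for these values.
    print(simulate(lambda a, b: a * a * b, [1.0, 1.0], [0.05, 0.01], N=10000))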
Example 9.6.2: Imagine that the number of bacteria in a certain colony at a certain time is N =
305,000 ± 15,000. What is the uncertainty in f = ln N ? (You might need to know the uncertainty
of the logarithm if you want to draw an uncertainty bar for this data point on a log-log graph.)
Since f = ln N is not a power-law relation, we cannot use the weakest-link rule. If we can’t
use PropUnc, we can fall back on the general method. In this case, if we change N from its central
value of 305,000 to the upper limit of its uncertainty range which is 320,000, the value of ln N
changes from ln(305,000) = 12.6281 to ln(320,000) = 12.6761, so the change in f due to this change is δf_N = +0.0480. Since f only depends on N in this case, equation 9.1 implies that

U[f] = √( (δf_N)² ) = |δf_N| = 0.05   (9.14)
If we were to naively apply the weakest-link rule anyway, we would estimate that since the fractional uncertainty in N is 15,000 / 305,000 ≈ 0.05, the fractional uncertainty in f = ln N would also be 5%. This would lead us to estimate the uncertainty of f to be 0.05(12.63) ≈ 0.63, which is
more than 10 times larger than the more correct calculation given by equation 9.14. This illustrates
our earlier statement that the weakest-link rule does poorly when f involves logarithms.
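The same arithmetic takes only a few lines of Python (a sketch of this one example, using the numbers above):

    import math

    N, U_N = 305_000, 15_000
    f0 = math.log(N)              # ln(305,000) = 12.6281
    df = math.log(N + U_N) - f0   # shift N to the top of its range
    print(round(df, 4))           # 0.048, so U[f] ~ 0.05 as in equation 9.14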
EXERCISES
Exercise 9.1
A person is measured to run a distance of 100.00 m ± 0.05 m in a time of 11.52 s ± 0.08 s.
What is the person’s speed and the uncertainty of this speed according to the weakest link rule?
Exercise 9.2
A spherical balloon has a radius of 0.85 m ± 0.01 m. How many cubic meters of gas does it
contain, and what is the uncertainty in your result?
Exercise 9.3
Imagine that you want to estimate the amount of gas burned by personal cars every year in the
U.S. You estimate that there are an average of about 0.7 ± 0.4 cars per person in the U.S., that
there are 275 million ± 30 million people in the U.S. currently, that a car is driven on the aver-
age about 15,000 mi ± 3,000 mi a year, and that the average number of miles per gallon that a
car gets is about 23 mi/gal ± 5 mi/gal. What is the approximate amount of gas burned and what
is the approximate uncertainty of this estimate?
Exercise 9.4
Equation 9.10 suggests that rather than dropping the other uncertainties entirely (as the weakest
link rule suggests) perhaps we would get a more accurate estimate of the fractional uncertainty
in a power-law relation by multiplying the fractional uncertainty of each variable by its power,
squaring the result, adding the squares and taking the square root of the sum. Do this for the
case described in Exercise 9.3 above. Is the answer you get from doing this careful way much
different from just using the weakest link result? Suppose that you do some research that ena-
bles you to reduce the fractional uncertainty in all the quantities but the worst one to 1%. Does this reduce the uncertainty much? If you really want to improve the uncertainty, what would be the
variable to focus on, and why?
11
THE UNCERTAINTY OF THE MEAN
“ ‘Now, for instance, it was reckoned a remarkable thing that at the last party in my
rooms, that upon an average we cleared about five pints a head.’”
--- Northanger Abbey
We define the uncertainty of the mean Um of a set of N measurements to be that value such that we are 95% confident that the range x̄ ± Um encloses the “true value” of the measurement (where x̄ is the mean of the set). If we were to take enough data so that we have many sets of N measurements (as we did in the Speed of Sound lab), then this definition means operationally that if Um has the correct value, the range x̄ ± Um should enclose the true value for 95% of the sets’ means. According to
the mathematicians, we can estimate the uncertainty Um of the mean of a normally distributed set of measurements using just one set of N measurements as follows:

Um ≈ t s / √N   (11.1)
where s is the standard deviation of the set and t is the Student t-factor. In the Uncertainty of the
Mean lab, you should have verified that this expression was at least approximately consistent with
the definition of uncertainty just described.
Note that the estimate of the uncertainty of the mean given by equation 11.1 has two proper-
ties that we know (or at least intuitively expect) to be true. First, it implies that the uncertainty of
the mean is indeed smaller than the uncertainty of a single measurement (by the factor 1/ N ), as
we would expect from the argument given in the previous section. Second, the more measurements
we take, the smaller Um becomes, implying that the mean becomes a better estimate of the meas-
urement’s true value as the number of measurements it embraces increases (as we should expect).
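As a concrete illustration (ours, not the manual’s), equation 11.1 amounts to a few lines of Python. We assume the scipy library is available to supply the Student t-factor for 95% confidence, and the five data values below are invented:

    import math, statistics
    from scipy.stats import t as student_t

    data = [1.52, 1.46, 1.61, 1.49, 1.55]   # five invented measurements
    N = len(data)
    mean = statistics.mean(data)
    s = statistics.stdev(data)              # standard deviation of the set
    t = student_t.ppf(0.975, N - 1)         # two-sided 95% Student t-factor
    Um = t * s / math.sqrt(N)               # equation 11.1
    print(f"mean = {mean:.3f}, Um = {Um:.3f}")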
You may use this quicker method for rough calculations, but you should be sure to use the formal method when doing the final calculations that you will report in an analysis summary or full report.
The program RepDat (see chapter 3) also calculates the uncertainty of the mean of a set of
measurements using the formal method (as you can easily check). This is probably the fastest way
to do the formal method if the program is available.
The first condition excludes intrinsically unrepeatable one-time events, and the second excludes the
other two cases. (To decide whether you can be surprised in a given situation, you should prob-
ably take the measurement at least twice and find out.)
EXERCISES
Exercise 11.1
Which of the following measurements are repeatable? Explain your response.
(a) Measuring the length of a sheet of paper with a ruler.
(d) Measuring how long it takes a rock to drop 5 ft using a watch with a sweep second hand.
Exercise 11.2
Consider the set of time measurements given in exercise 3.3 in chapter 3 of this lab reference
guide. What is the mean of these measurements, and what is its uncertainty? Calculate the latter
using both the formal method and the range method. Are the results about the same?
13
EXPONENTIAL CURVE FITTING
13.1 INTRODUCTION
Many processes in nature have exponential dependencies. The decay with time of the ampli-
tude of a pendulum swinging in air, the decrease in time of the temperature of an object that is ini-
tially warmer than its surroundings, and the growth in time of an initially small bacterial colony are
all processes that are well-modeled by exponential relationships.
To better consider the issues involved in dealing with such relationships, let’s consider a very
specific case. The absorption of radiation by a given thickness of some material can be modeled by
the following simple exponential relationship:
R(x) = R0 e^(b x)   (13.1)
Here R(x) is the count rate of radiation particles (typically measured as the number of clicks on a
Geiger counter that take place in some fixed time such as one minute), R0 is the count rate with no
shielding present, x is the thickness of the shielding material, and b is a negative constant that de-
scribes how rapidly the count rate decreases as the shielding thickness increases.
Some measurements of the rate at which radiation particles emitted by ⁵⁵Fe are detected when a Geiger counter is shielded by one or more thin sheets of aluminum foil appear in Table 13.1
and Figure 13.1. Note that the count rate decreases as the thickness of the aluminum shielding in-
creases. (Note also that the error bars in this case are just barely large enough to be visible.)
While the count rate is clearly not a linear function of the shielding thickness x, you could not
(just by looking at the graph) tell the difference between the exponential dependence of equation
13.1 and certain power laws. Finding the value of the constant b would be difficult as well. You
could try different values of b and R0 , calculate an R(x) curve for each pair of values, and then see
which pair best matches your experimental data, but this approach would clearly be very tedious.
A more practical approach is to take the natural logarithm of both sides of equation 13.1:

ln R(x) = ln R0 + b x   (13.2)

Since ln R0 and b are constants, equation 13.2 says that ln R is a linear function of x, so a graph of ln R versus x should be a straight line.

[Figure 13.2: a graph of ln R versus the thickness of the Al shielding (in cm, from 0 to 0.01 cm), showing the data points and the best-fit straight line.]
Furthermore, the slope of this line is b, the value of the constant in the original exponential
equation. You find the value of b by calculating the slope in the usual way. That is,
b = Δy/Δx = (y_2 − y_1)/(x_2 − x_1) = (ln R(x_2) − ln R(x_1))/(x_2 − x_1)   (13.3)
Now, the data graphed above don’t lie perfectly on a straight line, which is the result of experimental uncertainty. Even so, a line looks like a good approximation to the data, and the best-fit line seems to pass through the points (0 cm, 8.0) and (0.01 cm, 5.1). [Remember that, strictly speaking, both of the y values should have the unit terms of ln(counts/minute) added, but those terms will cancel when you do the subtraction.] The slope of this line is

b = (5.1 − 8.0)/(0.01 cm − 0 cm) = −290 cm⁻¹   (13.4)
The point (0 cm, 8.0), where the line intersects the y axis, is (of course) the y intercept of the line, so the value of y(0) is the constant ln R0 in equation 13.2. Therefore, R0 is the antilog (exponential) of 8.0, or 2980 counts/minute. We can thus use the ln R vs. x graph to determine both of the constants in equation 13.2: that equation now reads

ln R(x) = 8.0 − (290 cm⁻¹) x
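When a computer is handy, the same fit can be done numerically by fitting a straight line to ln R versus x. The Python sketch below is ours; since Table 13.1 is not reproduced here, the count rates are invented values generated from the line found above (R0 = 2980 counts/minute, b = −290 cm⁻¹) and rounded:

    import numpy as np

    x = np.array([0.0, 0.002, 0.004, 0.006, 0.008, 0.010])  # thickness (cm)
    R = np.array([2980, 1670, 940, 525, 295, 165])          # counts/minute

    # Fit ln R = ln R0 + b x (equation 13.2) as a straight line in x.
    b, lnR0 = np.polyfit(x, np.log(R), 1)
    print(b)             # slope: about -290 per cm
    print(np.exp(lnR0))  # R0: about 2980 counts/minute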
EXERCISES
Exercise 13.1
Verify that if b = 290 cm⁻¹, then b also equals 29 mm⁻¹, as claimed below equation 13.7.
Exercise 13.2
Figure 13.5 on the next page shows a semi-log graph of the radiation data, with error bars.
Draw the line that you think best fits your data, and then find values of b and R0 from your
line. Since Figure 13.5 is larger than Figure 13.2, your new estimates of b and R0 will prob-
ably be better than the earlier estimates summarized in equation 13.7. [Hint: You will have to calculate ln R for two points on the line to accurately determine the slope, but you can read R0 right from the diagram.]
Exercise 13.3
On the blank semi-log paper provided in Figure 13.6, plot the data given in the table below. Determine whether this data seems to reflect an exponential relationship N = N0 e^(b t), and if so, find the values of b and N0 that best fit this data from both graphs. Also, plot in your lab notebook a graph of ln N versus t on ordinary graph paper and do the same analysis. (You can use equation 9.16 in Chapter 9 of this manual to compute the uncertainty in ln N.)

time t (min)    Number of bacteria N
    10          149,000 ± 15,000
    20          215,000 ± 20,000
    30          335,000 ± 35,000
    40          477,000 ± 45,000
    50          769,000 ± 75,000