Unit-1 Measurement and Error

Unit I of GI5501 covers concepts of measurement and error, including types of errors, reliability of measurements, and error propagation. It discusses direct and indirect measurements, the importance of redundancy in observations, and various statistical methods for analyzing data. The unit emphasizes the significance of adjustments in measurements to improve accuracy and the role of probability in understanding measurement errors.


GI5501

SPATIAL DATA ADJUSTMENT


UNIT I MEASUREMENT AND ERROR
Concepts of measurement and Error
Types of errors
Elementary concepts in probability
Reliability of measurement
Significant figures
Error Propagation
Linearization
Multivariate distribution
Error ellipse
Weights of an observation
Stochastic model and Functional model.
Computational Adjustment
Adjustment is the process of making measured values of a quantity more accurate before they are used in computations to determine the positions of the points associated with the measurements.
Measure: to find the size, length, angle, or amount of something (an unknown quantity) using standard units.
Observe: to see and notice something.
Observation: the process of watching something or someone carefully for a period of time.
Compute: to calculate a result, answer, sum, etc.
Computation: the process of calculating, or the result of calculating.
Computational Adjustment
Adjust
To change or move something slightly to improve it or make it more suitable for a particular purpose; to change something to make it more correct, so that it fits, corresponds, or conforms to desired conditions.

Adjustment is a small change made to a machine, system, or calculation.

Adjustment is the process of distributing (random) errors in measurements or observations so that they conform to certain geometrical conditions (such as zero misclosure).
Concepts of Measurement and Error
Measurements are defined as
- observations made to determine unknown quantities;
- the process of estimating the magnitude of some attribute of an object relative to a standard unit;
- the application of a device or apparatus for the purpose of determining an unknown quantity.

It may be classified as
• Direct Measurement
• Indirect Measurement
Concepts of Measurement and Error
Direct measurements are made by applying an instrument
directly to the unknown quantity and observing its value, usually
by reading it directly from graduated scales on the device.
Determining the distance between two points by making a direct
measurement using a graduated tape.
Measuring an angle by making a direct observation from the
graduated circle of a theodolite or total station instrument, are
examples of direct measurements.
Concepts of Measurement and Error
Indirect measurements are obtained when it is not possible or
practical to make direct measurements. In such cases the
quantity desired is determined from its mathematical
relationship to direct measurements.

Surveyors may measure angles and lengths of lines between points directly and use these measurements to compute station coordinates. From these coordinate values, other distances and angles that were not measured directly may be derived indirectly by computation.
Concepts of Measurement and Error
• One measurement is not a measurement.
• Good practice in surveying calls for performing more measurements than strictly necessary. These extra measurements help provide effective control on the results:
• distance measurements forward and back (traverse network);
• measuring the 3 angles of a triangle (the sum must be 180 degrees);
• levelling loops (the sum of the height differences must be zero);
• face-left and face-right telescope measurements (to check the effect of small misalignments of mechanical components of a total station on the measurements).
• These "extra" measurements aim to:
• detect outliers (gross errors) in the observations;
• check other measurements;
• provide consolidated results (best estimate).
• The judge of QUALITY is reliability.
Concepts of Measurement and Error
MEASUREMENT ERROR SOURCES
(1) No measurement is exact,
(2) Every measurement contains errors,
(3) The true value of a measurement is never known
(4) The exact sizes of the errors present are always unknown.

Introduction to errors
A typical survey measurement may involve operations such as centering, pointing, setting, and reading. Due to human limitations, imperfections in instruments, environmental changes, or carelessness, a certain amount of error is produced.
The errors in measured quantities should be eliminated before they are used in computing other quantities.
Concepts of Measurement and Error
Examples of mistakes include
(a) forgetting to set the proper parts per million
(ppm) correction on an EDM instrument, or
failure to read the correct air temperature,
(b) mistakes in reading graduated scales,
(c) blunders in recording (i.e., writing down
27.55 for 25.75). Mistakes are also known as
blunders or gross errors
Types of Error
Additional examples of systematic errors are
(a) temperature not being standard while taping,
(b) an index error of the vertical circle of a theodolite or total station instrument,
(c) use of a level rod that is not of standard length.
Corrections for systematic errors can be computed and applied to observations to eliminate their effects. Systematic errors are also known as biases.
Types of Error
Random errors. These are the errors that remain after all
mistakes and systematic errors have been removed from the
measured values.
They are the result of human and instrument imperfections.
They are generally small and are as likely to be negative as
positive.
They usually do not follow any physical law and therefore must
be dealt with according to the mathematical laws of probability.
Examples of random errors are
(a) imperfect centering over a point during distance
measurement with an EDM instrument,
(b) bubble not centered at the instant a level rod is read,
(c) small errors in reading graduated scales. It is impossible to
avoid random errors in measurements entirely.
Types of Error
Error and Correction
The true value is never known.
Most probable value (x̄)
The most probable value of a measured quantity is the value which, based on the observations, has the highest probability of occurrence. It is derived from a sample set of data rather than the population, and is simply the arithmetic mean if the repeated measurements have the same precision.
Residual Error (ν)
The difference between the most probable value and any individual measured quantity. Residuals are the values used in adjustment computations, since most probable values can be determined while true errors cannot. The term error is frequently used when residual is meant, and although they are very similar and behave in the same manner, there is this theoretical distinction. The mathematical expression for the residual of the i-th observation is

νi = x̄ − xi

where x̄ is the most probable value and xi is the i-th observation.
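As a minimal numerical sketch of these two definitions (the data are hypothetical repeated distance measurements of equal precision, so the most probable value is simply the arithmetic mean, and the residuals about it sum to zero):

```python
import statistics as st

# Hypothetical repeated measurements of one distance (m), equal precision.
obs = [52.34, 52.36, 52.33, 52.35, 52.37]

mpv = st.mean(obs)                   # most probable value x_bar = arithmetic mean
residuals = [mpv - x for x in obs]   # v_i = x_bar - x_i
```

A quick check that the residuals sum to zero confirms the mean is the value that "balances" the observations.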
Elementary concepts in Probability
GRAPHICAL REPRESENTATION OF DATA
An ordered numerical tabulation of data allows for some analysis of the data distribution, but it can be improved with a frequency histogram, usually called simply a histogram. Histograms are bar graphs that show the frequency distributions in data.
To create a histogram, the data are divided into classes. These are
subregions of data that usually have a uniform range in values, or class
width.
The histogram class width (range of data represented by each histogram
bar) is determined by dividing the total range by the selected number of
classes.
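The class-width rule above can be sketched as follows (the data set and the choice of five classes are hypothetical, chosen only to illustrate the bookkeeping):

```python
# Hypothetical data set; five classes of uniform width.
data = [22.1, 22.4, 22.3, 22.9, 22.5, 22.7, 22.2, 22.6, 22.4, 22.5]
n_classes = 5

lo, hi = min(data), max(data)
width = (hi - lo) / n_classes   # class width = total range / number of classes

counts = [0] * n_classes
for x in data:
    # Index of the class the value falls in; the maximum lands in the last class.
    k = min(int((x - lo) / width), n_classes - 1)
    counts[k] += 1
```

The `counts` list holds the bar heights that a histogram of this data would plot.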
Elementary concepts in Probability

Figure 2.2 shows several possible histogram shapes. Figure 2.2(a) depicts a histogram that is symmetric about its central value with a single peak in the middle. Figure 2.2(b) is also symmetric about the center but has a steeper slope than Figure 2.2(a), with a higher peak for its central value. Assuming the ordinate and abscissa scales to be equal, the data used to plot Figure 2.2(b) are more precise than those used for Figure 2.2(a). Symmetric histogram shapes are common in surveying practice as well as in many other fields. In fact, they are so common that the shapes are said to be examples of a normal distribution. Figure 2.2(c) has two peaks and is said to be a bimodal histogram. In the histogram of Figure 2.2(d), there is a single peak with a long tail to the left, which results from a skewed data set.
Elementary concepts in Probability
Population - A population consists of all possible measurements
that can be made on a particular item or procedure. Often, a
population has an infinite number of data elements.
Sample - A sample is a subset of data selected from the
population.
Elementary concepts in Probability
NUMERICAL METHODS OF DESCRIBING DATA
Numerical descriptors are values computed from a data set that
are used to interpret its precision or quality. Numerical
descriptors fall into three categories:
(1) measures of central tendency
(2) measures of data variation
(3) measures of relative standing
These categories are all called statistics. Simply described, a
statistic is a numerical descriptor computed from sample data.
Elementary concepts in Probability
MEASURES OF CENTRAL TENDENCY
Measures of central tendency are computed statistical quantities that give
an indication of the value within a data set that tends to exist at the
center. The arithmetic mean, the median, and the mode are three such
measures. They are described as follows:
ARITHMETIC MEAN
For a set of n observations y1, y2, ..., yn, this is the average of the observations. Its value is computed from the equation

ȳ = (Σ yi) / n

The symbol ȳ is used to represent a sample's arithmetic mean and the symbol μ is used to represent the population mean.
Elementary concepts in Probability
MEDIAN
As mentioned previously, this is the midpoint of a sample set when
arranged in ascending or descending order. One-half of the data are above
the median and one-half are below it. When there are an odd number of
quantities, only one such value satisfies this condition. For a data set with
an even number of quantities, the average of the two observations that
straddle the midpoint is used to represent the median.
Elementary concepts in Probability
MODE
Within a sample of data, the mode is the most frequently
occurring value. It is seldom used in surveying because of the
relatively small number of values observed in a typical set of
observations. In small sample sets, several different values may
occur with the same frequency, and hence the mode can be
meaningless as a measure of central tendency.
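The three measures of central tendency can be computed directly with the standard library; the data below are hypothetical seconds' portions of repeated angle readings, constructed so that all three measures coincide:

```python
import statistics as st

# Hypothetical seconds' portions of repeated angle readings.
data = [23, 25, 24, 26, 25, 27, 25, 24, 26, 25]

mean = st.mean(data)      # arithmetic mean
median = st.median(data)  # midpoint of the ordered set
mode = st.mode(data)      # most frequently occurring value
```

For a symmetric, single-peaked sample like this one the three measures agree; skewed or bimodal data would pull them apart.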
Elementary concepts in Probability
MEASURES OF RELATIVE STANDING
The three basic measures of relative standing are the z-score (also called the standard score), percentiles (and their percentile ranks), and quartiles.
A Z-score is a numerical measurement that describes a value's relationship
to the mean of a group of values. Z-score is measured in terms of standard
deviations from the mean. If a Z-score is 0, it indicates that the data point's
score is identical to the mean score.
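The z-score of a value x in a sample is z = (x − mean) / s, i.e. its distance from the mean expressed in standard deviations. A small sketch with hypothetical readings:

```python
import statistics as st

# Hypothetical sample of repeated readings.
data = [8.0, 8.2, 7.9, 8.1, 8.3]

mean = st.mean(data)
s = st.stdev(data)      # sample standard deviation
z = (8.3 - mean) / s    # z-score of the largest reading
```

A z-score of 0 would mean the value equals the sample mean; here the largest reading sits a bit more than one standard deviation above it.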
Elementary concepts in Probability
DEGREES OF FREEDOM / REDUNDANT OBSERVATIONS
The number of observations that are in excess of the number
necessary to solve for the unknowns. In other words, the number of
degrees of freedom equals the number of redundant observations. As
an example, if a distance between two points is measured three times,
one observation would determine the unknown distance and the other
two would be redundant. These redundant observations reveal the
discrepancies and inconsistencies in observed values. This, in turn,
makes possible the practice of adjustment computations for obtaining
the most probable values based on the measured quantities.
Redundant observations are observations that exceed the minimum number needed to determine an unknown.
Redundant observations allow the detection of random errors and allow an adjustment to be made to obtain a final or most probable value (MPV) for the unknown.
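The counting rule from the example in the text (a distance measured three times has one unknown, so two observations are redundant) reduces to simple arithmetic:

```python
# Degrees of freedom = number of observations - number of unknowns.
# Example from the text: one distance measured three times.
n_obs, n_unknowns = 3, 1
dof = n_obs - n_unknowns   # the redundant observations
```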
Elementary concepts in Probability
DEGREES OF FREEDOM / REDUNDANT OBSERVATIONS
A least squares adjustment is said to contain redundancy if the
total number of measurements exceeds the minimum number required
to compute the unknown parameters (i.e. when the degrees of
freedom is greater than zero). When repeated measurements are
taken to estimate an unknown parameter, the
additional measurements are said to be redundant.
Elementary concepts in Probability
DEGREES OF FREEDOM / REDUNDANT OBSERVATIONS
The following base lines are observed with GNSS for network adjustment.
Find the redundant observation for the survey network.
Elementary concepts in Probability

The standard deviation is computed from the residuals of the data. Residuals are used rather than errors because they can be calculated from most probable values, whereas errors cannot be determined. For a sample data set, 68.3% of the observations will theoretically lie between the most probable value minus the standard deviation and the most probable value plus the standard deviation.
Elementary concepts in Probability
From multiple measurements to a unique result (best linear unbiased estimator):

            Serie 1   Serie 2
1           1.456     1.456
2           1.567     1.457
3           1.345     1.453
4           1.678     1.453
5           1.453     1.467
6           1.456     1.459
7           1.467     1.453
8           1.678     1.452
9           1.567     1.459
MEAN        1.519     1.457
STDEV       0.112     0.005
MEDIAN      1.467     1.456
MIN         1.345     1.452
MAX         1.678     1.467
Δ Min–Max   0.333     0.015

Which series has the more "precise" measurement? How would you treat a new 10th measurement?
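The tabulated statistics for the two series can be reproduced with the standard library (sample standard deviation, i.e. an n − 1 divisor):

```python
import statistics as st

serie1 = [1.456, 1.567, 1.345, 1.678, 1.453, 1.456, 1.467, 1.678, 1.567]
serie2 = [1.456, 1.457, 1.453, 1.453, 1.467, 1.459, 1.453, 1.452, 1.459]

mean1, mean2 = st.mean(serie1), st.mean(serie2)
sd1, sd2 = st.stdev(serie1), st.stdev(serie2)   # sample standard deviations
```

The means are nearly alike in scale, but the standard deviation of Serie 2 is more than twenty times smaller, which is what "more precise" means here.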
Elementary concepts in Probability
Using the data from Table 2.2, determine the sample set’s mean, median,
and mode and the standard deviation. Also plot its histogram. (Recall
that the data of Table 2.2 result from the seconds’ portion of 50
theodolite directions.)
Elementary concepts in Probability
PROBLEMS
1. The optical micrometer of a precise differential level is set and read 10
times as 8.801, 8.803, 8.798, 8.801, 8.799, 8.802, 8.802, 8.804,
8.800, and 8.802. What value would you assign to the operator’s
ability to set the micrometer on this instrument?
2. An EDM instrument and reflector are set at the ends of a baseline that
is 400.781 m long. Its length is measured 24 times, with the following
results:
400.787 400.796 400.792 400.787 400.787 400.786 400.792 400.794
400.790 400.788 400.797 400.794 400.789 400.785 400.791 400.791
400.793 400.791 400.792 400.787 400.788 400.790 400.798 400.789
(a) What are the mean, median, and standard deviation of the data?
(b) Construct a histogram of the data with five intervals and describe its properties. On the histogram, lay off the sample standard deviation from both sides of the mean.
(c) How many observations lie within one standard deviation of the mean, and what percentage of the observations does this represent?
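A sketch of part (a) of problem 2, computed directly from the 24 listed observations:

```python
import statistics as st

# The 24 baseline observations from problem 2 (m).
obs = [400.787, 400.796, 400.792, 400.787, 400.787, 400.786, 400.792, 400.794,
       400.790, 400.788, 400.797, 400.794, 400.789, 400.785, 400.791, 400.791,
       400.793, 400.791, 400.792, 400.787, 400.788, 400.790, 400.798, 400.789]

mean = st.mean(obs)      # most probable value of the baseline length
median = st.median(obs)  # average of the two middle values (even n)
stdev = st.stdev(obs)    # sample standard deviation
```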
Elementary concepts in Probability
ANALYSIS OF DIRECT REPEATED OBSERVATIONS
• Numerical/statistical methods (mean, median, mode, standard deviation)
• Graphical representation (scatterplot, frequency histogram)
Elementary concepts in Probability
I. Probable error (Es): Es = ±0.6745·σ
II. Probable error of the mean (Em): Em = ±0.6745·σ/√n = Es/√n
III. Probable error of a sum: E = ±√(Es1² + Es2² + … + Esn²)
IV. Mean square error (m.s.e.): m = ±√(Σν²/(n − 1))

where σ is the standard deviation of a single observation and ν are the residuals.
Reliability of measurement
Reliability refers to how consistently a method measures
something. If the same result can be consistently achieved by
using the same methods under the same circumstances, the
measurement is considered reliable.
Several terms are used to express the reliability of measurements.
Three common terms are
• Precision
• Accuracy
• Uncertainty
Reliability of measurement
Accuracy is the measure of the absolute nearness of a measured quantity to its true value. Since the true value of a quantity can never be determined, accuracy is always unknown. Accuracy includes not only the effects of random errors but also any bias due to uncorrected systematic errors. If there is no bias, the standard deviation can also be used as a measure of accuracy.
Reliability of measurement
Precision is the degree of consistency between observations
based on the sizes of the discrepancies in a data set. The degree
of precision attainable is dependent on the stability of the
environment during the time of measurement, the quality of the
equipment used to make the observations, and the observer’s
skill with the equipment and observational procedures. Precision
is indicated by the dispersion or spread of the probability
distribution. A common measure of precision is the standard deviation (σ): the higher the precision, the lower the value of the standard deviation, and vice versa.
Reliability of measurement
Uncertainty is the range within which the error of a measurement is expected to fall. A specified level of probability is generally associated with an uncertainty. The 90% uncertainty, for example, is the range of values within which it is 90% probable (i.e., the probability is 0.90) that the error of the measurement will fall. If the uncertainty of a measurement is known, it should accompany the measured value.
Significant figures
Significant figures are the digits of a number that carry meaningful information about its precision. The term significant digits is often used instead of significant figures. The number of significant digits is found by counting all digits starting from the first non-zero digit on the left. For example, 12.45 has four significant digits.

Definition
The significant figures of a given number are those digits that convey meaning according to its accuracy. For example, 6.658 has four significant digits. These figures indicate the precision of the number.
Significant figures
Rules for Significant Figures
• All non-zero digits are significant. 198745 contains six significant digits.
• All zeros that occur between any two non-zero digits are significant. For example, 108.0097 contains seven significant digits.
• Zeros to the right of a decimal point but to the left of the first non-zero digit are never significant. For example, 0.00798 contains three significant digits.
• Zeros to the right of the decimal point that follow a non-zero digit are significant. For example, 20.00 contains four significant digits.
• All zeros to the right of the last non-zero digit after the decimal point are significant. For example, 0.0079800 contains five significant digits.
• Zeros to the right of the last non-zero digit of a whole number are significant if they come from a measurement. For example, 1090 m contains four significant digits.
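Most of the rules above can be sketched as a small counting function. This is a simplification: for whole numbers without a decimal point, trailing zeros are treated as not significant, whereas the measurement convention in the last rule (1090 m) would count them.

```python
def significant_figures(s: str) -> int:
    """Count significant figures in a decimal numeral given as a string.

    Simplified convention: trailing zeros count only when a decimal
    point is present ("20.00" -> 4); in a bare whole number they are
    treated as not significant, unlike a measured value such as 1090 m.
    """
    s = s.lstrip("+-")
    digits = s.replace(".", "")
    stripped = digits.lstrip("0")        # leading zeros are never significant
    if "." in s:
        return len(stripped)             # trailing zeros after the point count
    return len(stripped.rstrip("0"))     # trailing zeros without a point do not
```

Applied to the examples in the rules: "108.0097" gives 7, "0.00798" gives 3, "20.00" gives 4, and "0.0079800" gives 5.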
Significant figures
Rounding Significant Figures
Error Propagation
Unknown values are often determined indirectly by making direct measurements of other quantities that are functionally related to the desired unknowns. Since all directly measured quantities contain errors, any values computed from them will also contain errors. This intrusion, or propagation, of errors into quantities computed from direct measurements is called error propagation. The task is to evaluate the errors in the computed quantities as a function of the errors in the measurements.
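For uncorrelated measurements, the general law of propagation of variances gives the standard deviation of a computed quantity z = f(x1, …, xn) as the square root of the sum of the squared products of partial derivatives and measurement standard deviations. A sketch with a hypothetical rectangle area (the side lengths and their standard deviations are invented for illustration):

```python
import math

def propagate(partials, sigmas):
    """Law of propagation of variances for uncorrelated measurements:
    sigma_z = sqrt(sum((df/dx_i * sigma_i)**2))."""
    return math.sqrt(sum((p * s) ** 2 for p, s in zip(partials, sigmas)))

# Hypothetical example: area A = a*b of a measured rectangle.
a, b = 40.00, 25.00    # measured sides (m)
sa, sb = 0.02, 0.01    # their standard deviations (m)

# Partial derivatives: dA/da = b and dA/db = a.
sigma_A = propagate([b, a], [sa, sb])   # standard deviation of the area (m^2)
```

Note how the error of the computed area depends not only on the measurement errors but also on the magnitudes of the measured values through the partial derivatives.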
Linearization
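Linearization replaces a non-linear observation equation with its first-order Taylor expansion about approximate values of the unknowns. A sketch for the distance between two points, with hypothetical coordinates (station A held fixed, approximate coordinates for B):

```python
import math

def dist(xa, ya, xb, yb):
    """Plane distance between A and B."""
    return math.hypot(xb - xa, yb - ya)

# Linearize d about approximate coordinates (x0, y0) of station B:
#   d  ≈  d0 + (dd/dxb)*dx + (dd/dyb)*dy
xa, ya = 0.0, 0.0              # station A (held fixed)
x0, y0 = 300.0, 400.0          # approximate coordinates of B (hypothetical)
d0 = dist(xa, ya, x0, y0)      # distance at the approximation point

ddx = (x0 - xa) / d0           # partial derivatives evaluated
ddy = (y0 - ya) / d0           # at the approximation point

dx, dy = 0.10, -0.05           # small corrections to the approximations
d_lin = d0 + ddx * dx + ddy * dy
```

For corrections this small, the linearized value agrees with the exact recomputed distance to well under a millimetre, which is why iterative least-squares adjustments can work with the linear form.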
Error Propagation
If errors are given as standard deviations

Problem
In running a line of levels, 18 instrument setups are required, with a
backsight and foresight taken from each. For each rod reading, the error
estimated is ±0.002 m. What is the error in the measured elevation
difference between the origin and the terminus?
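The problem is a direct application of error propagation for a sum: the elevation difference is a sum/difference of 36 independent rod readings (one backsight and one foresight per setup), so the variances add:

```python
import math

setups = 18
readings = 2 * setups      # one backsight + one foresight per setup
sigma_reading = 0.002      # m, estimated error per rod reading

# Variances of independent readings add, so the error of the
# total elevation difference is sigma_reading * sqrt(readings).
sigma_total = sigma_reading * math.sqrt(readings)   # = 0.002 * 6 = 0.012 m
```

The measured elevation difference therefore carries an error of about ±0.012 m.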
Multivariate distribution
Multivariate distributions show comparisons between two
or more measurements and the relationships among
them. For each univariate distribution with one random
variable, there is a more general multivariate
distribution. For example, the normal distribution is
univariate and its more general counterpart is the
multivariate normal distribution. While the multivariate
normal model is the most commonly used model for
analyzing multivariate data, there are many more: the
multivariate lognormal distribution, the multivariate
binomial distribution, and so on.
Multivariate distribution
A bivariate distribution is the simplest multivariate distribution, consisting of one pair of random variables. Theoretically, however, there can be any number of variables, and results for the bivariate case generalize to n random variables.
Multivariate distribution
Each random variable in a multivariate distribution has its own mean and variance: there isn't a "one size fits all" probability density function, as there is with a univariate distribution. For discrete random variables, multivariate distributions are described by joint probabilities. For continuous random variables, the relevant univariate distribution is extended. However, once you dive into the depths of multivariate distributions, analysis gets a little more complicated. As multivariate analysis involves vector observations, an understanding of the variance-covariance matrix is vital to understanding multivariate normal distributions.
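The variance-covariance matrix mentioned above can be estimated directly from vector observations. A sketch with hypothetical repeated (x, y) coordinate determinations of one station:

```python
import numpy as np

# Hypothetical repeated (x, y) coordinate determinations of one station (m).
xy = np.array([[100.012, 200.021],
               [100.031, 199.994],
               [ 99.985, 200.003],
               [100.022, 200.034],
               [ 99.964, 199.958]])

means = xy.mean(axis=0)           # one mean per random variable
cov = np.cov(xy, rowvar=False)    # 2x2 variance-covariance matrix
# cov[0, 0] and cov[1, 1] are the variances of x and y;
# cov[0, 1] = cov[1, 0] is their covariance.
```

The off-diagonal covariance term is exactly the quantity that couples the pair of coordinates and gives rise to the error ellipse discussed next.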
Error ellipse
Error ellipses are a graphical tool used to illustrate the pair-wise correlation that exists between computed values. Using 2-D plane coordinates as an example, if the correlation between the computed coordinates is zero, then the orientation of the error ellipse corresponds to that of the host coordinate system. As a special case, if the correlation is zero and the standard deviations are the same for both coordinates, then the error ellipse is a circle (and orientation is immaterial). However, if the standard deviation is the same for both coordinates and the correlation is not zero, then the maximum and minimum standard deviations will occur at some other orientation.
Error ellipse
Standard Error Rectangle and Error Ellipse
One of the advantages of a least-squares adjustment over other methods is that a byproduct of the adjustment is not only the most probable values for the unknown coordinates but also the standard deviations of these values.
Suppose we compute the coordinates of station B shown in Figure 1 using an azimuth AZAB with an uncertainty of SAz and a distance AB with an uncertainty of SD. Even if we consider the coordinates of station A to be perfect, the coordinates of B will have uncertainties of Sx and Sy due to the errors in the azimuth and distance.
Error ellipse
Standard Error Rectangle and Error Ellipse
However, the standard error rectangle does not depict the actual error in the coordinates. This is where an error ellipse comes in. As shown in Figure 1, the standard error ellipse is bounded by the standard error rectangle. It is the result of the bivariate (x and y variables) distribution shown in Figure 2(a).
If Figure 2(a) is contoured, Figure 2(b) is the result. Figure 2(b) depicts error ellipses at various levels of probability.
Error ellipse
Standard Error Rectangle and Error Ellipse
The components of the error ellipse are shown in Figure 3. The angle t is the amount of rotation required to make the correlation between the (n, e) pair of coordinates equal to zero. It is measured from the positive N axis.
The major axis of the ellipse is the U axis and the minor axis is the V axis. The length of the semimajor axis of the ellipse is given by Su, which corresponds to the largest error component at the station, and the length of the semiminor axis is given by Sv, which corresponds to the smallest error component at the station.
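The semi-axes of the standard error ellipse are the square roots of the eigenvalues of the 2×2 coordinate covariance matrix, and the rotation that decorrelates the pair follows from the standard bivariate result. A sketch with hypothetical variances and covariance; note the angle here is computed in the usual mathematical (x-axis) convention, whereas the text measures t from the N axis, so the two conventions differ by how the axes are named:

```python
import math

# Hypothetical 2-D covariance of a station's coordinates (m^2).
sx2, sy2, sxy = 0.00040, 0.00010, 0.00008   # variances and covariance

# Rotation that makes the correlation between the pair zero:
#   tan(2t) = 2*sxy / (sx2 - sy2)
t = 0.5 * math.atan2(2 * sxy, sx2 - sy2)

# Semi-axes: square roots of the eigenvalues of the covariance matrix.
half_sum = (sx2 + sy2) / 2
half_diff = math.hypot((sx2 - sy2) / 2, sxy)
Su = math.sqrt(half_sum + half_diff)   # semimajor axis: largest error component
Sv = math.sqrt(half_sum - half_diff)   # semiminor axis: smallest error component
```

A useful sanity check: the sum Su² + Sv² equals the trace of the covariance matrix, since rotation preserves total variance.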
Weights of Observations
The weight of an observation is a measure of its relative worth
compared to other measurements. Weights are used to control the sizes of
corrections applied to measurements in an adjustment. The more precise
an observation, the higher its weight; in other words, the smaller the
variance, the higher the weight. From this analysis it can be stated
intuitively that weights are inversely proportional to variances. Thus, it also
follows that correction sizes should be inversely proportional to weights.

Weight of an observation: Wi = σ0² / σi²

where σ0² is the reference variance and σi² is the variance of observation i.
Weights of Observations
A distance was measured in two parts with a 30m steel tape, 30m
metallic tape and 30m invar tape. Five repetitions were made by
each method. What is the weight of the observation? Calculate
the most probable value of the length and find the standard
deviation of the weighted mean.
Distances measured with a 30m steel tape:
186.783, 186.773, 186.786, 186.784, 186.779m
Distances measured with a 30m metallic tape:
186.778, 186.776, 186.781, 186.766, 186.789m
Distances measured with a 30m Invar tape:
186.784, 186.786, 186.787, 186.785, 186.786m
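A sketch of the computation for this problem: each tape's sample variance is estimated from its five repetitions, the weights are taken inversely proportional to those variances, and the most probable value is the weighted mean of the three set means.

```python
import statistics as st

# The three measurement sets from the problem (m).
steel    = [186.783, 186.773, 186.786, 186.784, 186.779]
metallic = [186.778, 186.776, 186.781, 186.766, 186.789]
invar    = [186.784, 186.786, 186.787, 186.785, 186.786]

sets = [steel, metallic, invar]
means = [st.mean(s) for s in sets]
weights = [1 / st.variance(s) for s in sets]   # w_i proportional to 1/sigma_i^2

# Most probable value: weighted mean of the three set means.
wmean = sum(w * m for w, m in zip(weights, means)) / sum(weights)
```

Because the invar tape's variance is by far the smallest, its weight dominates and the weighted mean lands close to the invar set's mean.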
Weights of Observations
The following readings were observed as part of a triangulation survey using a digital theodolite, with the instrument at D. Compute the horizontal angles (ADB, BDC, ADC) and the zenith angles (DA, DB, DC) and their weights. Angles are in º ' "; (D) = telescope direct, (I) = telescope inverted.

Sight   Horiz (D)    Zenith (D)   Horiz (I)    Zenith (I)
Ref     0 00 00                   0 00 00
A       0 28 26      88 37 38     0 28 34      271 22 30
B       33 32 55     89 48 56     33 32 58     270 11 14
C       71 39 42     89 50 01     71 39 51     270 10 02
Ref     0 00 02                   359 59 59

Ref     90 00 00                  90 00 00
A       90 28 40     88 37 45     90 28 28     271 22 20
B       123 33 34    89 49 35     123 33 15    270 10 20
C       161 39 45    89 50 07     161 40 22    270 10 05
Ref     89 59 56                  90 00 02

Ref     180 00 00                 180 00 00
A       180 28 17    88 37 45     180 28 34    271 22 22
B       213 32 48    89 48 49     213 33 05    270 11 12
C       251 39 34    89 49 33     251 39 35    270 10 15
Ref     179 59 58                 180 00 03

Ref     270 00 00                 270 00 00
A       270 28 19    88 37 23     270 28 34    271 22 32
B       303 32 52    89 48 53     303 32 23    270 11 11
Stochastic model and Functional model
A least squares adjustment can be divided into two parts: the stochastic model and the functional model.
Stochastic model
The stochastic model is the weighting model that controls the size of the corrections applied to the observations. For independent observations, the weight of an observation is inversely proportional to its variance. The fundamental principle of least squares applies directly when observations have equal or unit weights; the more general case assumes that the observations have varying degrees of precision and thus varying weights. The determination of the variances, and subsequently the weights of the observations, is known as the stochastic model in a least squares adjustment.
Stochastic: having a random probability distribution or pattern that may be analysed statistically but may not be predicted precisely.
Stochastic model and Functional model
Stochastic model
A stochastic model has the capacity to handle uncertainties in the inputs applied. Stochastic models possess some inherent randomness: the same set of parameter values and initial conditions will lead to an ensemble of different outputs.
The determination of variances, and subsequently the weights of the observations, is known as the stochastic model in a least squares adjustment. It is crucial to select a proper stochastic (weighting) model, since the weight of an observation controls the amount of correction it receives during the adjustment. However, development of the stochastic model is important not only for weighted adjustments: when doing an unweighted adjustment, all observations are assumed to be of equal precision.
Stochastic model and Functional model
Functional model
A function model or functional model in systems
engineering and software engineering is a structured
representation of the functions (activities, actions, processes,
operations) within the modeled system or subject area
A functional model in adjustment computations is an
equation or set of equations/functions that represents or defines
an adjustment condition. If the functional model represents the
physical situation adequately, the observational errors can be
expected to conform to the normal distribution curve.
There are two basic forms for functional models: the
conditional and parametric adjustments.
Stochastic model and Functional model
Functional model
In a conditional adjustment, geometric conditions are enforced on the observations and their residuals. Examples of conditional adjustments are:
(1) the sum of the angles in a closed polygon is (n − 2)·180º, where n is the number of sides of the polygon;
(2) the latitudes and departures of a polygon traverse sum to zero;
(3) the sum of the angles about the horizon equals 360º.
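Condition (1) for a triangle can be sketched in a few lines: with equally weighted angles, the misclosure against the 180º condition is distributed equally among the observations (the angle values below are hypothetical):

```python
# Hypothetical observed angles of a plane triangle (degrees), equal weights.
obs = [59.9994, 60.0008, 60.0007]

misclosure = sum(obs) - 180.0             # the geometric condition: sum = 180
adj = [a - misclosure / 3 for a in obs]   # distribute the misclosure equally
```

After adjustment the angles satisfy the condition exactly; with unequal weights, each correction would instead be proportional to 1/w for that angle.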
When performing a parametric adjustment, observations are
expressed in terms of unknown parameters that were never
observed directly. For example, the well-known coordinate equations
are used to model the angles, directions, and distances observed in
a horizontal plane survey. The adjustment yields the most probable
values for the coordinates (parameters), which in turn provide the
most probable values for the adjusted observations.
Stochastic model and Functional model
The mathematical model for an adjustment is the combination of the stochastic model and the functional model.
The functional model describes the mathematical relationship between the GPS observations and the unknown parameters, while the stochastic model describes the statistics of the GPS observations.
Data differencing techniques are extensively used for constructing the functional model as they can eliminate many of the troublesome GPS biases, such as the atmospheric bias, the receiver clock bias, the satellite clock bias, and so on.

The stochastic and functional models must both be correct if the adjustment is to yield the most probable values for the unknown parameters. That is, it is just as important to use a correct stochastic model as it is to use a correct functional model. Improper weighting of observations will result in the unknown parameters being determined incorrectly.
