Error Analysis and Simulations of Complex Phenomena
Large-scale computer-based simulations are being used increasingly to predict the behavior of complex systems. Prime examples include the weather, global climate change, the performance of nuclear weapons, the flow through an oil reservoir, and the performance of advanced aircraft. Simulations invariably involve theory, experimental data, and numerical modeling, all with their attendant errors. It is thus natural to ask, "Are the simulations believable?" "How does one assess the accuracy and reliability of the results?" This article lays out methodologies for analyzing and combining the various types of errors that can occur and then gives three concrete examples of how error models are constructed and used.
At the top of these two pages is a simulation of low-viscosity gas (purple) displacing higher-viscosity oil (red) in an oil recovery process. Error models can be used to improve predictions of oil production from this process. Above, at left, is a component of such an error model, and at right is a prediction of future oil production for a particular oil reservoir obtained from a simple empirical model in combination with the full error model.
Reliable Predictions of Complex Phenomena

There is an increasing demand for reliable predictions of complex phenomena encompassing, where possible, accurate predictions of full-system behavior. This requirement is driven by the needs of science itself, as in modeling of supernovae or protein interactions, and by the need for scientifically informed assessments in support of high-consequence decisions affecting the environment, national security, and health and safety. For example, decisions must be made about the amount by which greenhouse gases released into the atmosphere should be reduced, whether and for what conditions a nuclear weapon can be certified (Sharp and Wood-Schulz 2003), or whether development of an oil field is economically sound. Large-scale computer-based simulations provide the only feasible method of producing quantitative, predictive information about such matters, both now and for the foreseeable future. However, the cost of a mistake can be very high. It is therefore vitally important that simulation results come with a high level of confidence when used to guide high-consequence decisions.

Confidence in expectations about the behavior of real-world phenomena is typically based on repeated experience covering a range of conditions. But for the phenomena we consider here, sufficient data for high confidence is often not available for a variety of reasons. Thus, obtaining the needed data may be too hazardous or expensive, it may be forbidden as a matter of policy, as in the case of nuclear testing, or it just may not be feasible. Confidence must then be sought through understanding of the scientific foundations on which the predictions rest, including limitations on the experimental and calculational data and numerical methods used to make the prediction. This understanding must be sufficient to allow quantitative estimates of the level of accuracy and limits of applicability of the simulation, including evidence that any factors that have been ignored in making the predictions actually have a small effect on the answer. If, as sometimes happens, high-confidence predictions cannot be made, this fact must also be known, and a thorough and accurate uncertainty analysis is essential to identify measures that could reduce uncertainties to a tolerable level, or mitigate their impact.

Our goal in this paper is to provide an overview of how the accuracy and reliability of large-scale simulations of complex phenomena are assessed, and to highlight the role of what is known as an error model in this process.

Why Is It Hard to Make Accurate Predictions of Complex Phenomena?

We begin with a couple of examples that illustrate some of the uncertainties that can make accurate predictions difficult. In the oil industry, predictions of fluid flow through oil reservoirs are difficult to make with confidence because, although the fluid properties can be determined with reasonable accuracy, the fluid flow is controlled by the poorly known rock permeability and porosity. The rock properties can be measured by taking samples at wells, but these samples represent only a tiny fraction of the total reservoir volume, leading to significant uncertainties in fluid flow predictions. As an analogy of the difficulties faced in predicting fluid flow in reservoirs, imagine drawing a street map of London and then predicting traffic flows based on what you see from twelve street corners in a thick fog!

In nuclear weapons certification, a different problem arises. The physical processes in an operating nuclear weapon are not all accessible to laboratory experiments (O'Nions et al. 2002). Since underground testing is excluded by the Comprehensive Test Ban Treaty (CTBT), full system predictions can only be compared with limited archived test data.

The need for reliable predictions is not confined to the two areas above. Weather forecasting, global climate modeling, and complex engineering projects, such as aircraft design, all generate requirements for reliable, quantitative predictions—see, for example, Palmer (2000) for a study of predictability in weather and climate simulations. These often depend on features that are hard to model at the required level of detail—especially if many simulations are required in a design-test-redesign cycle.

More generally, because we are […]
[…] outside the initial prediction. In other words, the initial estimates of reserves, although probabilistic, did not capture the full range of uncertainty and were thus unreliable. This situation was obviously a cause for concern for a company with billions of dollars in investments on the line.

Probabilistic predictions are also used in weather forecasting. If the probabilistic forecast "20 percent chance of rain" were correct, then on average it would have rained on 1 in 5 days that received that forecast. Data on whether or not it rained are easily obtained. This rapid and repeated feedback on weather predictions has resulted in significantly improved reliability of forecasts compared with predictions of uncertainty in oil reserves. The comparison between the observed frequency of precipitation and a probabilistic forecast for a locality in the United States shown in Figure 2 confirms the accuracy of the forecasts.

This accuracy did not come easily, and so we next briefly describe two of the principal methods currently used to improve the accuracy of predictions of complex phenomena: calibration and data assimilation.

Calibration is a procedure whereby a simulation is matched to a particular set of experimental data by performing a number of runs in which uncertain model parameters are varied to obtain agreement with the selected data set. This procedure is sometimes called "tuning," and in the oil industry it is known as history matching. Calibration is useful when codes are to be used for interpolation, but it is of limited help for extrapolation outside the data set that was used for tuning. One reason for this lack of predictability is that calibration only ensures that unknown errors from different sources, say inaccurate physics and numerics, have been adjusted to compensate one another, so that the net error in some observable is small. Because different physical processes and numerical errors are unlikely to scale in the same way, a calibrated simulation is reliable only for the regime for which it has been shown to match experimental data.

In one variant of calibration, multiple simultaneous simulations are performed with different models. The "best" prediction is defined as a weighted average over the results obtained with the different models. As additional observations become available, the more successful models are revealed, and their predictions are weighted more heavily. If the models used reflect the range of modeling uncertainty, then the range of results will indicate the variance of the prediction due to those uncertainties.

Data assimilation, while basically a form of calibration, has important distinctive features. One of the most important is that it enables real-time utilization of data to improve predictions. The need for this capability comes from the fact that, in operational weather forecasting, for example, there is insufficient time to restart a run from the beginning with new data, so that this information must be incorporated on the fly. In data assimilation, one makes repeated corrections to model parameters during a single run, to bring the code output into agreement with the latest data. The corrections are typically determined using a time series analysis of the discrepancies between the simulation and the current observations. Data assimilation is widely used in weather forecasting. See Kao et al. (2004) for a recent application to shock-wave dynamics.

Sources of Error and How to Analyze Them

Introducing Error Models. The role of a thorough error analysis in establishing confidence in predictions has been mentioned. But evaluating the error in a prediction is often more difficult than making the prediction in the first place, and when confidence in the answer is an issue, it is just as important.

A systematic approach for determining and managing error in simulations is to try to represent the effects of inaccurate models, neglected phenomena, and limited solution accuracy using an error model. Unlike the calibration and data assimilation methods discussed above, an error model is not primarily a method of increasing the accuracy of a simulation. Error modeling aims to provide an independent estimate of the known inadequacies in the simulation. An error model does not purport to provide a complete and precise explanation of observed discrepancies between simulation and experiment or, more generally, of the differences between the simulation model and the real world. In practice, an error model helps one achieve a scientific understanding of the knowable sources of error in the simulation and put quantitative bounds on as much of the error as possible.

Simulation Errors. Computer codes used for calculating complex phenomena combine models for diverse physical processes with algorithms for solving the governing equations. Large databases containing material properties such as cross sections or equations of state that tie the simulation to a real-world system must be integrated into the simulation at the lowest level of aggregation. These components and, significantly, input from the user of the code must be linked by a sophisticated computer science infrastructure, with the result that a simulation code for complex phenomena is an exceedingly elaborate piece of software. Such codes, while elaborate, still provide only an approximate representation of reality. Simulation errors come from three […]
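To make the calibration loop described above concrete, here is a minimal sketch, assuming an invented exponential-decay "simulator" in place of a real reservoir or weather code; the parameter ranges, synthetic data, and noise level are all illustrative rather than taken from this article:

```python
import numpy as np

# Toy "simulator": an exponential decay whose amplitude and rate are uncertain.
def simulate(amplitude, rate, t):
    return amplitude * np.exp(-rate * t)

# Synthetic "experimental" data used as the tuning set.
t_obs = np.linspace(0.0, 5.0, 20)
y_obs = simulate(2.0, 0.7, t_obs) + np.random.default_rng(1).normal(0.0, 0.05, t_obs.size)

# Calibration ("history matching"): scan the uncertain parameters over their
# plausible ranges and keep the pair that best matches the selected data set.
amplitudes = np.linspace(1.0, 3.0, 81)
rates = np.linspace(0.1, 1.5, 81)
best = min(
    ((a, r) for a in amplitudes for r in rates),
    key=lambda p: float(np.sum((simulate(p[0], p[1], t_obs) - y_obs) ** 2)),
)
print("calibrated (amplitude, rate):", best)
```

The caveat about extrapolation applies directly to such a fit: the tuned pair is only trustworthy for conditions resembling t_obs, the data used for tuning.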
[…] a component that either is or appears to be random, whether the process that is the subject of the measurement is random or not. This component is the ubiquitous "noise" that arises from a wide variety of unwanted or uncharacterized processes occurring in the measurement apparatus. The way in which noise affects a measurement must be taken into consideration to attain valid conclusions based on that data. Noise is typically treated probabilistically, either separately or included with a statistical treatment of other random error. However, systematic error is often both more important and more difficult to deal with than random error. It is also frequently overlooked, or even ignored.

To see how a systematic error can occur, imagine that an opinion poll on the importance of education was conducted by questioning people on street corners "at random"—not knowing that many of them were coming and going from a major library that happened to be located nearby. It is virtually certain that those questioned would on average place a higher importance on education than the population in general. Even if a very large number of those individuals were questioned, an activity that would result in a small statistical sampling error, conclusions about the importance of higher education drawn from this data could be incorrect for the population at large. This is why carefully conducted polls seek to avoid systematic errors, or biases, by ensuring that the population sampled is representative.

As a second example, suppose that 10 measurements of the distance from the Earth to the Sun gave a mean value of 95,000,000 miles due, say, to flaws in an electric cable used in making these measurements. How would someone know that 95,000,000 miles is the wrong answer? This error could not be revealed by a statistical analysis of only those 10 measurements. Additional, independent measurements made with independent measuring equipment would suggest that something was wrong if they were inconsistent with these results. However, the cause of the systematic error could only be identified through a physical understanding of how the instruments work, including an analysis of the experimental procedures and the experimental environment. In this example, the additional measurements should show that the electrical characteristics of the cable were not as expected. To reiterate, the point of both examples is that an understanding of the systematic error in a measured quantity requires an analysis that is independent of the instrument used for the original measurement.

An example of how difficult it can be to determine uncertainties correctly is shown in Figure 3, a plot of estimates of the speed of light vs the date of the measurement. The dotted line shows the accepted value, and the published experimental uncertainties are shown as error bars. The length of the error bars—1.48 times the standard deviation—is the "90 percent confidence interval" for a normally distributed uncertainty for the experimental error; that is, the experimental error bars will include the correct value 90 percent of the time if the uncertainty were assessed correctly. It is evident from the figure, however, that many of the analyses were inadequate: The true value lies outside the error bars far more often than 10 percent of the time. This situation is not uncommon, and it provides an example of the degree of caution appropriate when using experimental results.

The analysis of experimental error is often quite arduous, and the rigor with which it is done varies in practice, depending on the importance of the result, the accuracy required, whether the measurement technique is standard or novel, and whether the result is controversial. Often, the best way to judge the adequacy of an analysis of uncertainty in a complex experiment is to repeat the experiment with an independent method and an independent team.

Solution errors enter an analysis of simulation error in several ways. In addition to being a direct source of error in predictions made with a given model, solution errors can bias the conclusions one draws from comparing a model to data in exactly the same way that experimental errors do. Solution errors also can affect a simulation almost covertly: It is common for the data or the code output to need further processing before the two can be directly compared. When this processing requires modeling or simulation with a different code, then the solution error from that calculation can affect the comparison. As with experimental errors, solution errors must be determined independently of the simulations that are being used for prediction.

Using Data to Constrain Models

The scientific method uses a cycle of comparison of model results with data, alternating with model modification. A thorough and accurate error analysis is necessary to validate improvements. The availability of data is a significant issue for complex systems, and data limitations permeate efforts to improve simulation-based predictions. It is therefore important to use all relevant data in spite of differences in experiment design and measurement technique. This means that it is important to have a procedure to combine data from diverse sources and to understand the significance of the various errors that are responsible for limitations on predictability.

The way in which the various categories of error can affect comparison with experimental data and the steps […]
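The coverage argument behind the Figure 3 discussion is easy to state as a computation. The sketch below uses invented means and 90-percent half-widths (not the actual data plotted in the figure) together with the accepted value of the speed of light; well-assessed 90-percent error bars should cover the accepted value about 90 percent of the time:

```python
import numpy as np

# Hypothetical measurement record: reported means and 90% error-bar half-widths.
# (Illustrative numbers only -- not the data plotted in Figure 3.)
true_value = 299_792.458                      # accepted speed of light, km/s
means = np.array([299_850.0, 299_790.0, 299_774.0, 299_792.5, 299_710.0])
half_widths = np.array([30.0, 15.0, 10.0, 5.0, 25.0])   # stated 90% intervals

covered = np.abs(means - true_value) <= half_widths
print(f"coverage: {covered.mean():.0%} (well-assessed 90% bars would cover ~90%)")
```

For this invented record the printed coverage is 40 percent, far below the nominal 90 percent—precisely the kind of shortfall the figure exhibits.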
Decomposition of Errors. Our ability to predict any physical phenomenon is determined by the accuracy of our input data and our modeling approach. When the modeling input data are obtained by analysis of experiments, the experimental error and modeling error (solution error plus physics approximations) terms control the accuracy of our estimation of those data, and hence our ability to predict. Because a full uncertainty-quantification study is in itself a complex process, it is important to ensure that those errors whose size can be controlled—either by experimental technique or by modeling/simulation choices—are small enough to ensure that predictions of the phenomena of interest can be made with sufficient precision for the task at hand. This means that simpler techniques are often appropriate at the start of a study to ensure that we are operating with the required level of precision.

The discrepancy between simulation results and experimental data is illustrated in Figure 4, which shows the way in which this discrepancy can be related to measurement errors and solution errors. Note that the experimental conditions are also subject to uncertainties. This means that the observed value may be associated with a slightly different condition than the one for which the experiment was designed, as shown in Figure 4.

The three steps below could serve as an initial, deterministic assessment of the discrepancy between simulation and experiment.

Step 1. Compare Simulated and Experimental Results. The size of the measurement error will obviously affect the conclusions drawn from the comparison. Those conclusions can also be affected by the degree of knowledge of the actual as opposed to the designed experimental conditions. For example, the as-built composition of the physical parts of the system under investigation may differ slightly from the original design. The effects of both of these errors are typically reported together, but they are explicitly separated here because error in the experimental conditions affects the simulated result, as well as the measured result, as can be seen in Figure 4.

Step 2. Evaluate Solution Errors. If the error is a simple matter of numerical accuracy—for example, spatial or temporal resolution—then the error is a fixed, determinable number in principle. In other cases—for example, subgrid stochastic processes—the error may be knowable in only a statistical sense.

Step 3. Determine Impact on Predictability. If the discrepancy is large compared with the solution error and experimental uncertainty, then the model must be improved. If not, the model may be correct, but in either case, the data can be used to define a range of modeling parameters that is consistent with the observations. If that range leads to an uncertainty in prediction that is too large for the decision being taken, the experimental errors or solution errors must be reduced.

A significant discrepancy in step 1 indicates the presence of errors in the simulation and/or experiment, and steps 2 and 3 are necessary, but not sufficient. […]

[…] encountered in full system operation. Nevertheless, because the need to predict integral quantities motivates the development and use of simulation, a crucial test of the "correctness" of a simulation is that it consistently and accurately matches all available data.

Statistical Prediction

A major challenge of statistical prediction is assessing the uncertainty in a predicted result. Given a simulation model, this problem reduces to the propagation of errors from the simulation input to the simulated result. One major problem in examining the impact of uncertainties in input data on simulation results is the "curse of dimensionality." If the problem is described by a large number of input parameters and the response surface is anything other than a smooth quasilinear function of the input variables, computing the shape of the response surface can be intractable even with large parallel machines. For example, if we have identified 8 critical parameters in a specific problem and can afford to run 1 million simulations, we can resolve the response surface to an accuracy of fewer than 7 equally spaced points per axis. Various methods exist to assess the […] that capture most of the variability. The principle that underlies many of these techniques is that, for a complex engineering system to be reliable, it should not depend sensitively on the values of, for example, 10^4 or more parameters. This is as true for a weapon system that is required to operate reliably as it is for an oil field that is developed with billions of dollars of investment funds.

Statistical Inference—The Bayesian Framework. The Bayesian framework for statistical inference provides a systematic procedure for updating current knowledge of a system on the basis of new information. In engineering and natural science applications, we represent the system by a simulation model m, which is intended to be a complete specification of all information needed to solve a given problem. Thus m includes the governing evolution equations (typically, partial differential equations) for the physical model, initial and boundary conditions, and various model parameters, but it would not generally include the parameters used to specify the numerical solution procedure itself. Any or all of the information in m may be uncertain to some degree. To represent the uncertainty that may be present in the initial specification […]
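The resolution arithmetic quoted above for the curse of dimensionality can be checked directly: the figure of "fewer than 7 equally spaced points per axis" is just the eighth root of one million:

```python
# One million runs spread over a grid in 8 parameters: points per axis.
runs, n_params = 1_000_000, 8
print(f"{runs ** (1 / n_params):.2f} points per axis")  # ~5.62, fewer than 7
```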
Figure 5. Bayesian Framework for Predicting System Performance with Relevant Uncertainties
Multiple simulations are performed using the full physical range of parameters. The discrepancies between the observation and the simulated values are used in a statistical inference procedure to update estimates of modeling and input uncertainties. The update involves computing the likelihood of the model parameters by using Bayes' theorem. The likelihood is computed from a probability model for the discrepancy, taking into account the measurement errors (shown schematically by the green dotted lines) and the solution errors (blue dotted lines). The updated parameter values are then used to predict system performance, and a decision is taken on whether the accuracy of the predictions is adequate.

It is important to realize that the Bayesian procedure does not determine the choice of p(m). Thus, in using Bayesian analysis, one must supply the prior from an independent data source or a more fundamental theory, or otherwise, one must use a noninformative "flat" prior.

The factor p(O|m) in Equation (1) is called the likelihood. The likelihood is the (unnormalized) conditional probability for the observation O, given the model m. In the cases of interest here, model predictions are determined by solutions s(m) of the governing equations. The simulated observables are functionals O(s(m)) of s(m). If both the experimentally measured observables O and the solution s(m), hence O(s(m)), are exact, the likelihood p(O|m) is a delta function […]

[…] measurements. The likelihood is defined by assigning probabilities to solution and/or measurement errors of different sizes. The required probability models for both types of errors must be supplied by an independent analysis.

This discussion shows that the role of the likelihood in simulation-based prediction is to assign a weight to a model m based on a probabilistic measure of the quality of the fit of the model predictions to data. Probability models for solution and measurement errors play a similar role in determining the likelihood. This point is so fundamental and sufficiently removed from common approaches to error analysis that we repeat it for emphasis: Numerical and observation errors are the leading […]
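Equation (1) itself is not reproduced in this extract, but from the quantities defined here—the prior p(m) and the likelihood p(O|m)—it is presumably Bayes' theorem for the updated (posterior) probability of a model m given the observation O:

```latex
p(m \mid O) \;=\; \frac{p(O \mid m)\, p(m)}{p(O)} \;\propto\; p(O \mid m)\, p(m). \tag{1}
```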
[…] parameters vary within an interval (known exactly), but that the distribution of possible values of the parameter within the interval is not known even in a probabilistic sense. This method yields error bars but not confidence intervals.

An illustration of the Bayesian framework we follow to compute the impact of solution error and experimental uncertainty is shown in Figure 5. Multiple simulations are performed with the full physical range of parameters. The discrepancies (between simulation and observation) are used in a statistical inference procedure to update estimates of modeling and input uncertainties. These updated values are then used to predict system performance, and a decision is taken on whether the accuracy of the predictions is adequate.

Combining Information from Diverse Sources

Bayesian inference can be extended to include multiple sources of information about the details of a physical process that is being simulated (Gaver 1992). This information may come from "off-line" experiments on separate components of the simulation model m, expert judgment, measurements of the actual physical process being simulated, and measurements of a physical process that is related, but not identical, to the process being simulated. Such information can be incorporated into the inference process by using Bayesian hierarchical models, which can account for the nature and strength of these various sources of information. This capability is very important since data directly bearing on the process being modeled is often in short supply and expensive to acquire. Therefore, it is essential to make full use of all possible sources of information—even those that provide only indirect information.

In principle, an analysis can utilize any experimental data that can be compared with some part of the output of a simulation. To understand this point, let us make the simple and often useful assumption that the family of possible models M can be indexed by a set of parameters θ. In this case, the somewhat abstract specification of the prior as a probability distribution p(m) on models can be thought of simply as a probability distribution p(θ) on the parameters θ. Depending on the application, θ may include parameters that describe the physical properties of a system, such as its equation of state, or that specify the initial and boundary conditions for the system, to mention just a few examples. In any of these cases, uncertainty in θ affects prediction uncertainty. Typically, different data sources will give information about different parameters.

Multiple sources of experimental data can be included in a Bayesian analysis by generalizing the likelihood term. If, for example, the experimental observations O decompose into three components (O1, O2, O3), the likelihood can be written as

p(O|θ) = p(O1|m1(θ1)) p(O2|m2(θ2)) p(O3|m3(θ3)) ,   (3)

if we assume that each component of the data gives information about an independent parameter θi. The subscripts on the models are there to remind us that, although the same simulation model is used for each of the likelihood components, different subroutines within the simulation code are likely to be used to simulate the different components of the output. This means that each of the likelihood terms will have its own solution error, as well as its own observation error. The relative sizes of these errors greatly affect how these various data sources constrain θ. For example, if it is known that m2(θ) does not reliably simulate O2, then the likelihood should reflect this fact. Note that a danger here is that a misspecification of a likelihood term may give some data sources undue influence in constraining possible values of one of the parameters θ.

In some cases, one or more components of the observed data are not from the actual physical system of interest, but from a related system. In such cases, Bayesian hierarchical models can be used to borrow strength from that data by specifying a prior model that incorporates information from the different systems. See Johnson et al. (2003) for an example.

Finally, expert judgment usually plays a significant role in the formulation and use of models of complex phenomena—whether or not the models are probabilistic. Sometimes, expert judgment is exercised in an indirect way, through selection of a likelihood model or through the choice of the data sources to be included in an analysis. Expert judgment is also used to help with the choice of the probability distribution for p(θ), or to constrain the range of possible outcomes in an experiment, and such information is often invoked in applications for which experimental or observational data are scarce or nonexistent. However, the use of expert judgment is fraught with its own set of difficulties. For example, the choice of a prior can leave a strong "imprint" on results inferred from subsequent experiments. See Heuristics and Biases (2002) for enlightening discussions of this topic.
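A minimal numerical sketch of the factored likelihood of Equation (3) follows. The three "models" m1, m2, m3, the observations, and the error scales below are invented placeholders (not quantities from this article), the likelihood terms are taken to be Gaussian, and the prior on θ is flat:

```python
import numpy as np

# Invented stand-ins for the code outputs m_i(theta) for each data source.
def m1(theta): return theta            # e.g., a directly observed property
def m2(theta): return theta ** 2       # e.g., a derived, integral quantity
def m3(theta): return np.sin(theta)    # e.g., output of another subroutine

observations = [1.1, 1.3, 0.85]        # O1, O2, O3 (illustrative numbers)
sigmas = [0.1, 0.3, 0.05]              # combined solution + measurement error
                                       # scales, supplied by independent analysis

def log_likelihood(theta):
    # One Gaussian term per data source, as in the factored Equation (3).
    return sum(
        -0.5 * ((obs - m(theta)) / s) ** 2
        for obs, m, s in zip(observations, [m1, m2, m3], sigmas)
    )

thetas = np.linspace(0.5, 1.5, 101)
weights = np.exp([log_likelihood(t) for t in thetas])  # flat prior on theta
weights /= weights.sum()
print("posterior mean of theta:", float(np.dot(thetas, weights)))
```

Note how the source with the smallest combined error scale (here the third) dominates the constraint on θ, echoing the point above that the relative sizes of the per-source errors determine how strongly each data source constrains the parameters.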
[Figure 14 diagram: an incident shock at S = 0.25 with Ms(v) = 32.6986 and vs(v) = 1.33625; state behind the shock ρ(v) = 3.97398, P(v) = 1.33725, with v drawn from U[0.9, 1.1] about v̄ = 1; gases ahead at rest (v = 0) with ρ = 1, P = 10^-3, and γ = 5/3; contact location C drawn from U[0.9, 1.1] about C̄ = 1; open boundary at the left, reflecting wall at the right; 0 ≤ x ≤ 2.5, 0 ≤ t ≤ 3.5.]

Figure 14. Initial Data for a 1-D Shock-Tube Refraction Problem
This schematic diagram is for the initial data used to conduct an ensemble of simulations of a 1-D shock tube refraction. Each simulation consisted of a shock wave incident from the left on a contact discontinuity between gases at the indicated pressures and densities. Each realization from the ensemble is obtained by selecting a shock strength consistent with a velocity v behind the incident shock taken from a 10% uniform distribution about the mean value v̄ = 1, and an initial contact location C chosen from a 10% uniform distribution about the mean position C̄ = 1. In the diagram, S is the shock position, Ms is the shock strength, and vs is the velocity of the shock. The initial state behind the shock is set by using the Rankine-Hugoniot conditions for the realization shock strength and the specified state ahead of the shock.

[…] characteristics is described probabilistically. Of course, one will often want to make as refined an error analysis as possible within a given realization from the ensemble (that is, a deterministic error analysis), but there are powerful reasons for a probabilistic analysis to be needed as well. First, you need probability to describe features of a problem that are too complex for feasible deterministic analysis. Thus, fine details of error generation in complex flows are modeled as random, just as are some details of the operation of measuring instruments. Second, a sensitivity analysis is needed to determine the robustness of the conclusions of a deterministic error analysis to parameter variation. To get an accurate picture, one needs to do sensitivity analysis probabilistically, to answer the question of how likely the parameter variations are that lead to computed changes in the errors. Third, to be a useful tool, the error model must be applicable to a reasonable range of conditions and problems. The only way we are aware of for achieving these goals is to base the error model on a study of an ensemble of problems that reflects the degree of variability one expects to encounter in practice. Of course, the choice of such an ensemble reflects scientific judgment and is an ongoing part of our effort.

Now, let us return to the analysis of solution errors in elementary wave interactions. Our work was motivated by a study of a shock-contact interaction—refer to event 1 in Figure 13. The basic setup is shown in Figure 14, which illustrates a classic shock-tube experiment. An ensemble of problems was generated by sampling from uniform probability distributions (±10 percent about nominal values) for the initial shock strength and the contact position. The solution errors were analyzed by computing the difference between coarse-to-moderate grid solutions and a very fine grid solution (1000 cells). Error statistics are shown in Figure 15 for a 100-cell grid (moderate grid) solution. Two facts about these solution errors are apparent. First, the solution errors follow the same pattern as the solution (the shock waves) itself; they are concentrated along the wave fronts, where steep gradients in the solution occur. Second, errors are generated at the location of wave interactions. The error generated by the interaction increments the error in the outgoing waves, which is inherited from errors in the incoming waves.

Comparable studies have been carried out for each of the types of wave interaction shown in Figure 13, as well as corresponding wave interactions that occur in spherical implosions or explosions (Dutta et al. 2004). An analysis of statistical ensembles of such interactions has led us to suggest the following scheme for estimating the solution errors. The key steps are (a) identification of the main wave fronts in a flow, (b) determination of the times and locations of wave interactions, and (c) approximate evaluation of the errors generated during the interactions. Wave fronts are most simply identified as regions of large flow gradients, and the distributions of the wave positions and velocities are found by solving Riemann problems whose data are taken from ensembles of state information near the detected wave fronts. The error generated during an interaction is fit by a linear expression in the uncertainties of the incoming wave's strength. The coefficients are computed using a least-squares fit to the distribution of outgoing wave strengths. This fitting procedure can be thought of as defining an input/output relation between errors in incoming and outgoing waves.

A linear relation of this kind, which amounts to treating the errors perturbatively, holds even for strong, and hence nonlinear, wave interactions. But there are limitations. Linearity works if the errors in the incoming waves are not too large, but it may break down for larger errors. In the latter case, higher order (for example, bilinear or rational) terms in the expansion may be needed. See Glimm et al. (2003) for details.
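The ensemble construction described in the Figure 14 caption can be sketched as follows, assuming an ideal-gas γ-law fluid and the textbook Rankine-Hugoniot jump conditions; for v near 1, the computed shock parameters come out close to the nominal values quoted in the figure:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 5.0 / 3.0
rho1, p1 = 1.0, 1.0e-3                 # quiescent state ahead of the shock
c1 = np.sqrt(gamma * p1 / rho1)        # ambient sound speed

def post_shock_state(v):
    """Rankine-Hugoniot state behind a shock driving gas (initially at rest) to velocity v."""
    # Invert v = (2*c1/(gamma+1)) * (M - 1/M) for the shock Mach number M.
    a = (gamma + 1.0) * v / (2.0 * c1)
    M = 0.5 * (a + np.sqrt(a * a + 4.0))
    rho2 = rho1 * (gamma + 1.0) * M**2 / ((gamma - 1.0) * M**2 + 2.0)
    p2 = p1 * (2.0 * gamma * M**2 - (gamma - 1.0)) / (gamma + 1.0)
    return M, rho2, p2, M * c1         # Mach number, density, pressure, shock speed

# One realization per sample: v and the contact location C drawn from
# 10% uniform distributions about their means, as in the Figure 14 caption.
for v, C in zip(rng.uniform(0.9, 1.1, size=3), rng.uniform(0.9, 1.1, size=3)):
    Ms, rho2, p2, vs = post_shock_state(v)
    print(f"v={v:.3f}  C={C:.3f}  Ms={Ms:.3f}  rho={rho2:.4f}  P={p2:.4f}  vs={vs:.4f}")
```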
We can now explain how the composition law for solution errors actually works. The basic idea is that errors are introduced into the problem by two mechanisms: input errors that are present in waves that initiate the sequence of wave interactions—see the incoming waves for event 1 in Figure 13—and errors generated at each interaction site. However they are introduced, errors advect with the flow and are transferred at each interaction site by computable relations. Generally, waves arrive at a given space-time point by more than one path. Referring again to Figure 13, suppose you want to find the errors in the output waves for event 3, where the shock reflected off the wall reshocks the contact. On path A, the error propagates directly from the output of interaction 1 along the path of the contact, where it forms part of the input error for event 3. On path B, the output error in the transmitted shock from event 1 follows the transmitted shock to the wall, where it is reflected and then re-crosses the contact. In this way, the error coming into event 3 is given as a sum of terms, with each term labeled by a sequence of wave interactions and of waves connecting these interactions. Moreover, each term can be computed on the basis of elementary wave interactions and does not require the full solution of the numerical problem. The final step in the process is to compute the errors in the output waves at event 3, by using the input/output relations developed for this type of wave interaction.

This procedure represents a substantial reduction in the difficulty of the error analysis problem, and we must ask whether it actually works. Full validation requires use in practice, of course. As a first validation step, we compute the error in two ways. First, we compute the error directly by comparing very fine and coarse-grid simulations for an entire wave pattern. Results are shown in Figure 15. Second, we compute the error using the composition law procedure shown in Figure 13. Comparing the errors computed in these two ways provides the basis for validation.

In Glimm et al. (2003) and Dutta et al. (2004), we carried out such validation studies for planar and spherical shock-wave reverberation problems. As an example, for events 1 to 3 in the planar problem in Figure 13, we considered three grid levels, the finest (5000 cells) defining the fiducial solution, and the other two representing "resolved" (500 cells) and "under-resolved" (100 cells) solutions for this problem. We introduced a 10 percent initial input uncertainty to define the ensemble of problems to be examined. The results can be summarized briefly as follows. For the resolved case, the composition law gave accurate results for the errors (as determined by direct fine-to-coarse grid comparisons) in all cases: wave strength, wave width, and wave position errors. This was not the case for the under-resolved simulation. Although the composition law gave good results for wave strength and wave width errors, it gave poor results for wave position errors. The nature of these results can be understood in terms of a breakdown in some of the modeling assumptions used in the analysis.

An interesting point of contrast emerged between the planar and spherical cases. For the planar case, the dominant source of error was from initial uncertainty, while for the spherical symmetry case, the dominant source of error arose in the simulation itself, and especially from shock […]
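The bookkeeping performed by the composition law for events 1 through 3 can be sketched with placeholder numbers. The 2×2 matrices below stand in for the least-squares input/output relations, which in practice are fitted to interaction ensembles as described above; none of the coefficients here are fitted values:

```python
import numpy as np

# Invented placeholder coefficients -- in the article these come from
# least-squares fits to ensembles of elementary wave interactions.
A1 = np.array([[0.8, 0.1],
               [0.2, 0.9]])          # event 1 input/output relation
r_wall = 0.95                        # error transfer factor at the wall reflection
A3 = np.array([[0.7, 0.3],
               [0.1, 0.6]])          # event 3 (reshock) input/output relation

e_in = np.array([0.05, 0.02])        # errors carried by the waves entering event 1
e_shock, e_contact = A1 @ e_in       # errors leaving event 1

# Event 3 receives one error term per path: path A follows the contact,
# path B follows the transmitted shock through the wall reflection.
e3_in = np.array([r_wall * e_shock,  # path B: reflected-shock error
                  e_contact])        # path A: contact error
print("estimated output-wave errors at event 3:", A3 @ e3_in)
```

The key property the sketch illustrates is that each contribution to the event 3 error is computed from elementary interaction relations alone, without re-solving the full numerical problem.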